If there is one thing technology has given us over time, it is increasingly powerful cameras capable of recording images of remarkable color and sharpness.
This has made cameras a very useful resource, not only for capturing people and landscapes, but also for experiments and projects such as generating a 3D model of a person, animating a 2D avatar, or generating an invisible keyboard from a smartphone.
Along these lines, a project focused on recorded video was recently announced in which it is possible to move around as if we were in a virtual reality experience, including the ability to change position and view the same video from a different perspective.
How is this possible? Thanks to a total of 46 cameras attached to a huge ring, which capture the scenes under a new system developed by a group of Google researchers.
This effort has resulted in a new technique from Google called Immersive Light Field Video, presented at SIGGRAPH 2020, a conference organized by an international non-profit organization to showcase the best work in computer graphics and interactive techniques.
Thanks to this system it is possible to obtain fully immersive scenes; one of the examples even recalls the iconic scene from the movie Blade Runner.
A 3D representation
Although it may seem like a complicated process, achieving an immersive scene is relatively easy; we can replicate a basic version of it even with a smartphone.
To do so, we only need multiple cameras pointed at a specific place or, alternatively, a single camera capable of 360-degree capture.
Once the capture is done, the next step is to stitch the images together, overlapping them to obtain a single image that can then be manipulated to adjust the angle and obtain different views.
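As a rough illustration of this DIY approach, the sketch below stitches a handful of overlapping photos into a single panorama using OpenCV's high-level Stitcher. The file names are placeholders; any set of overlapping shots of the same scene will do.

```python
import cv2

# Load a set of overlapping photos of the same place (placeholder file names).
images = [cv2.imread(name) for name in ("shot1.jpg", "shot2.jpg", "shot3.jpg")]

# OpenCV's stitcher matches features across the images, estimates the
# camera motion, and blends everything into one panorama.
stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status code {status}")
```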
In the case of Google's cameras, things work differently, starting with the fact that they do not capture 360 degrees but 180 degrees.
The captured images are then not simply superimposed to create the effect of a single image; instead, they are processed by DeepView, an AI-based view-synthesis algorithm, which assembles the whole scene in a 3D environment. In the end, though, what we see is still the original video, just rendered as a 3D representation.
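To make the idea of assembling a scene from processed views more concrete, here is a minimal sketch of the kind of layered representation DeepView-style pipelines produce: a stack of semi-transparent RGBA planes composited back to front with the standard "over" operator. The random layer data is a placeholder, not real DeepView output, and this is a simplification of Google's actual layered pipeline.

```python
import numpy as np

def composite_layers(layers):
    """layers: array of shape (D, H, W, 4), ordered back to front, RGBA in [0, 1]."""
    out = np.zeros(layers.shape[1:3] + (3,))
    for rgba in layers:  # composite each layer over the accumulated image
        rgb, alpha = rgba[..., :3], rgba[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # the "over" operator
    return out

demo = np.random.rand(8, 4, 4, 4)  # 8 placeholder layers of 4x4 pixels
image = composite_layers(demo)
print(image.shape)  # (4, 4, 3)
```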
How could this be possible?
The key lies in the precision with which the technique is executed, which makes the trick go unnoticed by our eyes.
It all comes down to the careful handling of light and detail: the system can, for example, reproduce reflections on polished surfaces or in water.
Likewise, the system has no problem recreating the movement of loose objects or clothing.
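Reflections like these are view-dependent effects, and a common way light-field renderers approximate them is to blend the captured camera images with weights that favor the cameras best aligned with the requested viewing direction. The sketch below shows one such angular weighting scheme; the exponential falloff and the `sigma` value are illustrative assumptions, not details taken from Google's paper.

```python
import numpy as np

def blending_weights(novel_dir, cam_dirs, sigma=0.1):
    """novel_dir: unit 3-vector; cam_dirs: (N, 3) unit vectors, one per camera."""
    cos_sim = cam_dirs @ novel_dir          # alignment with the novel view
    w = np.exp((cos_sim - 1.0) / sigma)     # near 1 for aligned cameras, decays fast
    return w / w.sum()

# Example: three cameras; the novel view matches camera 0 exactly,
# so camera 0 should receive most of the blending weight.
cams = np.array([[0.0, 0.0, 1.0],
                 [0.1, 0.0, 0.995],
                 [0.0, 0.3, 0.954]])
cams /= np.linalg.norm(cams, axis=1, keepdims=True)
print(blending_weights(np.array([0.0, 0.0, 1.0]), cams))
```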
On the study's official website you can see several examples that demonstrate how the system works.
Although the videos are designed to be experienced in virtual reality, they can also be viewed in a browser, with Google Chrome being the appropriate option, but only after enabling the browser's experimental features through chrome://flags.