I recently talked on IGTV about an application that lets us give photos a three-dimensional effect: LucidPix. Although the effect is interesting, it does not come close to what I am going to cover today.
A group of Facebook researchers has been working on a system that lets people create three-dimensional photos in seconds, right from their phones.
It is a system for creating and viewing three-dimensional photos that will be presented in detail at SIGGRAPH 2020. The conference, which takes place virtually this year starting August 17, brings together a diverse network of professionals working in computer graphics and interactive techniques.
The 2-D-to-3-D photographic technique has been available as a photo feature on Facebook since late 2018. Until now, Facebook users had to capture photos with a phone equipped with a dual-lens camera to use it. The Facebook team has now added an algorithm that automates depth estimation from a single 2-D input image, so the technique can run directly on any mobile device, extending the method beyond the Facebook app and removing the dual-lens camera requirement.
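To give an idea of what single-image depth estimation looks like in practice, here is a minimal sketch. It does not use Facebook's network, which is not shown in the article; instead it relies on MiDaS, an openly available monocular depth model published on PyTorch Hub, as a stand-in, and `photo.jpg` is just a placeholder path.

```python
# Illustrative sketch only: estimate a depth map from a single 2-D photo
# using MiDaS (a publicly available monocular depth model), standing in
# for the kind of depth-estimation step described in the article.
import cv2
import torch

# Load a small pretrained monocular depth model from PyTorch Hub
# (assumption: internet access is available on the first run).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# Read the photo and convert it from OpenCV's BGR order to RGB.
img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img))
    # Resize the predicted depth back to the original photo resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

print(depth.shape)  # one (relative) depth value per pixel of the input photo
```

The output is a relative depth map, one value per pixel, which is exactly the kind of intermediate data a 3D-photo pipeline needs before it can move pixels around to simulate a new viewpoint.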
Johannes Kopf, a research scientist at Facebook and lead author of the work, says the goal of the system is 3D photography that makes photos feel much more alive and real.
Users will be able to access the new technology from their own mobile device and watch a 2D input image be converted to 3D in real time, with no photography skills required.
To achieve this, they explain:
[…] trained a convolutional neural network (CNN) on millions of pairs of public 3-D images and their accompanying depth maps, using mobile-optimization techniques developed by Facebook AI. The framework also incorporates inpainting of the texture and geometry captured from the 2-D input image to convert it to 3-D, resulting in images that are more lively and animated. Every automated step that converts a user’s 2-D photo, directly from their mobile device, is optimized to run on a wide variety of device makes and models and to work within the limited memory and data-transfer capabilities of the device.
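The quote mentions two ideas worth unpacking: pixels are moved according to their depth to simulate a new viewpoint, and the holes that open up behind foreground objects have to be filled in (inpainted). The toy sketch below is not Facebook's pipeline; it is a deliberately simplified illustration, assuming an RGB image and a depth map (larger values meaning closer pixels, such as the MiDaS output above) and using a crude nearest-pixel fill where a real system would run inpainting.

```python
# Toy sketch: fake a small horizontal change of viewpoint by shifting each
# pixel in proportion to how close it is, then crudely fill the resulting
# holes. A real 3D-photo system inpaints texture and geometry instead.
import numpy as np

def parallax_view(image: np.ndarray, depth: np.ndarray, max_shift: float = 15.0) -> np.ndarray:
    """image: HxWx3 uint8, depth: HxW with larger values meaning closer pixels."""
    h, w = depth.shape
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)  # normalise to [0, 1]
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        order = np.argsort(d[y])                           # warp far pixels first...
        new_x = np.clip((xs + d[y] * max_shift).astype(int), 0, w - 1)
        out[y, new_x[order]] = image[y, xs[order]]         # ...so near pixels overwrite them
        filled[y, new_x] = True
        for x in range(1, w):                              # crude hole fill standing in
            if not filled[y, x]:                           # for the real inpainting step
                out[y, x] = out[y, x - 1]
    return out
```

Rendering several such views with slightly different shifts and playing them back in sequence produces the parallax "wiggle" that makes a flat photo read as three-dimensional.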
They are now investigating machine learning methods that enable high-quality depth estimation for videos captured with mobile devices, so things are just getting started.
In the meantime, we can continue playing with LucidPix, as I show you here: