A new inference technique based on deep learning neural networks leverages differences in light and shadow to create highly detailed 3D renderings from 2D photos. In addition to making facial mapping less demanding and costly, this technology has immediate applications in VR, AR, and even law enforcement.
Central to the JFK Assassination was the infamous Backyard Photo of Lee Harvey Oswald holding a Marxist newspaper and the same rifle model purportedly used to shoot the president.
Oswald himself protested that the photo was a fake concocted to frame him. Based on alleged lighting and shadow inconsistencies in the photo, generations of conspiracy theorists have agreed.
However, science recently debunked this theory using 3D imaging.
Now, using a data-driven inference method based on deep learning neural networks, science can go a step further and actually create 3D renderings of faces from a single (and even blurry) 2D image.
What’s Wrong With Conventional Facial Mapping?
Accurate and detailed facial mapping is complicated. It is a painstaking and costly endeavor that requires multiple shots from multiple angles in consistent lighting.
In other words, mapping someone’s face using conventional techniques requires documenting every detail of the face. These images must then be compiled to create a 3D rendering.
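Those multiple angles matter because a single photo discards depth. A hedged sketch of the underlying idea, using the standard rectified-stereo relation depth = focal length × baseline / disparity (the numbers below are illustrative assumptions, not values from the USC work):

```python
import numpy as np

# Why conventional facial mapping needs multiple shots: one image loses
# depth, but two calibrated views recover it by triangulation.
# Rectified stereo model: Z = f * B / disparity.
f = 800.0    # focal length in pixels (assumed)
B = 0.1      # baseline between the two camera positions, meters (assumed)
Z_true = 2.0 # true depth of a facial landmark, meters (assumed)

disparity = f * B / Z_true   # pixel shift of the landmark between the shots
Z_est = f * B / disparity    # depth recovered from the image pair

print(Z_est)  # → 2.0
```

Scaling this up to a full face means repeating that recovery for every visible point, from many viewpoints, which is exactly what makes the conventional pipeline painstaking.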
How Deep Neural Networks Literally Fill in the Gaps
Like the Backyard Photo conspiracists, this new technique pays special attention to the light and shadows contained in a given image. From them, it generates a comprehensive 3D rendering.
A team of researchers from the USC Institute for Creative Technologies has developed an inference technique based on deep neural networks that evaluates the differences in light and shadow in a 2D image to create highly detailed 3D renderings.
Using a database of faces, the team trained deep neural networks to compare, refine and optimize specific facial features down to details like stubble and pores.
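The intuition behind reading shape from light and shadow can be seen in a toy shape-from-shading calculation. This is a minimal illustrative sketch, not the USC team's method (which uses trained deep networks rather than a closed-form model): for a 1D Lambertian surface z(x) lit from directly above, the observed intensity is I = 1/sqrt(1 + z'(x)^2), so under a monotonicity assumption the slope, and hence the shape, can be recovered from the shading alone.

```python
import numpy as np

# Toy 1D shape-from-shading: recover a surface profile from its shading.
x = np.linspace(0.0, 1.0, 200)
z_true = 0.5 * x**2          # ground-truth surface; slope z'(x) = x >= 0

slope_true = x
intensity = 1.0 / np.sqrt(1.0 + slope_true**2)   # rendered Lambertian shading

# Invert the shading model: |z'| = sqrt(1/I^2 - 1); assume z' >= 0.
slope_est = np.sqrt(np.maximum(1.0 / intensity**2 - 1.0, 0.0))

# Integrate the recovered slope (trapezoid rule) to rebuild the surface.
dx = x[1] - x[0]
z_est = np.concatenate(
    [[0.0], np.cumsum(0.5 * (slope_est[1:] + slope_est[:-1]) * dx)]
)

print(float(np.max(np.abs(z_est - z_true))))  # near-zero reconstruction error
```

A real face is far harder: lighting, albedo, and geometry are all unknown at once, which is why the team trains networks on a face database instead of inverting a fixed formula.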
The team's network not only handles high-resolution textures but can also produce them, something the team boasts is currently not possible with other deep neural networks.
Moreover, this technique is capable of generating 3D facial representations in never-before-seen detail, even from blurry or out-of-focus images.
Worth a Thousand Applications
The team’s inference technique based on deep neural networks has immediate applications in a variety of fields.
Most immediately, it could be used to generate more realistic and more compelling avatars for VR and AR, games, and movies.
Anthropology is another discipline that could make immediate use of this technology, in combination with functional magnetic resonance imaging, for example. The USC team's technique could help produce more realistic and more comprehensive renderings of human remains and other artifacts, or help create accurate renderings of subterranean finds yet to be excavated.
Finally, we’re all familiar with police sketches of suspects. Law enforcement artists do incredible work. The USC team’s innovation could supplement artists’ sketches by generating the most detailed representation of a suspect possible.