With the help of newly developed laser technology, driverless vehicles could soon be able to see objects hidden around corners
Researchers from Stanford University have reportedly developed a new laser technology that could enable future autonomous cars to see objects that are not directly visible.
Most driverless vehicles manufactured today use light detection and ranging (LIDAR) technology to understand their surroundings. LIDAR works by analyzing the light that bounces off nearby objects to determine their location and visualize their shape.
LIDAR is conceptually similar to the echolocation used by animals such as bats, dolphins, and whales to hunt for food and detect threats.
Autonomous cars use the information gathered by LIDAR to distinguish objects in their path, such as other vehicles, road markings, and pedestrians. The technology's main limitation is that it cannot visualize objects outside its direct line of sight.
This means that if an obstruction lies around a corner, LIDAR has no way to detect it and warn the vehicle of an impending collision. With this in mind, the Stanford researchers developed a highly sensitive laser technology that can see objects around corners.
Stanford’s New Laser Technology
In their study, published in the journal Nature, the researchers described how they improved non-line-of-sight (NLOS) imaging to develop their laser technology.
NLOS imaging works by reconstructing the shape and albedo of hidden objects from scattered light.
This is a far cry from LIDAR, which visualizes the shape of objects from measurements of direct reflections. However, NLOS imaging has limitations of its own, including the prohibitive memory and processing requirements of its reconstruction algorithms.
The team reportedly used a confocal scanning procedure to address this problem.
“Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem,” the researchers wrote in their paper.
“This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution.”
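The practical payoff of the light-cone transform is that, after a change of variables, the confocal measurements relate to the hidden scene through a shift-invariant convolution, which can be inverted with a single frequency-domain deconvolution. The toy sketch below is not the authors' code: it is a 1-D stand-in for their 3-D problem, with a made-up Gaussian blur kernel, illustrating that core inversion step with a Wiener filter.

```python
import numpy as np

def wiener_deconvolve(measurement, kernel, snr=1e2):
    # Frequency-domain Wiener deconvolution. The light-cone transform
    # reduces confocal NLOS reconstruction to a shift-invariant inverse
    # problem of this kind (shown here in 1-D for clarity).
    H = np.fft.fft(kernel, n=len(measurement))
    M = np.fft.fft(measurement)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft(G * M))

# Toy "hidden scene": two reflectors along one spatial axis.
scene = np.zeros(256)
scene[60], scene[180] = 1.0, 0.5

# Made-up blur standing in for indirect light transport.
x = np.arange(256)
kernel_centered = np.exp(-0.5 * ((x - 128) / 6.0) ** 2)
kernel_centered /= kernel_centered.sum()
kernel = np.fft.ifftshift(kernel_centered)  # peak at index 0 for circular conv

# Simulate the blurred measurement, then invert it.
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(kernel)))
recovered = wiener_deconvolve(blurred, kernel)
print(int(np.argmax(recovered)))  # → 60, the brighter reflector
```

In the actual method, a deconvolution of this kind is applied to a resampled 3-D volume of photon arrival times, which is what keeps the memory and compute costs so low.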
The Stanford group is not alone in developing methods to reconstruct images of hidden objects using laser technology. However, their work stands out for the efficiency of the algorithm they designed to process the final images produced by NLOS imaging.
“A substantial challenge in non-line-of-sight imaging is figuring out an efficient way to recover the 3-D structure of the hidden object from the noisy measurements,” David Lindell, a graduate student in the Stanford Computational Imaging Lab and co-author of the paper, was quoted as saying. “I think the big impact of this method is how computationally efficient it is.”
During their experiment, the researchers placed a laser next to a highly sensitive photon detector capable of recording every single particle of light. Then, they shot invisible pulses of laser light at a wall. The pulses reportedly bounced off objects around the corner and then bounced back to the wall and into the detector.
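The key raw measurement in this setup is each photon's arrival time, which pins down the total length of its three-bounce path. A minimal sketch of that time-of-flight arithmetic follows; the distances are made-up illustrative values, and the real system must recover direction and shape, not just a single distance.

```python
# Toy time-of-flight calculation for a three-bounce photon path:
# laser -> wall -> hidden object -> wall -> detector.
C = 299_792_458.0  # speed of light, m/s

def hidden_object_distance(arrival_time_s, laser_to_wall_m, wall_to_detector_m):
    """Distance from the wall to the hidden object, given the photon's
    total travel time and the two known direct legs of its path."""
    total_path = C * arrival_time_s
    # Subtract the known legs; the remainder is the round trip between
    # the wall point and the hidden object.
    round_trip = total_path - laser_to_wall_m - wall_to_detector_m
    return round_trip / 2.0

# Example: a photon fired at a wall 1.5 m away returns to a detector
# (also 1.5 m from the wall) 20 nanoseconds after emission.
d = hidden_object_distance(20e-9, 1.5, 1.5)
print(round(d, 3))  # → 1.498 (meters from the wall to the hidden object)
```

Repeating this for many laser positions on the wall yields the volume of timing data that the reconstruction algorithm then untangles.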
Once the scan was complete, the algorithm untangled the paths of the captured photons, resolving the initially blurry blob of light into a sharper image of the hidden object.
Depending on conditions such as the lighting and the reflectivity of the hidden objects, the scan itself can take anywhere from two minutes to an hour.

Now, with the algorithm the researchers developed, the reconstruction that follows the scan takes less than a second.
The algorithm is efficient enough to run on a regular laptop. The researchers are also confident that it can be sped up further, making the process almost instantaneous.
“We believe the computation algorithm is already ready for LIDAR systems,” said Matthew O’Toole, a postdoctoral scholar in the Stanford Computational Imaging Lab and co-lead author of the paper. “The key question is if the current hardware of LIDAR systems supports this type of imaging.”
Right now, the team is working on improving the system's ability to handle real-world conditions and to scan more quickly and efficiently. Before it can be deemed road-ready, the new laser technology will have to work better in daylight and with moving objects.
“This is a big step forward for our field that will hopefully benefit all of us,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford and senior author of the paper. “In the future, we want to make it even more practical in the ‘wild.'”