Nature 2026
We repurpose smartphone-grade consumer LiDAR for real-time, handheld, and practical non-line-of-sight imaging.
Abstract
LiDAR sensors are rapidly becoming ubiquitous in consumer technology, appearing in devices such as the Apple iPhone Pro, Apple Vision Pro, Waymo self-driving cars, home robots, and more. Once confined to specialized industrial systems, depth-sensing hardware is now embedded in everyday consumer platforms.
We show that these consumer LiDARs can do more than measure visible depth—they can also see hidden objects around corners. Using smartphone-grade hardware, we demonstrate non-line-of-sight (NLOS) 3D reconstruction, tracking, and camera localization.
This work transforms off-the-shelf consumer LiDAR into a plug-and-play NLOS imaging system, bringing NLOS imaging out of the lab and into the hands of everyday users. Democratizing this capability opens the door to a new generation of applications in robotics, mobile perception, AR, and beyond.
Consumer LiDAR sensors can measure the arrival time of light with a precision of hundreds of picoseconds, the time it takes light to travel just a few centimeters!
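To make that scale concrete, here is a quick back-of-the-envelope conversion from timing precision to round-trip depth precision (the speed of light is a physical constant; the example resolutions are illustrative, not the specification of any particular device):

```python
# Convert a LiDAR's timing resolution into round-trip depth resolution.
C = 299_792_458.0  # speed of light in m/s

def timing_to_depth_resolution(dt_seconds: float) -> float:
    """Depth uncertainty for a round-trip (emit-and-return) measurement."""
    return C * dt_seconds / 2.0  # the light covers the distance twice

for dt_ps in (1, 100, 1000):  # picoseconds
    dz_cm = timing_to_depth_resolution(dt_ps * 1e-12) * 100.0
    print(f"{dt_ps:5d} ps  ->  {dz_cm:7.3f} cm depth resolution")
# 1 ps -> ~0.015 cm, 100 ps -> ~1.5 cm, 1 ns -> ~15 cm
```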
The LiDAR illuminates a wall and light bounces in all directions.
Some light returns directly to the sensor. The time it takes for this light to return encodes the depth of the wall. This is the standard use case for LiDAR.
Some of the light will bounce around the corner, hit the hidden object, and return to the sensor. This light can be used to get the shape and position of the hidden object.
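To sketch the geometry behind this (a generic three-bounce time-of-flight model, not necessarily the exact reconstruction pipeline used in our paper): the arrival time of third-bounce light fixes the total path length sensor -> wall -> hidden object -> wall -> sensor. The two sensor-to-wall legs are known from the ordinary depth map, so the leftover length constrains the hidden object to an ellipsoid whose foci are the two wall points. Intersecting many such ellipsoids, the classic backprojection idea, localizes the hidden object.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def hidden_path_length(arrival_time_s, sensor_pos, illum_point, obs_point):
    """Path length left for the hidden legs of a three-bounce path.

    Total path: sensor -> illuminated wall point -> hidden object
                -> observed wall point -> sensor.
    The two sensor-to-wall legs are known from the visible depth map,
    so the remainder is |wall -> object| + |object -> wall|.
    """
    total = C * arrival_time_s
    leg_out = np.linalg.norm(illum_point - sensor_pos)
    leg_back = np.linalg.norm(obs_point - sensor_pos)
    return total - leg_out - leg_back

def consistent_with_timing(candidate, illum_point, obs_point, hidden_len, tol=0.01):
    """True if a candidate 3D point lies (within tol meters) on the
    ellipsoid with foci at the two wall points whose focal-distance
    sum equals the measured hidden-path length."""
    d = (np.linalg.norm(candidate - illum_point)
         + np.linalg.norm(candidate - obs_point))
    return abs(d - hidden_len) < tol
```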
Tracking a hidden object in real time (30 Hz). Useful for detecting motion around blind corners and for collision avoidance.
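As a rough sketch of what a 30 Hz tracking loop could look like (the per-frame position estimator is abstracted into an input stream, and the smoothing constant is an arbitrary illustrative choice, not a value from our system):

```python
import numpy as np

def track_hidden_object(per_frame_estimates, alpha=0.5):
    """Exponentially smooth noisy per-frame position estimates of a
    hidden object, e.g. produced at 30 Hz by locating the indirect
    (third-bounce) peak in each LiDAR transient measurement."""
    smoothed = None
    for raw in per_frame_estimates:        # one 3D estimate every ~33 ms
        raw = np.asarray(raw, dtype=float)
        smoothed = raw if smoothed is None else alpha * raw + (1.0 - alpha) * smoothed
        yield smoothed                     # feed to alerts / collision avoidance
```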
Consumer LiDAR unlocks a new class of non-line-of-sight sensing capabilities for safety, robotics, AR, and more without requiring specialized lab hardware.
Detect and track people or vehicles hidden around corners before they enter the field of view—critical for autonomous driving and robotics.
Use hidden scene geometry as passive landmarks to localize a moving camera in environments where GPS and visual odometry fail.
Recover the 3D shape of occluded objects for AR, search-and-rescue, and inspection. In AR, for example, consumer LiDAR could let a headset see the user's legs, which normally fall outside its direct field of view.
Even when an object is around a corner, tiny amounts of light still scatter off nearby surfaces like walls and floors. We showed that the LiDAR sensors already built into consumer devices can measure those faint reflections and use them to recover information about hidden objects.
Modern consumer LiDAR systems are surprisingly sophisticated. They can measure the arrival time of light with extremely high precision, down to tiny fractions of a nanosecond. That timing signal carries far more information about the world than a visible depth map alone, and we're beginning to unlock some of those capabilities computationally.
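To give a flavor of what that looks like computationally, here is a toy sketch that splits a photon arrival-time histogram into a direct (first-bounce) return and a later, weaker indirect return. The bin width and the peak-finding heuristic are assumptions chosen for illustration, not actual device parameters or our processing pipeline:

```python
import numpy as np

C = 299_792_458.0       # speed of light in m/s
BIN_WIDTH_S = 250e-12   # assumed histogram bin width (250 ps), illustrative

def split_direct_and_indirect(histogram):
    """Find the strongest (direct) return, then search later bins for a
    weaker secondary peak that could be light detouring around a corner."""
    hist = np.asarray(histogram, dtype=float)
    direct_bin = int(np.argmax(hist))
    wall_depth_m = C * direct_bin * BIN_WIDTH_S / 2.0  # standard LiDAR depth

    guard = 4  # skip a few bins so the direct pulse's tail isn't re-detected
    tail = hist[direct_bin + guard:]
    if tail.size == 0 or tail.max() <= 0.0:
        return wall_depth_m, None          # no indirect return found
    indirect_bin = direct_bin + guard + int(np.argmax(tail))
    extra_path_m = C * (indirect_bin - direct_bin) * BIN_WIDTH_S
    return wall_depth_m, extra_path_m      # extra distance the indirect light traveled
```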
Today’s phones weren’t designed specifically for this application, and there are still significant limitations in range, resolution, and robustness. But the core sensing hardware already exists in many consumer devices, which means future systems could build on that foundation with little to no hardware modification.
Technologies like cameras, GPS, and depth sensing all started as specialized research systems before becoming part of everyday life. Around-the-corner sensing could eventually improve safety, accessibility, robotics, wearable computing, and how devices understand the spaces around us. What’s exciting is that these capabilities may ultimately emerge on low-cost devices people already use every day.
We’re still early, but this capability has already moved from specialized laboratory equipment to consumer-grade sensors. That’s a major step toward real-world deployment, but there are still many important research challenges to solve before this becomes a robust commercial technology.
For example, current systems still struggle in extremely low-light and high-noise conditions, which are very common in real environments. Handling completely unknown motion — where both the camera and hidden objects are moving unpredictably — is another major challenge. There’s also a lot of exciting work ahead in combining these measurements with RGB cameras and other sensors, improving reconstruction quality and reliability, and integrating these systems into real-time robotic, automotive, and wearable platforms.
More broadly, we’re still learning how much information can be extracted from indirect light in everyday environments. This field is at a very exciting stage, with many open problems spanning physics, sensing hardware, computer vision, machine learning, and robotics. We are always looking for collaborators who are excited about working on these topics.
We have a lot of exciting work in progress — and we’re always looking for motivated collaborators to help expand it into new domains.
Working on computational imaging, computer vision, robotics, or sensing? We’d love to hear from PhD students, postdocs, and faculty interested in joint projects.
Building products in autonomous vehicles, robotics, AR/VR, or smart devices? We’re open to partnerships that translate these capabilities into real-world systems.
Have a domain where around-the-corner sensing could make a difference — healthcare, accessibility, search and rescue? Let’s explore it together.
Interested? We’d love to hear from you.
Get in Touch

Acknowledgments
Siddharth Somasundaram and Aaron Young gratefully acknowledge funding support from the National Science Foundation (NSF) Graduate Research Fellowship Program (grant no. 2141064). Adithya Pediredla is supported by the NSF (grant no. 2326904).