EveryPoint Gets Hands-On with Apple’s New Lidar Sensor

EveryPoint
5 min read · Oct 23, 2020


Dr. Heinly’s scan test in an office break room

Apple just announced its much-awaited 2020 lineup of iPhones, and to no one's surprise, Lidar has been added to the Pro models. While Apple showcased the new sensor's ability to improve focus speed and image capture across the three cameras, many laser scanning professionals were left wondering whether this sensor is anything more than a toy.

Assuming the new Lidar sensor in the iPhone is the same as the one packed in the 2020 iPad Pro, our team has spent a couple of months exploring everything this sensor can do. We sat down with our director of computer vision, Jared Heinly, Ph.D., to discuss his observations on Apple's Lidar sensor and get his insight into its capabilities and limitations.

Follow Dr. Heinly's observations on the iPad and iPhone live on his Twitter feed: https://twitter.com/JaredHeinly

First Impressions

EveryPoint: Why do you think Apple decided to integrate a Lidar scanner in the iPad Pro and iPhone 12 Pro?

Dr. Heinly: Apple is sending a strong signal that they are serious about AR/VR. Although most of the marketing around the Lidar has focused on camera improvements, the Lidar scanner is tailored to quickly compute spatial awareness of the local environment.

The combination of Lidar and ARKit will enable the devices to place virtual objects in the real world more precisely. Also, the mix of camera and Lidar information allows ARKit to determine the color and intensity of the lighting in a scene. This greatly improves the lighting and reflections on the surfaces of virtual objects, giving a higher level of realism.
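For developers, this pairing shows up in ARKit's public APIs. Below is a minimal Swift sketch, based on our understanding of the APIs available on Lidar-equipped devices, showing how an app opts into Lidar-backed scene reconstruction and the light estimation Dr. Heinly mentions.

```swift
import ARKit

// Minimal sketch: configure an ARKit session to use the Lidar-backed scene
// reconstruction (only offered on devices with the Lidar scanner) together
// with light estimation and automatic environment texturing.
let configuration = ARWorldTrackingConfiguration()

if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    // The fused environment mesh is what ARKit uses for occlusion and
    // precise placement of virtual objects.
    configuration.sceneReconstruction = .mesh
}

// Estimate the color and intensity of the scene's lighting from the camera
// feed so virtual objects can be shaded to match the real scene.
configuration.isLightEstimationEnabled = true
configuration.environmentTexturing = .automatic

let session = ARSession()
session.run(configuration)

// Inside an ARSessionDelegate's session(_:didUpdate:) you can then read:
// if let estimate = frame.lightEstimate {
//     print("ambient intensity: \(estimate.ambientIntensity), " +
//           "color temperature: \(estimate.ambientColorTemperature) K")
// }
```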

Lidar Data Quality

EveryPoint: What was your initial reaction to the quality of the data coming from Apple's Lidar sensor?

Dr. Heinly: At first, the Lidar sensor felt like a toy. It is obvious that Apple is not trying to make a sensor for commercial applications. I was surprised that the sensor performed equally well outdoors and indoors. The most notable observation was that the data is heavily processed. The sensor also got confused by finely detailed objects.

Dr. Heinly’s observations with the Lidar in an outdoor environment

EveryPoint: What can you tell us about how Apple heavily processes the data?

Dr. Heinly: One of the advantages Apple has is that they own the entire hardware and software stack. This means they are able to use the Lidar, camera, and inertial sensors to decide how to best process the Lidar data for photography and AR applications. A good example of this in action is the temporal smoothing of surfaces as the scanner passes over objects multiple times. The more times the sensor scanned a wall, the smoother the point cloud became. You can clearly see this in the kitchenette scan I took at American Underground. As I passed over areas more than once, not only did the point cloud become denser in those areas, but the noise was removed and the surfaces were smoothed.

Dr. Heinly scanning a kitchenette at American Underground
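To make the idea concrete, here is a toy Swift sketch of temporal smoothing in the spirit of what Dr. Heinly describes. It is our own illustration of the general technique, not Apple's actual processing: repeated observations of the same surface point are blended into a running estimate, so per-frame noise shrinks as an area is revisited.

```swift
import simd

// Toy illustration of temporal smoothing (not Apple's implementation):
// each new depth observation of a surface point is blended into a running
// estimate, so the estimate gets less noisy the more often the point is seen.
struct SmoothedPoint {
    var position: SIMD3<Float>
    var observationCount = 1

    mutating func integrate(_ observation: SIMD3<Float>, weight: Float = 0.2) {
        // Move the stored point a fraction of the way toward the new
        // measurement (an exponential moving average).
        position += (observation - position) * weight
        observationCount += 1
    }
}

// Hypothetical wall point observed on three separate passes of the sensor.
var wallPoint = SmoothedPoint(position: SIMD3<Float>(1.02, 0.48, 2.31))
for observation in [SIMD3<Float>(0.98, 0.52, 2.29), SIMD3<Float>(1.01, 0.50, 2.30)] {
    wallPoint.integrate(observation)
}
print(wallPoint.position, wallPoint.observationCount)
```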

Another great example of this is when I scanned a small scene with a chair, a blanket, and a wall. The sensor did an excellent job generating a flat wall surface. As I noted in my Twitter post, I suspect this is Apple's processing in action.

Dr. Heinly’s scan of a small scene

EveryPoint: You mentioned the scanner has difficulty capturing small details. Can you provide examples of this in action?

Dr. Heinly: Yes, one of the first scans I performed with the iPad Pro included three small bushes. I was expecting more detail on the edges of the bushes; however, the sensor only captured their general shape.

I further tested this by scanning small structures on the ground. Only the general shape of the ground was captured in the final result. In situations like this, EveryPoint's photo-based reconstruction is a better solution.

Dr. Heinly testing the Lidar sensor on small details

I took a closer look at the real-time output of the sensor data and found that Apple's software filters out objects with low depth confidence. I tested this on a tree with small branch structures.

Dr. Heinly’s Lidar testing on a tree. Small branches were omitted due to low depth map confidence
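Developers can see the same confidence values. Assuming the session is run with ARKit's .sceneDepth frame semantic, a sketch like the one below reads each frame's depth map alongside its confidence map and keeps only the high-confidence pixels, which is roughly where thin structures such as small branches get dropped.

```swift
import ARKit

// Sketch: collect only the high-confidence depth samples from an ARFrame.
// Assumes the session was configured with:
//   configuration.frameSemantics.insert(.sceneDepth)
func highConfidenceDepths(from frame: ARFrame) -> [(x: Int, y: Int, depth: Float)] {
    guard let sceneDepth = frame.sceneDepth,
          let confidenceMap = sceneDepth.confidenceMap else { return [] }

    let depthMap = sceneDepth.depthMap
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer {
        CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)
        CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly)
    }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let depthBase = CVPixelBufferGetBaseAddress(depthMap)!
    let confidenceBase = CVPixelBufferGetBaseAddress(confidenceMap)!
    let depthStride = CVPixelBufferGetBytesPerRow(depthMap)
    let confidenceStride = CVPixelBufferGetBytesPerRow(confidenceMap)

    var samples: [(x: Int, y: Int, depth: Float)] = []
    for y in 0..<height {
        // Each depth row holds 32-bit floats; each confidence row holds one
        // byte per pixel containing an ARConfidenceLevel raw value.
        let depthRow = (depthBase + y * depthStride).assumingMemoryBound(to: Float32.self)
        let confidenceRow = (confidenceBase + y * confidenceStride).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width where confidenceRow[x] >= UInt8(ARConfidenceLevel.high.rawValue) {
            samples.append((x: x, y: y, depth: depthRow[x]))
        }
    }
    return samples
}
```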

EveryPoint: Do you have any other notable observations from the Lidar data you would like to share?

Dr. Heinly: Yes, I performed a few tests over long scan distances to see how well the Lidar handled drift. Overall, the sensor performed better than expected. I did several tests where I walked around city and neighborhood blocks and found small amounts of drift that we can easily correct with EveryPoint's reconstruction engine.

Drift in Lidar points over long distance

Data Output

EveryPoint: What output data were you able to obtain from the Lidar sensor?

Dr. Heinly: As I mentioned earlier, Apple heavily processes the Lidar depth data in real time as you scan. Unfortunately, Apple does not provide access to the raw, unprocessed Lidar output. Nevertheless, there are still lots of exciting ways that this data can be leveraged and refined using our EveryPoint engine, so it's great that at least some depth data has been made available.

What I found more interesting is that ARKit produces meshes of the environment as you scan. This is how Apple is able to make sense of the world spatially when placing virtual objects. The mesh that is produced does contain artifacts and lacks fine detail. However, I envision the meshes could be useful for many applications that do not require detailed surface results.

3D model mesh from Dr. Heinly's car scan
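For reference, those meshes arrive in the ARKit session as ARMeshAnchor objects (assuming scene reconstruction is enabled as in the earlier sketch). A short example of inspecting the mesh chunks ARKit hands back:

```swift
import ARKit

// Sketch: summarize the reconstructed environment mesh in the current frame.
// Each ARMeshAnchor holds one chunk of the world mesh; the vertex and face
// counts give a feel for the density of the data ARKit produces.
func summarizeMeshes(in frame: ARFrame) {
    for anchor in frame.anchors.compactMap({ $0 as? ARMeshAnchor }) {
        let geometry = anchor.geometry
        print("mesh chunk: \(geometry.vertices.count) vertices, \(geometry.faces.count) faces")
    }
}
```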

What I am most excited about is that Apple is giving access to the live Lidar depth maps. This data, when fused with our EveryPoint 3D Reconstruction Engine, will make it quicker and easier to process 3D data, and it provides a great complement to our existing image-based algorithms. We will be able to build many more real-time 3D modeling applications on the iPhone and iPad using this information, and generate results that are more accurate, complete, and robust.

3D model mesh from Dr. Heinly's building scan
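As a simple illustration of that kind of fusion (our own sketch, not EveryPoint's engine), a single depth-map pixel can be back-projected into a 3D point using the camera intrinsics ARKit supplies with every frame; a full pipeline would do this for every high-confidence pixel and then merge the resulting points with image-based reconstruction.

```swift
import ARKit
import simd

// Sketch: back-project one depth-map pixel into a 3D point in front of the
// camera using a simple pinhole model. The depth map is lower resolution than
// the camera image, so the intrinsics are scaled to match. Mapping into
// ARKit's world frame (y-up, -z forward, via camera.transform) is omitted.
func cameraSpacePoint(pixelX: Float, pixelY: Float, depth: Float,
                      camera: ARCamera, depthMapSize: CGSize) -> SIMD3<Float> {
    let K = camera.intrinsics              // 3x3 intrinsics for the full-resolution image
    let imageSize = camera.imageResolution

    // Scale focal lengths and principal point to the depth map's resolution.
    let sx = Float(depthMapSize.width) / Float(imageSize.width)
    let sy = Float(depthMapSize.height) / Float(imageSize.height)
    let fx = K.columns.0.x * sx
    let fy = K.columns.1.y * sy
    let cx = K.columns.2.x * sx
    let cy = K.columns.2.y * sy

    // Standard pinhole back-projection: (u, v, depth) -> (X, Y, Z).
    let x = (pixelX - cx) * depth / fx
    let y = (pixelY - cy) * depth / fy
    return SIMD3<Float>(x, y, depth)
}
```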

Real-World Applications

EveryPoint: What real-world applications do you envision will come from having a Lidar scanner in a device that fits in your pocket?

Dr. Heinly: That is a hard question to answer. Apple obviously added the scanner to their devices to improve photo quality and build more immersive AR/VR experiences.

At this point, my team and I at EveryPoint will continue to test the sensor’s abilities on both the iPad and iPhone. Our goal is to produce the highest quality 3D reconstructions in a relatively short amount of time to support solving real-world problems.

EveryPoint: Any guesses at what real world problems EveryPoint may solve next?

Dr. Heinly: Back in 2012, we had deep photogrammetry expertise and set out to find problems to solve in just about every industry you could think of. In reality, it was Knife River that reached out to us with a very specific problem they thought we might be able to solve: measuring stockpile volumes with an iPhone.

Currently, we are introducing the world to our 3D reconstruction capabilities, and we are listening to industries that bring us problems we may be able to solve. We know the right industry with a tough problem will present itself, and EveryPoint will be ready to tackle what's next!

Learn More

Follow Dr. Heinly’s Twitter feed to stay on top of the latest tests and developments as he gets his hands on the new iPhone 12 Pro: https://twitter.com/JaredHeinly

Discover more about EveryPoint at http://everypoint.io

You can learn more about EveryPoint’s first commercial solution, Stockpile Reports, at https://www.stockpilereports.com


EveryPoint

Get world-class 3D reconstruction for your product. EveryPoint extracts intelligence from imagery.