AV Technology: Can’t We Have It All?

What the new iPhone 12 means for the future of Autonomous Driving, what Elon Musk has to say, and why it matters.

Motion Insurance
8 min read · Oct 21, 2020

By Lanaya Nelson

[Image: Rear view of the 2020 iPhone 12 model showing its camera array and LiDAR scanner, used for 3D scanning and augmented reality.]

What does Elon Musk have to do with the new iPhone 12 Pro/Max, other than Caviar’s deluxe limited ‘Musk Be On Mars’ design? (No, really, take a look… it’s pretty rad.) Well, Elon Musk made a few emphatic statements last year about the use of LiDAR (Light Detection And Ranging) in CAV (Connected and Autonomous Vehicle) technology. Historically, Musk’s predictions and projections have not been far off, and he never offers a grain of salt when making pronouncements on technology. Will this time be any different?

“LiDAR is a fool’s errand. Anyone who relies on LiDAR is doomed. Expensive sensors that are unnecessary. They are all going to dump LiDAR, that’s my prediction, mark my words…”

Last week, Apple announced its latest iPhone models, the iPhone 12 Pro and iPhone 12 Pro Max with 5G. The press release boasts:

Best iPhone ever features the powerful A14 Bionic, all-new design with Ceramic Shield, pro camera system, LiDAR Scanner, and the biggest Super Retina XDR display ever on an iPhone.

This may result in an eye roll if you’re not one to need the latest and greatest, but there may be a real diamond hidden in this rough for the automotive industry.

To add, Apple’s iPad Pro, launched earlier this year, was the first Apple device to include LiDAR.

As this is a very exciting time for those, the many, who look forward to the latest and greatest in personal devices, it is also an exciting time for those, the few, who have integrated LiDAR technology, and the information gathered from it, into their machine learning algorithms for CAVs. Up to this point, the near future of LiDAR in CAV advancement has been a bleak one, mostly due to its sheer size, both physically and monetarily. Up until 2017, the industry-norm price for a single LiDAR unit was $75k. Waymo was able to drop this price to about $7,500 with its proprietary commercialized LiDAR sensor. Companies like Waymo, a subsidiary of Alphabet Inc., rely on LiDAR as their primary autonomous driving technology.

BUT, $7,500 is still $7,500. The estimated cost of the LiDAR in the iPhone 12 is between $50 and $75, which puts it in the price range of a quality radar (Radio Detection And Ranging) system.

While Waymo has made leaps and bounds to push the autonomous driving envelope this year with the announcements of its Volvo partnership and the release of the first-ever commercial robotaxi service, it remains far behind in personal auto design and regulation. I mean… it’s not exactly the sexiest thing on the road, though it does maybe resemble a Tahoe with a Thule, if you’re into that kind of thing.

Point being: the current state of LiDAR is BIG. And expensive. $7,500 is no small fee for manufacturing at market cost, especially when radar has become so relatively inexpensive while still delivering solid incoming information for the price.

So, who uses radar? Elon Musk and Tesla, of course.

Tesla’s autonomous technology, Autopilot, is made up of wide, main, and narrow forward cameras, forward-looking side cameras, rearward-looking side cameras, a rearview camera, radar, ultrasonic sensors, and a powerful in-vehicle computer. That computer processes, analyzes, and learns from real-time observations, creating a system that emulates the human senses, reacting based on previous experience and intuition.
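To make the idea of fusing multiple sensor inputs concrete, here is a minimal, purely illustrative sketch (not Tesla’s actual software, and the numbers are invented) of one classic approach: a confidence-weighted average of distance estimates coming from different sensors.

```python
def fuse_estimates(readings):
    """Confidence-weighted average of per-sensor distance estimates.

    readings: list of (distance_m, confidence) pairs, confidence in (0, 1].
    Higher-confidence sensors pull the fused estimate toward their reading.
    """
    total_weight = sum(conf for _, conf in readings)
    if total_weight == 0:
        raise ValueError("no usable sensor readings")
    return sum(dist * conf for dist, conf in readings) / total_weight


# Hypothetical readings for one object: camera, radar, ultrasonic.
readings = [(25.0, 0.6), (24.4, 0.9), (26.0, 0.3)]
fused = fuse_estimates(readings)  # lands nearest the high-confidence radar value
```

Real perception stacks use far more sophisticated filters (Kalman filters, learned fusion networks), but the core trade-off is the same: each sensor contributes in proportion to how much the system trusts it.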

The problem? The algorithms don’t yet have enough experience to make this technology reliable and safe. And there are limitations to radar that LiDAR aims to address.

What’s the difference?

A high-level, and admittedly less-than-accurate, analogy for the difference between the two technologies with regard to autonomous driving is the difference in vision between predator and prey.

A predator has what is called ‘binocular vision’: using both eyes, it can hyper-focus on objects with depth perception and highly acute detail in its line of vision. Binocular vision, in biology, is most concerned with locating prey. Prey animals, by contrast, have what is called ‘monocular vision’, which serves to detect danger coming from any direction across a wide visual spread, though depth and detail are lacking.

To tie this together: there is currently a great race to get it right, that is, autonomous driving that is reliable. Much like binocular vision, LiDAR can sense depth and detail that cameras and radar cannot. However, cameras and radar, our monocular vision, bring large volumes of data and have a wider range of vision. The question at hand, then:

Is it better to build models with sharper information coming in and less need for advanced algorithms? Or is it better to build models with more inputs, relying on the complexity of the algorithms to sharpen the outputs?

Time, depth, and precision will tell.

A better picture of the environmental reality or a better interpretation in which to react to the environment?

So, What About the iPhone 12 and the Door It Opened?

Among Elon Musk’s reasons for calling LiDAR an elementary technology for advancing the CAV market are its size and cost. The new iPhone 12’s use cases for LiDAR are improved photography and 3D AR (augmented reality) experiences. What this could mean for the CAV market is that scalability is coming: smaller and cheaper, with a good likelihood that the market will create grounds for competition.

To be frank, the LiDAR in the iPhone 12 is far from the sophistication and capability of, say, a Velodyne LiDAR system (the $75k-a-pop hardware that benchmarked the market). The LiDAR in the phone has a fundamentally different construction and purpose. “It has no moving parts, but rather a bunch of lasers on wafers, and the sensors aren’t directly aligned to the lasers but use an array with single-photon sensitivity… which means that they almost work, functionally speaking, more like radar than LiDAR,” explains Daniel Weisman, Chief Innovation Officer at Motion Auto.

What does that mean? More light from Daniel:

“The goal of the LiDAR in iPhones and iPads is to help the cameras get to focus more quickly and accurately, and to help Augmented Reality experiences by placing objects at the right depths in the field of vision. So, while the LiDAR is being used to determine depth, it isn’t trying to create the millimeter-level accuracy that people’s lives need to depend upon.

Given the construction, we can also conclude that these LiDAR systems will have less range than traditional automotive sensors, but there’s no reason why that range can’t be cost effectively increased over future generations.

Also, given the construction and the lack of alignment of individual lasers to individual sensors, we can conclude that this kind of LiDAR system will have some of the same limitations as Radar. For example, interference from multiple vehicles with the same tech at an intersection; this can be managed by frequency modulation in radar or timed cycling of photons in the new LiDAR systems, but it doesn’t seem like any manufacturer is currently doing that… yet.

However, relying on LiDAR instead of Radar also introduces new problems like grime/dust obfuscating the signal (which is less problematic for radio frequencies, as Arno Penzias and Robert Woodrow Wilson discovered. It turned out that they weren’t listening to pigeon poop after all, but rather background radiation from the Big Bang, which resulted in their Nobel Prize).

One of the great things about this new LiDAR tech is that it can be included in the existing camera units cheaply and easily, and won’t require the same calibration and installation efforts that Radar might typically require. An example of Radar’s downfall: Jason, my cofounder, once arrived at his car in a parking lot to discover it had been nicked. It took $2k to fix a very minor fender bender because the radar needed to be replaced, and the installation and calibration of the radar was a specialty big-ticket item despite the radar itself likely only costing $75.

That said, the most interesting part about this new LiDAR tech is that it could be a great and cost-effective substitute for radar, not only as costs will further decrease, but because it can be part of the preexisting camera installation procedure, which will be less sensitive to calibration needs and part of a preexisting set of hardware, improving manufacturing, servicing, and replacement.

Conclusion is that the tech is interesting, but it’s not quite Velodyne caliber LiDAR and shouldn’t be treated as equivalent.”

…it sure is exciting to think about future possibilities.
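For the curious, the reason the phone’s sensor can behave “more like radar than LiDAR” is that both are, at bottom, time-of-flight systems: they measure how long a signal traveling at the speed of light takes to bounce off a target and return. A minimal sketch of that shared ranging principle:

```python
# Both radar and LiDAR estimate range from the round-trip time of a
# signal traveling at the speed of light: distance = (c * t) / 2.
# The division by 2 accounts for the out-and-back trip.
C = 299_792_458.0  # speed of light in a vacuum, m/s


def range_from_round_trip(seconds):
    """Distance to a target given the signal's round-trip travel time."""
    return C * seconds / 2


# A return after ~66.7 nanoseconds means a target roughly 10 m away.
d = range_from_round_trip(66.7e-9)
```

What separates the technologies is not this equation but the wavelength used (light vs. radio), which drives the differences in resolution, range, and sensitivity to grime described above.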

Why Can’t We Have It All?

Does it make the most sense to combine forces here? To have the best picture of the environment’s reality and also the best interpretation of it to ensure safety on the roads? Will the salt be brought to Musk’s table soon enough? Will this bring us closer to 100% reliability of autonomous driving?

Or Is There Another Way Altogether?

When asked about this conundrum, Michael DeKort, CEO of Dactle shared:

“With regard to sensors and the drive for camera-only use without LiDAR or radar, there is no reason engineers can’t develop camera-only sensor technology while avoiding the major safety issues of forgoing LiDAR and/or good scanning-radar technology. All that is required is to duplicate the camera data to two systems: one to ensure competent perception by fusing all of the sensor types, and the other to develop camera-only technology. What may seem like a greater expense is actually significant risk mitigation, through the avoidance of litigation [arising from] flawed-perception-system-induced accidents.”
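DeKort’s dual-track idea can be sketched in a few lines. The function names below are hypothetical stand-ins, not any vendor’s API: the same camera frame is handed to both a fused, safety-critical perception stack and an experimental camera-only stack that matures in the background.

```python
def fused_perception(camera, radar, lidar):
    # Stand-in for a production stack that fuses all three sensor types;
    # this is the output the vehicle actually acts on.
    return {"source": "fused", "inputs": 3}


def camera_only_perception(camera):
    # Stand-in for the experimental camera-only stack being developed
    # and evaluated in parallel, without ever controlling the car.
    return {"source": "camera_only", "inputs": 1}


def dispatch(camera, radar, lidar):
    """Duplicate the camera stream to two independent perception systems.

    Returns (safety_critical_output, development_output).
    """
    return (
        fused_perception(camera, radar, lidar),
        camera_only_perception(camera),
    )


fused, camera_only = dispatch("camera_frame", "radar_frame", "lidar_frame")
```

The point of the pattern is that the camera-only stack gets real-world mileage for free, while the fused stack keeps perception competent in the meantime.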

The answers are just around the corner whether you’re paying attention or not.


Motion Insurance

Auto insurance dedicated to enhancing your lifestyle, protecting your privacy and, most importantly, to improving the safety of you and your loved ones.