LiDAR, Radar, or Camera? Demystifying the ADAS / AD Technology Mix

A blog post by
Preeti Prasher, ASIC Test Engineer, LeddarTech®


The growth of advanced driver assistance systems (ADAS) and autonomous driving (AD) solutions has been the catalyst for the adoption of several types of sensors in vehicles. Radars, cameras, and ultrasonic sensors have become the industry standard, answering the call for advancements in road safety. And now, Light Detection and Ranging (LiDAR) technology has been added to the list and is actively being deployed in vehicles on the production line.



Why is LiDAR so significant?

One of the main advantages of LiDAR is that the light source is an integrated part of the solution. LiDAR sensors use an eye-safe laser to emit light pulses that illuminate the desired area. Unlike cameras, they function independently of the ambient lighting. LiDAR can achieve excellent results day and night without any loss of performance due to disturbances such as shadows, sunlight, or headlight glare.

LiDAR systems use the Time of Flight (ToF) principle: light is emitted at time t0, hits an object, is reflected back, and is then measured by an array of sensors at time t1. Since the speed of light is known, the measured interval – the ToF – can easily be converted into a precise distance: half the round-trip time multiplied by the speed of light. By considering how much light is returned, the size and shape of the object can also be determined.
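The distance calculation described above can be sketched in a few lines. This is an illustrative example only; the function and variable names are hypothetical, not LeddarTech's API.

```python
# Convert a LiDAR time-of-flight measurement into a distance.
# Illustrative sketch of the ToF principle, not a real sensor API.

C = 299_792_458.0  # speed of light in m/s

def tof_to_distance(t0: float, t1: float) -> float:
    """The round-trip interval (t1 - t0) covers twice the
    sensor-to-object distance, so divide by two."""
    return (t1 - t0) * C / 2.0

# A return measured roughly 667 ns after emission corresponds
# to an object about 100 m away:
d = tof_to_distance(0.0, 667e-9)
```

Note how small the intervals are: at 100 m, the entire round trip takes well under a microsecond, which is why LiDAR timing electronics must resolve nanoseconds.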

Here’s an example of how that works: if a light pulse hits a flat, reflective surface head-on, it is almost fully reflected. If the light pulse hits the same surface at an angle, the light that strikes the nearest side is reflected back with higher intensity than the light that strikes the side furthest away. Some emitted light pulses are not reflected back at all, or their reflections are not measured because the object lies outside the sensor’s range.
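The intensity falloff with angle can be modeled very roughly with a Lambertian (cosine) reflection term. This is a toy model chosen for illustration, not LeddarTech's actual signal model.

```python
import math

# Toy model: reflected intensity falls off with the cosine of the
# incidence angle. Head-on (0 degrees) returns the most light.
# An illustrative assumption (Lambertian surface), not a sensor spec.

def return_intensity(i0: float, angle_deg: float) -> float:
    """Fraction of emitted intensity i0 reflected back toward the
    sensor for a given incidence angle."""
    return i0 * math.cos(math.radians(angle_deg))

head_on = return_intensity(1.0, 0.0)   # full return
oblique = return_intensity(1.0, 60.0)  # half the return
```

Real surfaces also differ in reflectivity, which is why the processing software must account for material properties as well as geometry.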

Given that a LiDAR system, such as LeddarTech’s LeddarEngine™, can process more than 1 billion samples per second, it is easy to see how the system can also determine an object’s shape, size, and movements. This means that the vehicle always has the very latest information about what is happening in the surrounding environment.

The fact that different objects have different reflection characteristics (for example, they reflect or absorb different quantities of light) does not pose any problem either: within the applicable measurement range, this is taken into account by the LeddarEngine’s software and algorithms.

The vast number of measurement points in LiDAR technology results in a very accurate 3D reconstruction of a scene. Every light pulse emitted provides specific information about the relative distance and size of the detected object and allows the system to create a precise three-dimensional mapping of the environment.
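Each measurement combines a range with the beam's pointing direction, and converting that pair into a 3D point is simple trigonometry. The sketch below is illustrative; angle conventions vary between sensors, and none of these names come from LeddarTech.

```python
import math

# Convert one LiDAR return (range + beam angles) into a 3D point.
# Illustrative sketch; real sensors define their own axis and
# angle conventions.

def to_xyz(r: float, azimuth_deg: float, elevation_deg: float):
    """Spherical-to-Cartesian conversion: x forward, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A return at 10 m, straight ahead and level, maps to (10, 0, 0):
point = to_xyz(10.0, 0.0, 0.0)
```

Repeating this conversion for every pulse in a scan yields the point cloud from which the 3D map of the environment is built.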

In principle, this can also be achieved using cameras. However, LiDAR technology has a clear advantage: analyzing its data is much simpler and faster. With cameras, post-processing techniques have to be applied to the images to determine the size and relative distance of an object. By contrast, processing the output of a LiDAR sensor is very straightforward, particularly when using a LeddarEngine consisting of LeddarCore® ICs in combination with our patented software, LeddarSP™.

Due to its scalable and versatile design, the LeddarEngine can support various LiDAR building blocks and be used in a wide range of LiDAR sensor applications.

This has a direct impact on safety: LiDAR can identify an object with near-perfect accuracy. This is detection rather than classification – if there is an object in front of the sensor, LiDAR can determine how far away it is and how big it is. This information can then directly prompt certain actions, such as “slow down” or “stop.” Camera images, on the other hand, require far more complex computation: the system must first identify which data within an image is actually relevant. This means that with LiDAR, actions can be triggered in fewer CPU cycles.

All resolutions are possible

If more detailed 3D data is required, the resolution must be increased. High resolutions are especially important for autonomous driving, whereas lower resolutions suffice for smart toll systems, for example. In the latter case, LiDAR first detects the size of the moving vehicle and then activates a camera that records the license plate for the toll charge. Both cases demand a very high level of accuracy, but autonomous driving requires even finer detail. For LeddarTech, this is no problem because our design is highly scalable: if it is necessary to capture and analyze more light pulses, several LeddarEngine SoCs can easily be used in parallel.

Data acquisition takes place in parallel, so if more information is required, this causes only a slight increase in computation time. We therefore obtain a greater quantity of useful data in almost the same time. In other words, market demand is the only factor that determines the limits. Our patented software provides a unique advantage because it enables us to reconstruct detailed 3D data from the measurement results to determine which objects are present in the environment and where they are located.

LiDAR, cameras, and radar as complementary technologies

Cameras require significantly more computing power. Given that the amount of available computing power is ever-increasing, this could lead some to ask why LiDAR should be integrated into future solutions. The answer is that high-performance CPUs consume a lot of energy. This is why many OEMs choose to combine cameras, radar, and LiDAR sensors, so that computing power is not required to analyze all of the camera data.

On the other hand, we shouldn’t expect LiDAR to replace cameras altogether, because it has two disadvantages: LiDAR cannot detect colors or interpret text. Consequently, it is extremely difficult, or even impossible, for LiDAR to identify traffic lights or road signs. Because camera-based sensors can recognize colors and read road signs using image processing techniques, such systems can trigger reactions to the red brake lights of other vehicles or to stop signs, for instance. A high-resolution LiDAR sensor with shape-recognition software could identify stop signs by their octagonal shape and trigger appropriate action, but in this respect, cameras have a distinct advantage.

When it comes to the use of radar in autonomous vehicles, LiDAR and cameras both benefit from this longer-established technology: the operation of a camera sensor can be impaired by snow, rain, or fog, and such weather conditions also change the refractive index of the transmission medium and reduce the range of a LiDAR sensor. Resistance to weather conditions is one reason why radar is incorporated in the design of most automotive sensor suites.

Incorporating the technology mix

Although no one technology fulfills the entire spectrum of market requirements, LeddarTech’s LeddarEngine offers unique advantages which tip the balance in favor of LiDAR-based solutions.

Furthermore, research into the strengths of the various types of sensors has shown that combining sensors is the best way of advancing ADAS and AD solutions in both commercial and consumer applications worldwide.


Have questions about how we can make mobility safer and more efficient?

Ask our experts