The technology uses camera images to generate a point cloud, a set of three-dimensional coordinates representing what surrounds a vehicle.
While lidar is commonplace on self-driving cars and trucks, its size and cost are hurdles to its use in other vehicles. Volvo plans to build vehicles equipped with lidar by Luminar this year. General Motors plans to install lidar sensors from Cepton on up to nine models next year.
With VIDAS, “the car can better understand the world around it,” Jason Devitt, CEO of Compound Eye, told Automotive News. “A big change occurs when the car can capture the world in 3D.”
Compound Eye’s technology requires two cameras to identify objects and determine the distances to them. It relies on two cues from the cameras:
1. Parallax: nearby objects appear to shift more than distant ones, as when looking out the window of a moving car.
2. Semantic: how humans judge an object's distance from its apparent size relative to other objects in view.
Devitt said the system fuses real-time parallax and semantic cues captured by a car’s cameras. This information is combined using computer vision and machine learning algorithms to determine distances to objects without the aid of radar and lidar sensors.
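The parallax cue has a simple geometric core: in a calibrated stereo pair, depth is inversely proportional to disparity, the pixel shift of a feature between the two cameras. A minimal sketch of that triangulation in Python, using hypothetical focal-length and baseline values rather than anything Compound Eye has disclosed:

```python
# Illustrative sketch only: classic stereo triangulation behind the
# parallax cue. The focal length and camera baseline below are
# hypothetical example values, not Compound Eye's actual parameters.

def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.3):
    """Depth (m) = focal length (px) * camera baseline (m) / disparity (px)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 30 px between the two cameras is ~10 m away;
# one that shifts only 3 px is ~100 m away -- the parallax cue.
print(depth_from_disparity(30.0))  # 10.0
print(depth_from_disparity(3.0))   # 100.0
```

In practice a production system estimates disparity densely across the whole image and, as Devitt describes, fuses the result with semantic cues rather than relying on geometry alone.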
VIDAS feeds the data to a car’s driver-assistance system, and it can also be used by autonomous vehicles. The system is compatible with most existing automotive cameras, works with current automotive computing platforms and software, and doesn’t require proprietary hardware.
The development kit offered to automakers and suppliers includes two reference cameras, a compute module, a GPS antenna, and connecting and power cables.