Sensor Modalities: LiDAR, Radar, and Cameras
Autonomous driving systems rely on three primary sensor types, each with distinct strengths and limitations.

LiDAR (Light Detection and Ranging) emits laser pulses (typically 905 nm or 1550 nm wavelength) and measures return times to build precise 3D point clouds at ranges up to 300 meters with centimeter-level accuracy. Modern solid-state LiDAR units (Luminar Iris, Hesai AT128) have no moving parts, cost under $500 in volume, and can resolve pedestrians at 200 meters. Limitation: heavy rain, snow, and fog scatter laser pulses, degrading detection range by 30-70%.

Radar (typically 76-81 GHz mmWave) excels in all weather conditions (rain, snow, fog, and dust have minimal impact) and can measure vehicle velocity via Doppler shift with ±0.1 mph precision at ranges exceeding 250 meters. Limitation: low angular resolution makes it difficult to distinguish closely spaced objects, or pedestrians from background clutter.

Cameras provide high-resolution 2D texture data essential for reading road signs, lane markings, and traffic lights, tasks that LiDAR and radar cannot perform. Forward cameras typically have a 120° field of view and 8-megapixel resolution. Limitation: performance degrades in glare, darkness, rain on the lens, and occluded scenes.

Tesla's vision-only approach uses 8 cameras with ranges up to 250 meters; Waymo, Cruise, and most OEM AV programs use all three modalities.
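The range and velocity figures above follow from two simple relations: LiDAR range from pulse round-trip time, and radar radial velocity from Doppler shift. A minimal numerical sketch (function names and sample values are illustrative, not from any vendor API; the 77 GHz carrier is one point in the 76-81 GHz automotive band):

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s: float) -> float:
    """LiDAR time-of-flight: the pulse travels out and back, so range = c * t / 2."""
    return C * round_trip_s / 2.0

def radar_radial_velocity_ms(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Radar Doppler: f_d = 2 * v * f_c / c, so v = f_d * c / (2 * f_c)."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A target at ~300 m returns a LiDAR echo after about 2 microseconds:
print(f"{lidar_range_m(2e-6):.1f} m")               # ~299.8 m

# A 5.13 kHz Doppler shift at 77 GHz corresponds to ~10 m/s of closing speed:
print(f"{radar_radial_velocity_ms(5.13e3):.2f} m/s")
```

The factor of 2 in both formulas comes from the signal traversing the sensor-to-target path twice; the tiny Doppler shifts involved (kilohertz against a 77 GHz carrier) are why radar velocity precision can reach the ±0.1 mph level cited above.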