Infographic · 7 min read

Anatomy of a self-driving car

Strip the body off and what's underneath? A small data center on wheels — with backups for the backups.

The big idea

Four systems on top of a normal car

A self-driving car is a regular car with four extra systems bolted on. Underneath is a normal vehicle — engine, suspension, body. On top sits an entirely separate stack:

Senses to perceive the world, a brain to decide what to do, a memory of every road it's allowed to drive, and hands and feet — software-controlled steering and brakes — to actually do it. And every safety-critical part of all four is duplicated.

1. Senses: cameras, lidar, radar, ultrasonic
2. Brain: compute that turns data into decisions
3. Memory: HD maps of every road it drives
4. Hands & feet: drive-by-wire steering and brakes

What it sees

Where the sensors live

Most self-driving cars place sensors in roughly the same spots, for the same reasons. The roof gets the long view (lidar spinning a 360° scan). The front bumper gets what's directly ahead (cameras and forward radar). The mirrors and corners watch the blind spots. Ultrasonic sensors line the perimeter for the last few meters.

Want the deep dive on what each sensor type actually does? Read the sensors infographic.

[Top-down diagram: roof lidar, front cameras, front radar, side cameras, rear radar + camera, ultrasonic]

What it sees, side-on

The same sensors, from the side

[Side-view diagram: roof lidar (360° scan), radar, camera, backup compute, main compute]

How it thinks

A four-stage pipeline that runs ten times per second

Behind every steering input is the same four-step loop. Each stage hands its answer to the next. If any stage fails, the system has to know — fast.

1. Perception: "There’s a person, a bus, a stop sign."
2. Prediction: "Where will those things be in 1, 2, 5 seconds?"
3. Planning: "Given all that, what’s the safest path?"
4. Control: "Turn 3°. Ease off accelerator. Now."

The whole loop runs about 10 times per second.
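The four stages can be sketched as a fixed-rate loop with a 100 ms budget per cycle. This is a toy Python sketch, not any vendor's actual stack: the stage functions (`perceive`, `predict`, `plan`, `control`) are hypothetical stubs, and only the loop structure and the roughly-10 Hz rate come from the description above.

```python
import time

# Hypothetical stage stubs -- in a real stack each is a large subsystem.
def perceive(sensor_frame):
    # 1. Perception: label objects in the frame ("person", "bus", ...).
    return [{"kind": "pedestrian", "pos": (8.0, 1.5)}]

def predict(objects):
    # 2. Prediction: estimate where each object will be in 1, 2, 5 seconds.
    return [{**obj, "future": [obj["pos"]] * 3} for obj in objects]

def plan(tracks):
    # 3. Planning: choose the safest path given the predicted tracks.
    return {"speed_mps": 6.7, "steer_deg": 0.0}

def control(trajectory):
    # 4. Control: turn the plan into actuator commands.
    return {"steer_deg": trajectory["steer_deg"], "throttle": 0.2}

def run_loop(sensor_frames, hz=10):
    """Run the see-think-act loop at a fixed rate (about 10x per second)."""
    period = 1.0 / hz
    commands = []
    for frame in sensor_frames:
        start = time.monotonic()
        objects = perceive(frame)
        tracks = predict(objects)
        trajectory = plan(tracks)
        commands.append(control(trajectory))
        # Sleep off whatever is left of this cycle's 100 ms budget.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
    return commands

cmds = run_loop(sensor_frames=[{}, {}, {}])
```

Each stage hands its answer to the next, exactly as in the list above; the fixed period is what lets the rest of the system notice quickly when a stage stops producing answers.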

Pipeline in action

From ‘sees a pedestrian’ to ‘applies brake’

What does that loop actually look like in a moment that matters? Walk through what happens when someone steps off the curb in front of the car.

The whole sequence — see, label, predict, plan, act — fits inside a single human reaction time. By the time a human passenger has even noticed the pedestrian, the car has already eased off the accelerator.

  1. T = 0.0 s (Sensors): Camera + lidar see a person at the curb, 8 m ahead.

  2. T = 0.1 s (Perception): The pipeline labels the object: "pedestrian, on sidewalk."

  3. T = 0.2 s (Prediction): "She is looking down — she might step off the curb."

  4. T = 0.3 s (Planning): Slow to 15 mph; pre-charge the brakes; widen the gap.

  5. T = 0.4 s (Control): Eases off the accelerator.

  6. T = 0.6 s (Loop again): She steps off. The car re-perceives and re-plans: full stop.

  7. T = 0.8 s (Control): Brakes applied. The car stops 1 m short of her.
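One way to sanity-check the "fits inside a single human reaction time" claim is to encode the timeline as data and compare the total elapsed time against a commonly cited driver brake-reaction time of roughly 1.5 seconds (that benchmark figure is an assumption, not from the text above).

```python
# The timeline above, as (seconds since first detection, stage) pairs.
timeline = [
    (0.0, "sensors"),
    (0.1, "perception"),
    (0.2, "prediction"),
    (0.3, "planning"),
    (0.4, "control"),
    (0.6, "loop again"),
    (0.8, "control"),
]

# Assumed benchmark: typical driver brake-reaction time, ~1.5 s.
HUMAN_REACTION_S = 1.5

elapsed = timeline[-1][0]
print(f"Full see-to-brake sequence: {elapsed} s "
      f"({'inside' if elapsed < HUMAN_REACTION_S else 'outside'} "
      f"a typical human reaction time)")
```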

What it remembers

HD maps — driving with the cheat sheet

Most self-driving services don't operate blind. Before they enter a city, mapping vehicles drive every road and build a centimeter-accurate 3D model of every lane line, curb, and sign. When the car is actually driving, it compares what it sees in real time against that pre-built map. The result: it always knows exactly which lane it's in, where the next stop sign is, and what the road geometry will be 200 m ahead.
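The matching step can be illustrated with a toy example: if the car knows where a few landmarks sit in the pre-built map and sees those same landmarks in its live scan, the average displacement between the two gives its own position in the map frame. This is a deliberately simplified 2-D sketch with made-up coordinates; real systems run full scan matching over dense lidar point clouds.

```python
def localize(map_landmarks, live_scan):
    """Estimate the car's map-frame position by comparing live landmark
    sightings (car-relative) against their known map coordinates."""
    dxs = [m[0] - s[0] for m, s in zip(map_landmarks, live_scan)]
    dys = [m[1] - s[1] for m, s in zip(map_landmarks, live_scan)]
    # With no rotation error, the car's position is the mean displacement.
    return (sum(dxs) / len(dxs), sum(dys) / len(dys))

# The HD map says a stop sign and two curb corners sit here...
map_pts = [(102.0, 55.0), (98.0, 60.0), (105.0, 62.0)]
# ...and the live lidar sees them here, relative to the car.
scan_pts = [(2.0, 5.0), (-2.0, 10.0), (5.0, 12.0)]

print(localize(map_pts, scan_pts))  # car is at (100.0, 50.0) in the map frame
```

This is the "live scan + pre-built map = known position" equation from the diagram: the map contributes the absolute coordinates, the scan contributes the car-relative ones, and matching them pins the car down.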

[Diagram: 1. live lidar scan + 2. pre-built HD map = 3. matched — knows lane, position, geometry]

Two of everything that matters

Backups for the backups

A self-driving system that fails halfway through a turn is much more dangerous than one that doesn't try. So safety-critical components are duplicated: two compute units, two power supplies, two steering actuators, two braking systems. If a primary fails, the backup takes over and the car pulls over safely instead of stopping in traffic.
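At its core, the failover logic is a health-checked switch that latches onto the backup and swaps the goal to a safe pull-over. A hypothetical sketch of the pattern; the class and field names here are invented for illustration.

```python
class Unit:
    """A stand-in for one redundant component (compute, power, steering...)."""
    def __init__(self, name, ok=True):
        self.name, self.ok = name, ok
    def healthy(self):
        return self.ok
    def act(self, request):
        return (self.name, request)

class RedundantPair:
    """Route commands to the primary; on a health-check failure, latch
    onto the backup and request a safe pull-over, not a dead stop."""
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup
        self.failed_over = False

    def command(self, request):
        if not self.failed_over and self.primary.healthy():
            return self.primary.act(request)
        self.failed_over = True  # latch: never flap back to a faulty unit
        return self.backup.act({**request, "goal": "pull_over_safely"})

steering = RedundantPair(Unit("steering_a"), Unit("steering_b"))
print(steering.command({"steer_deg": 3.0}))  # handled by steering_a
steering.primary.ok = False                  # primary fault detected
print(steering.command({"steer_deg": 3.0}))  # steering_b takes over, pulls over
```

The latch matters: once a primary has failed mid-maneuver, switching back to it would be exactly the "fails halfway through a turn" scenario the redundancy exists to prevent.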

Primary → backup pairs:

- Compute: Compute A; if A fails, Compute B
- Power: Power A; if A fails, Power B
- Steering: Steering A; if A fails, Steering B
- Brakes: Brakes A; if A fails, Brakes B