From Asterisk Magazine, Issue 13, March 2026:
Picture a fall afternoon in Austin, Texas. The city is experiencing a sudden rainstorm, common there in October. Along a wet and darkened city street drive two robotaxis. Each has passengers. Neither has a driver.
Both cars drive themselves, but they perceive the world very differently.
One robotaxi is a Waymo. From its roof, a mounted lidar rig spins continuously, sending out laser pulses that bounce back from the road, the storefronts, and other vehicles, while radar signals emanate from its bumpers and side panels. The Waymo uses these sensors to generate a detailed 3D model of its surroundings, detecting pedestrians and cars that human drivers might struggle to see.
In the next lane is a Tesla Cybercab, operating in unsupervised full self-driving mode. It has no lidar and no radar, just eight cameras housed in pockets of glass. The car processes these video feeds through a neural network, identifying objects, estimating their dimensions, and planning its path accordingly.
This scenario is only partially imaginary. Waymo already operates, in limited fashion, in Austin, San Francisco, Los Angeles, Atlanta, and Phoenix, with announced plans to operate in many more cities. Tesla launched an Austin pilot of its robotaxi business in June 2025, albeit using Model Y vehicles with safety monitors rather than the still-in-development Cybercab. The outcome of their competition will tell us much about the future of urban transportation.
The engineers who built the earliest automated driving systems would find the Waymo unsurprising. For nearly two decades after the first automated vehicles emerged, a consensus prevailed: To operate safely, an AV required redundant sensing modalities. Cameras, lidar, and radar each had weaknesses, but they could compensate for each other. That consensus is why those engineers would find the Cybercab so remarkable. In 2016, Tesla broke with orthodoxy by embracing the idea that autonomy could ultimately be solved with vision and compute alone, without lidar — a philosophical stance it later embodied in its full vision-only system. What humans can do with their eyeballs and a brain, the firm reasoned, a car must also be able to do with sufficient cameras and compute. If a human can drive without lidar, so, too, can an AV… or so Tesla asserts.
This philosophical disagreement will shortly play out before our eyes in the form of a massive contest between AVs that rely on multiple sensing modalities — lidar, radar, cameras — and AVs that rely on cameras and compute alone.
The stakes of this contest are enormous. The global taxi and ride-hailing market was valued at approximately $243 billion in 2023 and is projected to reach $640 billion by 2032. In the United States alone, people take over 3.6 billion ride-hailing trips annually. Converting even a fraction of this market to AVs represents a multibillion-dollar opportunity. Serving just the American market, at maturity, will require millions of vehicles.
Given the scale involved, the cost of each vehicle matters. The figures are commercially sensitive, but it is certainly true that cameras are cheaper than lidar. If Tesla’s bet pays off, building a Cybercab will cost a fraction of what it will take to build a Waymo. Which vision wins out has profound implications for how quickly each company will be able to put vehicles into service, as well as for how quickly robotaxi service can scale to bring its benefits to ordinary consumers across the United States and beyond....