Last week, around 3:30 am, a couple of California Highway Patrol officers spotted a man sleeping in the driver’s seat of a Tesla Model S going 70 mph down Highway 101 in Palo Alto. They moved behind the car and turned on their siren and lights. When the driver didn’t respond, the cops went beyond their standard playbook.
Figuring the Tesla might be using Autopilot, they called for backup to slow traffic behind them, then pulled in front of the car and gradually started braking. And so the Tesla slowed down, too, until it was stopped in its lane.
“Our officers’ quick thinking got the vehicle to stop,” says CHP public information officer Art Montiel. The officers arrested the driver, identified in a police report as 45-year-old Alexander Joseph Samek of Los Altos, for driving under the influence of alcohol.
Neither the cops nor Tesla has confirmed whether the Model S had Autopilot engaged at the time. It seems likely it was, though, since the vehicle was staying in its lane and responding to vehicles around it, even though its driver didn’t wake up until the cops knocked on his window.
Tesla clearly tells its customers who pay the extra $5,000 for Autopilot that they are always responsible for the car’s driving, and that they must remain vigilant at all times. Driving drunk is illegal. And the vehicle’s sorta-self-driving tech may have prevented a crash. But if Autopilot did allow a slumbering and allegedly drunk driver to speed down the highway, it brings up another question: Is Elon Musk’s car company doing enough to prevent human abuse of its technology?
It’s a long-standing but still-relevant criticism. Last year, a National Transportation Safety Board investigation into the 2016 death of an Ohio man whose Tesla hit a semi-truck while Autopilot was engaged concluded that Tesla bore some of the blame. When the oncoming truck turned across the path of the Tesla, the sedan didn’t slow down until impact. “The combined effects of human error and the lack of sufficient system controls resulted in a fatal collision that should not have happened,” NTSB Chairman Robert Sumwalt said at the time.
Since then, Tesla has restricted how long a driver can go without touching the steering wheel before receiving a warning beep. If they don’t respond, the system will eventually direct the car to stop and turn on its hazard lights. That makes this incident a bit confusing, as Musk noted in a tweet.
The sensors in the steering wheel that register the human touch, though, are easy to cheat, as YouTube videos demonstrate. A well-wedged orange or water bottle can do the trick. Posters in online forums say they have strapped weights onto their wheels and experimented with Ziplock bags and “mini weights.” For a while, drivers could even buy an Autopilot Buddy “nag reduction device,” until the feds sent the company a cease-and-desist letter this summer.
All of which makes the design of similar systems offered by Cadillac and Audi look rather better suited to the task of keeping human eyes on the road, even as the car works the steering wheel, throttle, and brakes. Cadillac’s Super Cruise includes a gumdrop-sized infrared camera on the steering column that monitors the driver’s head position: Look away or down for too long, and the system issues a sharp beep. Audi’s Traffic Jam Pilot does the same with an interior gaze-monitoring camera.
Humans being human, they will presumably find ways to cheat those systems (perhaps borrowing inspiration from Homer Simpson), but it’s clear a system that monitors where a driver is looking is more robust for this purpose than one that can be fooled by citrus.
It’s possible Tesla will give it a shot. The Model 3 comes with an interior camera mounted near the rearview mirror, and though the automaker hasn’t confirmed what it’s for, don’t be surprised if an over-the-air software update suddenly gives those cars the ability to creep on their human overlords.