The dangers of driverless vehicles

Automation complacency sets in when the person at the wheel of a self-driving car puts their faith in AI and forgets the importance of human intelligence

If you’re at the wheel of a car, driving in the dark with your eyes on your cellphone as you stream your favourite TV serial, you make for a dangerous driver. That’s a truism that applies as much to self-driving cars as it does to regular ones, a lesson Rafaela Vasquez learned the hard way. She was the operator at the wheel in what is possibly the world’s first fatal crash caused by a self-driving car, in March last year. It resulted in 49-year-old Elaine Herzberg being mowed down as she crossed a street with her bicycle in Tempe, Arizona.

While further investigations showed that Herzberg was crossing outside a designated pedestrian crosswalk and had tested positive for drugs, these were not reason enough for her death. After all, no self-driving car manufacturer has ever said that killing careless pedestrians was a natural fallout of its technology.

The car that caused the accident was a modified SUV being tested by Uber. After the accident, Uber suspended testing of its self-driving cars for nine months. A year on, the news was rife with reports of how little Americans trusted self-driving cars. The literature on these cars is now peppered with sceptics, as it becomes increasingly clear that robots can’t mimic everything that humans do.

On November 19, over a year and a half after the accident, the National Transportation Safety Board, an independent agency of the US federal government, released a brief summary of its findings on the crash, in which it mentioned the words ‘automation complacency’ three times. Automation complacency is widely defined as poorer detection of system malfunctions under automation than under manual control.

The Uber at the heart of the case was completing the second loop of its test route when the crash occurred. The findings of the investigation show that the operator’s eyes were off the road for a prolonged period; more precisely, they were on her phone, streaming a TV serial. The automated driving system detected the pedestrian 5.6 seconds before impact and continued to track her until the crash, but did not accurately classify her as a pedestrian or predict her path. By the time a crash was certain, the situation exceeded what the vehicle’s response mechanism could handle.

The investigations found that Uber had actually removed safety features from the original vehicle, such as forward collision warning and automatic emergency braking, relying on the vehicle operator instead. Had the operator been looking at the road, she would likely have braked in time to avoid the collision. Instead, she looked up from her phone and tried steering away from the pedestrian only one second before the crash, by which time it was too late.

“The vehicle operator’s prolonged visual distraction, a typical effect of automation complacency, led to her failure to detect the pedestrian in time to avoid the collision. The Uber Advanced Technologies Group did not adequately recognise the risk of automation complacency and develop effective countermeasures to control the risk of vehicle operator disengagement,” said the report. 

While Tesla has promised to roll out fully autonomous vehicles by the end of 2019, one of its cars on Autopilot hit a Connecticut state police vehicle in the first week of December. The driver was reportedly checking on his dog at the time of the accident.

Though Tesla says its vehicles have the hardware for full self-driving, drivers must currently keep their hands on the wheel. However, some have come up with ways to fool the car into thinking their hands are on the wheel by wedging objects into it to mimic the pressure of human hands. Pictures of Tesla drivers using water bottles, and in one case an orange, to press down on the wheel are popular on social media. In January 2018, a Tesla in San Francisco came to a halt on the Bay Bridge when its drunk driver did not have his hands on the wheel for a long stretch of time. He was arrested and charged with driving under the influence of alcohol. The police ended their tweet on the incident saying: “Car towed (No it didn’t drive itself to the tow yard).”

MIT robotics professor Rodney Brooks wrote a nuanced essay on the manner in which drivers interact with each other and with pedestrians on the road, giving signals as they pass by. He wrote of how hard it is for driverless cars to mimic these social signals.

While the conversation on how safe self-driving cars are largely focuses on humans, the jury is still out on whether driverless cars could save more animals than regular cars. “The main challenge is that nature is unpredictable, and it isn’t clear yet how the rigid calculations of computers will handle the sometimes erratic behavior of animals,” says a piece on Smithsonian.com titled ‘Will Driverless Cars Mean Less Roadkill?’

The Washington Post recently wrote of how a human driving a car would understand the difference in the behaviour of a deer, a moose and a cow straying by the side of the road, something a machine may not. The point was to explain the sheer complexity of programming a self-driving car. Reason enough for drivers at the wheel of a self-driving car not to take their eyes off the road, at least in the near future.
