Self-driving vehicles have been a staple of the science fiction genre since the late 1800s. By 1939, General Motors was already testing the building blocks of this technology and wowing guests with its progress at the New York World’s Fair. Today, multiple tech companies and motor vehicle manufacturers are racing to release the first mass-produced autonomous vehicle. However, many technical and ethical concerns must be addressed before self-driving cars can safely share public roads.
To no one’s surprise, innovative rideshare giants Lyft and Uber have adopted an “ask for forgiveness, not permission” approach, testing self-driving vehicles across the country. In 2017, Waymo announced a partnership with Lyft to develop a safe autonomous vehicle that could operate in place of rideshare drivers. At the end of last year, Waymo was testing a semi-finished product in Washington, Texas, Michigan, and Georgia.
Developers have been marketing autonomous vehicles as a means of reducing traffic fatality rates in the United States. But are self-driving cars safe?
The Safety Record of Self-Driving Cars
Eight motor vehicle companies reported a total of 49 self-driving-car collisions in 2018. Last March, for example, a self-driving Uber vehicle struck and killed a pedestrian as she was crossing the street. Although the safety driver, who was watching television on her phone instead of the road, tried to swerve at the last second, it wasn’t enough to avoid the collision. The National Transportation Safety Board (NTSB) later released a preliminary report on the accident, which detailed how the vehicle’s computer system had classified the victim as an “unrecognized object.”
There was also an incident last July, when an autonomous Google car was rear-ended after it failed to correct a preventable situation. While the rear-ending driver is usually at fault for an accident, a concerning pattern is emerging with self-driving vehicles. As Jack Stewart, a reporter for Wired, explains, “Combine that with the fact that the computer was in charge in 22 of those 28 rear-end crashes, and you have reason to believe that the AVs are doing something that makes cars behind them more likely to hit them. Maybe that’s driving herkily-jerkily (as we experienced in a Cruise car in San Francisco in November 2017) or stopping for no clear reason (as we experienced in an Uber car in Pittsburgh last year).” In his article, Stewart goes on to explain that self-driving cars aren’t worse than the average driver, but they do tend to take actions that most humans wouldn’t.
Programmers and researchers are also concerned because the sensors in these vehicles are vulnerable to poor weather, defective traffic signals, and intense sunlight. The vehicles also struggle to read traffic signs that have been tagged with stickers or graffiti.
These vehicles are also vulnerable to hacking, which puts future passengers at risk. For instance, back in 2015, Fiat Chrysler had to recall over 1 million Jeep Cherokees equipped with connected vehicle technology after two security researchers demonstrated that they could remotely take control of a vehicle. According to reports from the demonstration, the researchers were able to manipulate the vehicle’s brakes, windshield wipers, radio, and air conditioning system, all while the unlucky driver was stuck going 70 mph on the highway.
In an ideal world, researchers, manufacturers, programmers, and lawmakers would work together to resolve as many safety concerns as possible before releasing a model to the public. Unfortunately, political scrutiny and public concern haven’t stopped Lyft and Waymo from offering self-driving vehicles on the rideshare company’s app.
The Question of Liability
Lawmakers, including Florida Governor Ron DeSantis, are already preparing for a future where AIs are responsible for operating motor vehicles. Last month, the governor signed a law that allows autonomous vehicles to “operate in this state regardless of whether a human operator is physically present in the vehicle.”
But if the AI is in control, who can be held responsible when a self-driving vehicle harms a motorist or pedestrian? Unfortunately, personal injury cases involving autonomous vehicles represent a largely untested legal frontier with an alarmingly small body of precedent.
Because claims will be judged on a case-by-case basis, the following parties could be held liable after a collision:
- The safety driver
- The manufacturer
- The programmer
- A third party
- The city
Explore Your Legal Options Today
Call the car accident lawyers at Pratt Clay, LLC if you require legal representation after a motor vehicle collision. Our skilled and experienced legal team can investigate your case, identify the negligent parties, and craft a litigation strategy that aims to maximize your recovery. With our guidance, you can pursue compensation that reimburses your injury-related debts and safeguards your standard of living.