Criminal law is intended to protect the public (and the state) from wrongdoing by punishing wrongdoers, usually with imprisonment or fines, deterring those who might attempt such acts in the future. Civil actions for negligence not only provide a similar deterrent function but also a remedy for individuals (or their families) against wrongdoers: "restorative justice."
Money can't replace a human life, but monetary awards do provide some solace. And while knowledgeable lawyers don't like to discuss it, there is an element of revenge, and it has a role. Negligence law also sets a standard of care for acceptable conduct in today's society. So did pedestrian Elaine Herzberg's family get what they deserved? Has any social message been sent to operators of AI-powered vehicles about what constitutes acceptable behavior? Do we know what standard of care counts as responsible conduct?
Humans vs. Algorithms
Attention turned to the driver, Rafaela Vasquez, and away from Uber, which created and ran the driverless-car program. First, the county attorney absolved Uber of criminal misconduct and gave it the go-ahead to continue its testing program, which it did immediately (before eventually shutting the division down). Uber then quickly and quietly settled a civil lawsuit for an undisclosed amount, sweeping its social message off the public radar.
Rafaela Vasquez's plea deal forestalled the trial her lawyers had planned, one that would have shifted liability to Uber and exposed the failures identified by the National Transportation Safety Board (NTSB). And there were many failures:
- The technology failed to identify Herzberg as a pedestrian and therefore did not apply the brakes.
- Uber maintained a poor safety culture.
- Uber had relaxed the requirement of two test pilots in each vehicle (a practice that had kept drivers more vigilant and helped them comply with the cell-phone ban policy).
- Lone drivers were assigned the same monotonous route for shifts spanning several hours.
The NTSB report found that driver distraction was the likely cause of the accident. Vasquez was said to have been watching The Voice on her cell phone, against company policy. Vasquez claims she was monitoring the company's systems on her phone, as required. Confusingly, even that monitoring was deemed a reckless act, she conceded in her plea bargain. It seems to me that by willingly shielding Uber and pleading guilty to reckless endangerment, she somehow accepted personal responsibility for the company's bad policies. The crime is defined as "recklessly endangering another person with a substantial risk of imminent death or physical injury." If her story is credited, her distraction was mandated by company policy.
The victim here may not have been the most sympathetic of plaintiffs. The autopsy showed blood levels of controlled or illegal substances; a toxicology report found methamphetamine and marijuana. These findings would reduce damages by whatever share a jury assigned to her contributory negligence. According to reports, the victim was homeless but had just left street life behind, and her friends described her as someone who cared about those around her. She was reportedly known as "El" or "Ms. El" in the homeless community. A seasoned lawyer could have won the Herzberg family a large award and sent a powerful message to society. Alternatively, such a victim may not be the kind of person prosecutors want to spotlight.
Much of the discussion on this issue centers around the argument that AI-powered vehicles are, by the sheer numbers, safer than human-driven ones. Tech enthusiasts point out that human driving errors kill more than 40,000 people a year, far exceeding AI's record. In contrast, government officials say the technology is neither ready nor safe for public use. The discussion is therefore framed as: imperfect technology, but safer than humans and better than the status quo.
The Calculus of Precautions
These arguments miss the point, at least under basic negligence theory. Once Uber voluntarily took on the role of providing self-driving vehicles, it assumed a duty to provide them in the safest form a reasonably prudent person or company would devise. It doesn't matter whether the underlying technology is safer than a "normal" human. If the technology can be made safer, and without undue hardship in doing so, then, according to Judge Learned Hand's famous formula, it is Uber's duty to do so. This is the standard of care expected of any ordinary, sensible individual or business. If you fail to act prudently and cause an accident, you will be held responsible.
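The Hand formula reduces to a simple comparison: an actor is negligent if the burden of a precaution (B) is less than the probability of harm (P) multiplied by the magnitude of the loss (L). A minimal sketch follows; the dollar figures and probabilities are entirely hypothetical, chosen only to illustrate how the comparison works, not drawn from the Uber case.

```python
def negligent_under_hand_formula(burden: float,
                                 probability: float,
                                 loss: float) -> bool:
    """Learned Hand formula: negligence exists when the burden (B)
    of taking a precaution is less than the expected harm, i.e. the
    probability of harm (P) times the magnitude of the loss (L)."""
    return burden < probability * loss

# Hypothetical: a driver-attentiveness alert costs $500 per vehicle (B),
# against a 0.1% annual chance (P) of a $10M loss (L).
# Expected harm = 0.001 * 10,000,000 = $10,000, so skipping
# the $500 precaution would be negligent under the formula.
print(negligent_under_hand_formula(500, 0.001, 10_000_000))

# Hypothetical: a $50,000/year second safety driver (B) against the
# same $10,000 expected harm is not required by the formula alone.
print(negligent_under_hand_formula(50_000, 0.001, 10_000_000))
```

The point of the formula is that "safer than a human" is irrelevant; what matters is whether a cheaper precaution was available relative to the expected harm.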
As my friend and colleague Chuck Dinerstein has written here and here, airline pilot training offers examples of what should reasonably be expected. Upgraded training and regular recertification, standard for similar technology used in aircraft, should be essential. This could and should be enacted by government. Absent legislation, private negligence actions would have a similar effect, with the plaintiff serving as a "private attorney general." Angry juries hand down large awards, topped with punitive damages, and the sums often reach the financial stratosphere. If the award is large enough, it is a powerful deterrent to corporate misconduct.
Perhaps the most significant safety gap was Uber's failure to address drivers' "automation complacency," the tendency to pay less attention to automated processes that require little human input. Superimpose this well-known phenomenon on "highway hypnosis" and you create a potentially explosive danger. That a death caused by an Uber test vehicle was foreseeable puts the responsibility for prevention on Uber. Even an occasional chime, or a seat programmed to vibrate at random intervals, might have shaken the driver out of her complacency. Having drivers call the central office and talk to a monitor might also have helped.
The resolution of the civil case, perhaps even more than the prosecutor's indifference, has hampered meaningful inquiry into what this new technological age should require.
"We don't want the story of the first fatal self-driving car crash to be a lie, or it's going to be a fight. We need answers."
– Bryan Walker Smith, law professor, University of South Carolina
There is much to be learned from this scenario. AI technology permeates every aspect of our lives. At a recent session on ChatGPT at a conference of law professors, a majority of legal-writing lecturers and law librarians advocated teaching law students to incorporate ChatGPT into their regular legal studies, and rightly so. They noted the tendency of the "Law-Bot" to "hallucinate," for example, by fabricating citations. "Hallucinate" is a charitable label bestowed on bots; if a human did the same thing, it would be called fraud or misrepresentation.
Students will be taught to verify the Law-Bot's output, but no one mentioned students' own tendency to slip into "automation complacency." This is surely to be expected in any profession that introduces automation into its field. Automation complacency, the erosion of human involvement, attention, and concern, is a trap we need to be mindful of and take steps to prevent, not only in driving but in any endeavor we share with algorithms.
She was allowed to listen to the program.
This formula, endorsed in United States v. Carroll Towing, holds that an individual or entity may be found negligent if the burden of taking precautions is less than the probability of harm multiplied by the potential magnitude of that harm. In other words, if the cost of preventing harm is less than the expected harm (probability times severity), reasonable precautions must be taken to avoid being deemed negligent.