The legal quandaries of AI

It’s an age-old legal precept that holds — at least in principle — in all human societies: If you kill or injure someone, you’re liable.

But the rise of artificial intelligence (AI) poses new quandaries for judicial authorities in this matter. What if a robot causes a serious or fatal accident? A judge can’t order it to pay compensation or put it in prison.

There’s growing discussion of the legal ramifications of AI in judicial and political circles, as well as of its implications for industry and insurance. Experts agree that within a few decades, the use of self-driving cars, intelligent industrial robots and other autonomously acting machines will be as common as electricity and phones are today.

Practical questions of civil law still dominate, mainly with respect to fully or semi-autonomous cars.

“In order for highly and fully automated driving to be widely accepted by society, the ultimate bearer of liability must always be clarifiable,” says Joachim Mueller, director of property and casualty insurance for Munich-based insurance services provider Allianz Deutschland AG.

Since 2017, Germany’s road traffic act has stipulated what kind of data a car must — and may — store. “What still isn’t regulated, though, is how the lawfully authorised parties can technically and organisationally get access to this data,” Mueller says. “This is a matter of equal interest to law enforcement authorities and insurers.

“Many questions arise here. Does there have to be a time-consuming read-out of data at the car dealership for all parties concerned? How will it be ensured that incriminating data isn’t destroyed? What happens to the data if a vehicle is scrapped, and who deletes the data if the vehicle is sold?”

Allianz is pushing to keep car manufacturers from having sole control over the data. “In my view, the data should be in the hands of a neutral, independent third party — a data trustee — to enable all authorised parties to get access to them under the same legal conditions,” Mueller says.

The rise of robots has consequences for criminal law too. Legal scholars are already weighing far-reaching questions, since criminal law is based on the principle of individual guilt, yet even the most intelligent machine has neither consciousness nor a conscience.

“Directed at individual persons, criminal law has difficulty keeping up with the development of autonomous machines and artificial intelligence,” notes Susanne Beck, a professor of criminal law and philosophy of law at the University of Hanover in Germany.

“In criminal law, the operator of a machine normally bears responsibility,” she says. “If Google gives you false information and you base a decision on it, you’re responsible.”

The situation is simpler in civil law. “A car owner today is already also liable for accidents in which he or she may not have been involved,” points out Nicolas Woltmann, an assistant at the Research Centre for Robot Law at the University of Wuerzburg in Germany.

Unlike in criminal law, “a company, too, can be liable in civil law,” he says. “Only people can commit crimes, not corporate bodies or machines.”

But would it be appropriate to penalise a person when AI is at the wheel and in control?

“The whole point of these machines is not having to act and decide yourself,” Beck says. “If you still bear full criminal liability as the driver, then you don’t need an autonomous vehicle, because you’ve got to concentrate just as you would if you were driving yourself. I see that as a problem.”

To address the problem, ethical and legal experts have introduced the concept of “meaningful human control.”

“Before a person is held criminally accountable, a close look must be taken at whether he or she exercised any control over the machine,” Beck says.

“To a considerable extent we’ll probably have to forgo judicial decisions on accidents caused by machines in the future,” she predicts, conceding that “it can definitely be unsettling for a society if no sentences are possible.”

Her proposal: “Other solutions will have to be sought, for example victim-offender mediation or solutions in other legal systems — civil law, for instance.”

Most experts aren’t in favour of fundamental changes to criminal law. “Legal scholars have tentatively begun to discuss whether we have to abandon the principle that only people can behave culpably,” Woltmann says. “The prevailing opinion is that at the present stage of technological development, there’s no need to discard the established legal norm.”

But there’s indeed “a grey area where we can’t forecast how jurisprudence will develop in the future,” he adds. “For society at large, a grey area may be acceptable if the number of [traffic] accidents falls as self-driving cars, owing to their overall benefits, become more prevalent.” – dpa
