AI AND THE LAW

Autonomous vehicle fatalities are only the tip of the iceberg when it comes to how the law will approach the development of AI.

The increasing role of AI in the economy and society presents both practical and conceptual challenges for the legal system. Many of the practical challenges stem from the manner in which AI is researched and developed and from the basic problem of controlling the actions of autonomous machines.

The conceptual challenges arise from the difficulties in assigning moral and legal responsibility for harm caused by autonomous machines, and from the puzzle of defining what, exactly, artificial intelligence means. Some of these problems are unique to AI; others are shared with many other postindustrial technologies. Taken together, they suggest that the legal system will struggle to manage the rise of AI and ensure that aggrieved parties receive compensation when an AI system causes harm.

First, let’s handle definitions. “Artificial intelligence” refers to machines capable of performing tasks that, if performed by a human, would be said to require intelligence. To distinguish between AI as a concept and AI as a tangible technology, I will use the term “AI system” to refer to the technology itself. For AI based on modern digital computing, an AI system includes both hardware and software components. It may thus refer to a robot, a program running on a single computer, a program running on networked computers, or any other set of components that hosts an AI.

It might be difficult for humans to maintain control of machines that are programmed to act with considerable autonomy. A loss of control may occur through any number of mechanisms: a malfunction, such as a corrupted file or physical damage to input equipment; a security breach; the superior response time of computers compared to humans; or flawed programming.

That last one, flawed programming, raises the most interesting issues because it creates the possibility that a loss of control might be the direct but unintended consequence of a conscious design choice. Control, once lost, may be difficult to regain if the AI is designed with features that permit it to learn and adapt. These are the characteristics that make AI a potential source of public risk, as noted by technologists such as Elon Musk.

Pei Wang makes an important distinction in his article “The Risk and Safety of AI,” from A General Theory of Intelligence: the vocabulary we use to describe what an AI system does is not a claim about what it is. As with “autonomous” and “learning,” the term “experience” is not meant to imply consciousness, but rather to serve as useful shorthand for the actionable data that an AI system gathers regarding its environment and the world in which it exists.

Regardless, we are entering an era where we will rely upon autonomous and learning machines to perform an ever-increasing variety of tasks. At some point, the legal system will have to decide what to do when those machines cause harm and whether direct regulation would be a desirable way to reduce such harm. This suggests that we should examine the benefits and drawbacks of AI regulation sooner rather than later.
