Assessing Liability in Artificial Intelligence Litigation



 

New Jersey Law Journal         May 22, 2023

 

By Jonathan Bick | Jonathan Bick is counsel at Brach Eichler in Roseland, and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace and Rutgers Law Schools.

 

Artificial intelligence (AI) applications have proliferated on the internet. Increasingly, AI applications result in legal difficulties primarily associated with privacy, discrimination, product liability and negligence. Addressing these difficulties usually begins with a legal determination of which party may sue or be sued. Since an AI is not a legal person and internet transactions may span more than one jurisdiction, this initial assessment is non-trivial. However, an understanding of AI protocols may make such an assessment feasible.

 

Since legal difficulties are most often matters which may be resolved in court, AI internet legal difficulties may best be addressed by first determining which entities associated with an internet transaction may be sued and by whom. Such a determination is a function of specific jurisdictional statutory language.

 

While an internet transaction may span more than one jurisdiction, one jurisdiction has preemptory authority to adjudicate cases and issue orders (see, e.g., Ruhrgas AG v. Marathon Oil, 526 U.S. 574 (1999)). The growing body of judicial precedent in which American courts have based personal jurisdiction upon defendants’ internet activities has made this task progressively more difficult.

 

 

Personal jurisdiction is premised on the notion that a defendant should not be subject to the decisions of a foreign or out-of-state court without having “purposely availed” himself of the benefits that the forum state has to offer. More specifically, when an internet user knowingly and intentionally uses a server to facilitate an internet transaction outside their jurisdiction, such awareness may constitute sufficient purposeful availment to create personal jurisdiction (see, e.g., J. McIntyre Machinery v. Nicastro, 564 U.S. 873 (2011), and Calder v. Jones, 465 U.S. 783 (1984)).

 

To make this jurisdictional determination, a court first must decide “where” internet conduct takes place, and then what it means for internet activity to have an “effect” within an area. More specifically, as disclosed in the American Law Institute’s Restatement (Second) of Conflict of Laws § 37 (1971), “A state has power to exercise judicial jurisdiction over an individual who causes effects in the state by an act done elsewhere with respect to any cause of action arising from these effects unless the nature of the effects and of the individual’s relationship to the state make the exercise of such jurisdiction unreasonable.”

 

For the purpose of assessing “unreasonableness” in internet transactions, a court must consider conflicts of law. More specifically, a court must determine whether it has authority to resolve the conflicts of law that arise when the statutes of multiple jurisdictions are inconsistent with one another.

 

 

Courts may only exercise jurisdiction over “legal persons.” Although legal person status is merely a concept designating an entity’s capacity to have rights and obligations, and nothing stands in the way of granting that status to an AI, no United States jurisdiction has done so. Consequently, once a jurisdiction is selected for an internet AI legal difficulty, an assessment is required to determine whether an entity responsible for the AI’s production and distribution is legally liable for the damage caused by that AI.

 

A review of the appropriate jurisdiction’s statutes addressing direct and indirect legal liability is required to determine whether an entity responsible for an AI’s production and distribution is legally liable for the damage caused by that AI. Such an assessment should also consider the AI as an agent of a legal person. Understanding what an AI is and how it works is essential to a proper assessment.

 

While AI lacks a uniform or universal definition, it is usually thought of as a computer program which enables problem-solving. More specifically, AI is an expert system which makes predictions or classifications based on input data.

  

AI is embodied as a sequence of instructions (protocols) telling a computer to transform input data into a desired output. AI differs from traditional computer software in that it can re-write its own code independently of the code previously defined by the programmer. Whereas traditional algorithms limit any code re-write to criteria and biases previously coded by the programmer, AI code re-writes are experience driven and hence free of programmer criteria and biases.
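The distinction above can be sketched in a minimal, hypothetical Python example (not from the article, and deliberately simplified): a traditional algorithm applies a criterion fixed in advance by the programmer, while a learning routine derives its criterion from the input data it is given.

```python
# Illustrative sketch only: contrasting a programmer-defined rule
# with behavior derived from "experience" (training data).

# Traditional algorithm: the decision criterion is hard-coded.
def classify_fixed(value):
    return "high" if value > 10 else "low"  # threshold chosen by the programmer

# Simple learning routine: the criterion is computed from the data,
# not written by the programmer.
def learn_threshold(samples):
    return sum(samples) / len(samples)  # threshold is the mean of the data

training_data = [2, 4, 18, 20]
threshold = learn_threshold(training_data)  # 11.0, determined by the data

def classify_learned(value):
    return "high" if value > threshold else "low"

print(classify_fixed(11))    # "high", per the programmer's fixed rule
print(classify_learned(11))  # "low", per the data-derived rule
```

The same input is classified differently because the learned criterion reflects the data rather than the programmer’s prior choices, which is the sense in which AI output may be said to be independent of the programmer.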

 

Proper AI protocol results in output, including internet activity, which is independent of the programmer. Consequently, the output of an AI may not necessarily be attributed to the programmer who writes the AI protocol. For example, rights in work product created by an AI do not vest in the AI programmer, because the AI created the work free of programmer criteria and biases (see 17 U.S.C. §201(a) of the Copyright Act, which provides that copyright in a work vests initially in its author).

 

The definition of a legal person allows for the attribution of legal liabilities to artificial persons. One such example of a legal person that is not human is the artificial person known as a corporation. Corporations are independent of their shareholders and employees and may be liable for legal difficulties resulting from owning property and entering into contracts.

 

Furthermore, artificial persons, as legal persons, may be granted legal rights. Corporations, for example, are considered legal persons for purposes of the 14th Amendment, providing them with the protection of the Due Process Clause (see First National Bank of Boston v. Bellotti, 435 U.S. 765 (1978)).

 

Artificial persons do not necessarily enjoy all the rights of people. For example, corporations are not citizens because they are not natural persons, which means that they do not receive protection under the Privileges and Immunities Clause.

 

Since internet transactions may extend beyond the United States, it should be noted that some countries treat corporations as real legal persons with rights similar to humans. These countries include but are not limited to China, France, Germany and Spain. Other countries, such as Italy, treat labor unions as people for legal purposes.

 

Once a court has acknowledged jurisdiction, statutes in that jurisdiction will determine whether the AI is a legal entity for the purpose of litigation and hence liable for a bad act. If the AI is found not to be a legal entity, and hence not sui juris, at least one additional party must be identified in order to allow the litigation to proceed.

 

AI is normally embodied in a program. In the past, programmers and entities which distribute programs have been found liable for harm caused by their programs. These parties may be liable for AI-related legal difficulties. Alternatively, if such parties are not identifiable in a timely manner, then a “John and Jane Doe” action might be considered in an effort to identify the unknown bad actors.

 

In sum, artificial intelligence applications result in legal difficulties primarily associated with privacy, discrimination, product liability and negligence. Addressing these difficulties usually begins with a determination of which party may sue or be sued. Since an AI is not a legal person and internet transactions may span more than one jurisdiction, this initial assessment is crucial.