Addressing AI Legal Difficulties With Existing Legal Options



 

By Jonathan Bick | February 27, 2024 | New York Law Journal

 

Jonathan Bick is counsel at Brach Eichler in Roseland, and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Rutgers Law School and Pace Law School.

 

It is often stated that the law lags behind technology. In the case of AI-related difficulties, however, the damage related to the use of AI is not so revolutionary that all existing legal precepts must be abandoned and replaced with new legal principles, or that completely new laws are required.

 

Copyright

 

Consider some typical examples of AI-related legal difficulties. First, there are AI-related copyright difficulties such as those found in the recent case Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO (N.D. Cal. 2023), which involved direct copyright infringement, vicarious copyright infringement and violation of the Digital Millennium Copyright Act (DMCA). In Andersen, a group of artists sued the creator of an AI software program for copying copyrighted images without permission and using the images to train an AI.

The court found direct infringement based on the fact that the artists’ copyrighted works were used to train Stable Diffusion and that training images derived from their works were then stored in or incorporated into Stable Diffusion. Other courts have also addressed AI copyright infringement matters. Consider, for example, Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc., No. 1:20-cv-00613 (D. Del. 2022).

 

AI Malfunction

 

Next, consider legal difficulties due to an AI malfunction that resulted in harm. For example, in Mata v. Avianca, Inc., No. 1:22-cv-01461-PKC (S.D.N.Y. 2023), an AI’s output was incorrect, which resulted in damages payable by the user. In that case, an attorney filed court documents with erroneous content generated by an AI. The incorrect content resulted in sanctions for citing non-existent cases in the plaintiff’s brief opposing a motion to dismiss and for submitting non-existent judicial opinions.

 

Many other examples of AI-related legal difficulties have been successfully addressed by courts using existing legal options, including cases involving invasion of privacy and property rights; patent, trademark, and copyright infringement; libel and defamation; and violations of state consumer protection laws, among others.

 

Each of the current AI cases has a common thread: when an AI caused harm, non-AI entities were found to be responsible. This finding results from the fact that existing legal solutions are suitable for resolving AI-related legal difficulties.

Since AIs are capable of causing harm but cannot be legal entities, they are not held accountable by court action. Several current and future possibilities exist to resolve these AI difficulties. Current options involve identifying indirect liability (i.e., third-party responsibility for an AI’s bad acts). These might include negligence per se, respondeat superior, vicarious liability, strict liability, or intentional conduct. Future options include, but are not limited to, changing the law to make an AI a legal person (such as a corporation) and/or changing the law to make AI programming an ultra-hazardous activity (i.e., imposing strict liability).

 

Legal Action

 

Regarding current legal action, when pursuing a legal claim against a party that is not sui juris (such as an AI, because it is not a legal person and hence is not able to make contracts, sue others, or be sued), third parties may be required to compensate the victim for financial and non-financial damages. Tort law permits damaged parties to pursue indirect liability claims depending on the facts and circumstances of the claim.

 

Programmers who write traditional software algorithms might be liable for the bad acts of an AI. The reasoning is that the AI is designed to accomplish goals specified by, and to receive directions from, a human being. Thus, it has been suggested that either direct or vicarious liability may be applied to hold the human programmer who wrote the software algorithms liable for the damages caused by the AI agent.

 

Regrettably, programmers have a limited capacity to predict an AI’s conduct, especially if the AI functions as a neural network that can learn patterns of behavior independently. Certainly, AI programmers and/or the entities for which they work can be held responsible if they have intentionally created AIs that commit crimes. If they have done so unintentionally, they might benefit from the lack of mens rea.

 

Future criminal statutes or common-law findings related to AI responsibility might depend upon stare decisis and cite precedent drawn from matters associated with the criminal liability of slave owners in the antebellum U.S. South. Holding the AI itself responsible for crimes would insulate AI programmers from criminal charges.

 

Courts allow indirect liability and thereby transfer responsibility from the entity that causes the harm to a third party. Also known as secondary liability, this type of liability is often applied in intellectual property cases in which one party facilitates the infringement of another. Secondary liability can also arise when a supervisory party is responsible for, and has control over, the actions of its subordinates or associates.

 

Traditional algorithm programming provides evidence of the intent of the programmer. Such intent may be gleaned from the code and the paper trail left behind by the algorithm. See Amanat v. SEC, 269 F. App’x 217 (3d Cir. 2008), where the court identified the programmer’s manipulative intent and held the programmer liable for the algorithm’s misconduct.

 

While both traditional and AI programming may result in the same algorithm and achieve the same result, the traditional programmer will know the algorithm before the program is executed, while the AI programmer may not. This is an essential factor in determining a programmer’s or developer’s liability.

 

Liability

 

An AI software programmer or developer could be liable for AI-related damages in three separate ways. First, on the basis of individual accountability, in the event that the AI was intentionally or recklessly programmed in such a way that it would violate a statute or cause harm to another.

 

Second, an AI software developer could be liable through the doctrine of indirect perpetration. This would bridge the gap in cases where software developers, acting like puppet masters, perpetrate violations of law or harm others through third-party actions.

 

Third, an AI software developer could be held liable if he or she “aids, abets or otherwise assists” in the commission of a statutory violation or of harm to another, including by providing the means for its commission.

 

Ordinary negligence applies when a software developer does not use the degree of care that a reasonably prudent person would have used when developing software. As found by the court in United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947), the reasonableness of the defendant’s conduct is frequently understood as comparing or balancing the costs and benefits of the defendant’s actions.
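This balancing is commonly summarized by the Hand formula associated with Carroll Towing (a simplified restatement, not the court’s verbatim language): a defendant is negligent if B < P × L, where B is the burden of taking adequate precautions, P is the probability that harm will occur, and L is the gravity of the resulting injury. Applied to software development, if a relatively inexpensive safeguard would have prevented a probable and serious harm, the failure to implement it may be deemed unreasonable.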

 

If it can be determined that there is something a software developer should have done, and that it would reasonably have been expected of him by all others involved in the use and distribution of the software, then he can be found liable for negligence and required to pay damages to the plaintiff.

 

Negligence claims may be available in situations where product liability claims are not. An example of such a finding appears in Griggs v. BIC, 981 F.2d 1429 (3d Cir. 1992).

 

A computer malpractice cause of action is another option. Malpractice is a failure to employ the higher standard of care that a member of a profession should employ. Data Processing Services v. L.H. Smith Oil, 492 N.E.2d 314 (Ind. Ct. App. 1986), is one of the few cases in which a court imposed professional liability on computer programmers. However, attempts to impose a professional malpractice standard on the IT industry and create a higher duty of care have usually been unsuccessful. Consider, for example, F&M Schaefer v. Electronic Data Systems, 430 F. Supp. 988 (S.D.N.Y. 1977).

 

The third type of liability is strict liability. Restatement (Second) of Torts §402A (1965) imposes liability on the seller of any product that is deemed unreasonably dangerous: “Manufacturers and sellers of defective products are held strictly liable, (that is, liable without fault) in tort (that is, independent of duties imposed by contract) for physical harms to person or property caused by [a] defect.”

 

Strict liability is usually applied only in extreme cases, where a product defect is obvious. In the case of AI designers or AI programmers who may be considered to be rendering professional services, their duty is limited to exercising the skill and knowledge normally possessed by members of that profession or trade. In order to hold such parties to the same type of strict product liability, they would have to expressly warrant that there are no defects in their services.

 

In short, none of the above options is likely to secure the accountability of AI software distributors for statutory violations or harm to others involving AI. Alternatively, entities that distribute programs have been found liable for harm caused by those programs. These parties may therefore be liable for AI-related legal difficulties.

 

In addition to actions against third parties for facilitating AI torts, properly constructed Internet terms of use agreements may allow successful legal actions against third parties for facilitating contract breaches. A terms of use agreement is the same thing as a Terms and Conditions or Terms of Service agreement; each defines the rules for the use of a website. Since AIs are only computer programs, a third party normally must assist an AI program in gaining access to an Internet site.

 

The third party’s action in facilitating the AI may result in that third party being bound by the terms of use agreement. Alternatively, the third party may be liable for the AI’s breach of the terms of use agreement due to its legal relationship to the AI (such as a guarantor), or the third party might have induced the AI (i.e., the breaching party) to breach the contract. Note that while the AI’s breach of the terms of use agreement is a contract breach, inducing a party to a contract to breach that contract is a tort, making the inducing party liable in damages to the non-breaching party.

 

In addition to general causes of action related to intent and negligence, some intellectual property causes of action arise from technological acts (such as copying) without any intent or negligence. For example, copyright owners may sue a user of generative AI software for using software that has been trained on the copyright owner’s copyrighted data.

 

This litigation risk is highest for users of generative AI software whose output is an image substantially similar to the copyrighted works of a particular visual artist, or whose output includes a watermark or other insignia indicating that the model was trained using the copyrighted data of the visual artist or image source.