Artificial Intelligence Cybersecurity Difficulties Require Amending Legal Solutions


 

December 01, 2023  New Jersey Law Journal

 

By Jonathan Bick. Jonathan Bick is counsel at Brach Eichler in Roseland and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace and Rutgers law schools.

 

Increasingly, artificial intelligence (AI) is used to generate realistic phishing emails, deploy malware, and create convincing internet content. AI failures are also likely to become more frequent and more severe over time. While business and technological options exist to address AI cybersecurity issues and ameliorate their adverse effects, the protection AI programmers currently enjoy means that cybersecurity legal defenses must be amended.

 

Security practices for guarding against traditional and AI cyberattacks are similar. Technological and business measures for both include risk assessment, network defenses, data-access and password restrictions, backup systems, encryption, insurance, and employee training. Legal defenses against AI cyberattacks, however, differ from those against traditional cyberattacks because of existing AI programmer immunity.

 

Currently, if a court finds that a software programmer did something to cause cybersecurity difficulties, or reasonably should have done something to prevent damage by a program, then the programmer may be liable for the damage done by the software. In short, a measure of cybersecurity is conventionally achieved by holding the programmer liable to the entity damaged by a cyberattack (or even by the threat of such liability). This recourse is not available in the event of an AI cyberattack.

 

AI is software. Both AI and traditional software are directed by a set of instructions known as an algorithm. The only difference between traditional software and AI software is that traditional software algorithms are written by programmers, while AI software algorithms are written by computers (not people).
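The distinction can be illustrated with a minimal sketch. Everything below is hypothetical and for illustration only: the filter functions, the link-count threshold, and the sample data are assumptions, not drawn from any actual system. In traditional software a person authors the decision rule directly; in AI software the machine derives its own rule from training examples.

```python
# Illustrative sketch only: contrasts a programmer-written rule with a
# machine-derived rule. All features, thresholds, and data are hypothetical.

# Traditional software: a person writes the decision logic explicitly.
def programmer_written_filter(num_links: int, has_urgent_wording: bool) -> bool:
    """Flag an email as phishing using rules a human chose."""
    return num_links > 3 and has_urgent_wording

# AI software: the machine derives its own decision rule from examples.
def train_ai_filter(examples):
    """Learn a link-count threshold from labeled (num_links, is_phishing) pairs."""
    phishing = [links for links, label in examples if label]
    legitimate = [links for links, label in examples if not label]
    # The decision rule (this midpoint threshold) is computed, not hand-written.
    threshold = (sum(phishing) / len(phishing) + sum(legitimate) / len(legitimate)) / 2
    return lambda num_links: num_links > threshold

training_data = [(1, False), (2, False), (6, True), (8, True)]  # hypothetical examples
ai_filter = train_ai_filter(training_data)

print(programmer_written_filter(5, True))  # rule authored by a person  -> True
print(ai_filter(5))                        # rule derived by the machine -> True
```

In the second case, no person ever wrote the threshold that decides the outcome; the machine computed it from the examples it was given.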

 

When computers rather than people direct cyber software attacks, novel legal defenses arise. These defenses ameliorate or eliminate the liability of AI programmers.

 

Traditional cybersecurity bad actors are programmers. Programmers are legal persons: people who write the algorithms that direct software to attack and infiltrate digital systems, and who are motivated by money, politics, or some other malicious intent.

Traditional algorithm programming reflects the intent of the programmer. In such cases, courts can identify the programmer’s manipulative intent and hold the programmer liable for the algorithm’s misconduct (see Amanat v. SEC, 269 F. App’x 217 (3d Cir. 2008)).

 

AI cybersecurity bad actors are computers. Computers are not legal entities.

 

From a legal perspective, legal action may be taken only against legal entities. Consequently, traditional causes of action against traditional cybersecurity bad actors are ineffective against AI cybersecurity difficulties. For example, a cause of action for breach of an internet site’s terms of use is valid against the programmer who wrote the algorithm directing a computer to engage in a cyber bad act (extortion, identity theft, email hacking, digital surveillance, stealing hardware, mobile hacking, or a physical security breach). However, if a computer rather than a programmer wrote the same algorithm directing a computer to engage in the same cyber bad act, there would be no legal entity associated with the bad act, and hence no bad actor.

 

AI programmers merely train the AI software; once trained, the computer writes the algorithms that tell the software what to do and when to do it. Thus, a variety of defenses have already been effectively asserted by defendants in generative-AI litigation.

 

Common themes include lack of standing, reliance on the “fair use” doctrine, and the legality of so-called “data scraping.” Recently, the court in Dinerstein v. Google, No. 20-3134 (7th Cir. 2023), affirmed, for lack of standing, the dismissal of breach-of-privacy claims brought on behalf of a putative class of patients of the University of Chicago Medical Center (UCMC).

 

This decision provided AI programmers a complete defense to breach-of-privacy claims arising from an AI software program’s bad acts, because the programmers used personal data only to “train” AI models, which did not constitute a legally cognizable harm. In short, “training” did not inflict a plausible, concrete injury sufficient to establish standing to pursue the programmers.

 

The Dinerstein plaintiffs alleged that UCMC breached its contractual privacy arrangements with its patients, invaded their privacy, and violated Illinois’ consumer protection statute by using several years of anonymized patient medical records to train an AI model to anticipate patients’ future healthcare needs. The court dismissed the plaintiffs’ claims for lack of standing and for failure to state a claim, noting that the plaintiffs failed to establish either damages associated with the disclosure of their anonymized patient data or the defendants’ tortious intent.

 

Other plaintiffs have tried to hold AI programmers liable for their AI programs by noting that AI training required the programmers to use copyright-protected data. This attempt has generally failed. AI programmers have successfully argued that their work is training and that these training practices fall under the “fair use” doctrine because their programming transforms copyright-protected content into a product that is different in purpose and character from the protected content (see Kelly v. Arriba Soft, 336 F.3d 811 (9th Cir. 2003); Authors Guild v. Google, 804 F.3d 202 (2d Cir. 2015); and Google v. Oracle Am., 141 S. Ct. 1183 (2021)).

 

In short, training generative AI on copyrighted works is usually fair use because it falls into the category of non-expressive use.

Finally, plaintiffs attempted to hold AI programmers liable for their AI programs’ bad acts by observing that the programmers relied on scraping internet content to develop training datasets for their products, and that said scraping violated the terms of use of internet sites that did not permit scraping. This tactic also failed.

 

The court in hiQ Labs v. LinkedIn, 938 F.3d 985 (9th Cir. 2019), found that LinkedIn’s arguments against hiQ based on the Computer Fraud and Abuse Act (CFAA), the Digital Millennium Copyright Act (DMCA), California Penal Code Section 502(c), and the common law of trespass did not prevent hiQ Labs from “scraping” publicly available data.

 

It is possible that legislators will provide a legal solution that directly addresses the fact that AI software is computer-directed. As in the case of traditional tort difficulties, liability should be imposed regardless of fault. For example, strict liability is imposed on defendants whose activities are abnormally dangerous and on defendants whose products are defective.

Statutory liability might vary significantly from state to state. For instance, some states, like Florida, allow theories of product liability and strict liability. Some, like Indiana, do not allow actions for breach of implied warranty. And others, like New York, add a “failure to warn” category.

 

It might be argued that making AI programming subject to product liability or strict liability will retard AI development. That impediment might be ameliorated by implementing a risk-tiering system wherein high-risk AI systems (those that might result in significant damage) would be subject to a strict liability regime, while all other AI systems would be subject to a presumption of the AI operator’s fault-based liability.