Initiating Artificial Intelligence Transactions: A Legal Analysis
February 24, 2026 New York Law Journal
By Jonathan Bick. Jonathan Bick is counsel at Brach Eichler in Roseland, and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace Law School and Rutgers Law School.
The legal view of artificial intelligence (AI) transactions is evolving. However, such development does not change the fact that the legal assessment of an AI transaction is best begun by applying the fact that AI is software. Doing so is likely to make the analysis faster, more accurate, and more robust.
The fact that AI is software should automatically trigger consideration of an additional fact: the only difference between AI software and traditional software is who writes the algorithm (the instructions telling the computer what to do). Unlike traditional software algorithms, AI software algorithms are not written by people.
Understanding the “not written by people” feature of AI software is the best lens for viewing AI transactions. The fact that an AI is not a legal entity prompts the proper legal analysis of AI transactions. More precisely, because an AI is not a legal entity, it cannot be a party to a legal proceeding; thus a party other than the AI must be identified as a potential defendant.
AI commits a wide range of bad acts causing injuries that are already occurring today. These harms range from immediate, tangible issues, such as discrimination and job displacement, to damage to the health and safety of people caused by AI directions to machines through the Internet of Things.
Thus, when an AI causes legal difficulties but cannot be held liable, proper AI transaction legal analysis should shift the liability consideration to others if they knew or should have known of the risks associated with the AI transaction. Liability for AI “bad acts” may rest with developers, deployers, or users (all of whom are legal entities) rather than with the AI system itself, which lacks legal personhood.
The entities most likely to be liable for AI-related harm are those related to the AI transaction that caused the harm. The most common grounds include: acts during AI programming, such as flawed algorithms, defective training data, or reckless design; events during AI operation, such as misusing the AI, failing to supervise it, or ignoring instructions; and arrangements during use in commerce, such as product liability for a defective AI.
Under U.S. law, secondary liability for wrongful or harmful acts committed by AI is not explicitly addressed in federal statutes, but principles of secondary liability derived from common law and statutory frameworks provide guidance. These principles, as applied to AI users, depend on the specific context of the harm and the user’s role in the AI’s operation or deployment.
Similarly, the Restatement (Third) of Torts points to the concept of “reasonable foreseeability” in product liability cases, focusing on whether harm caused by a product defect was reasonably foreseeable to the manufacturer. This notion ensures that liability attaches only when the harm falls within the scope of risks that the defendant could reasonably anticipate.
Like liability in traditional software transactions, AI transaction liability depends on the role the AI software played in the bad act, both from a foreseeability standpoint and from a materiality perspective. Courts may limit liability where the harm is deemed too remote or speculative, or where the AI was not a significant factor in the harm.
However, unlike liability in traditional software transactions, which attaches to the algorithms embedded in the software, a court cannot hold an AI liable. A court can hold human programmers and software developers liable for harm caused by algorithms when it can be proven that the harm resulted from direct intent or from negligence in the design, development, testing, or maintenance of the software, or when the software is deemed defective under strict liability standards. And while a court can hold a traditional program’s developer liable when the developer breaches a duty of care, such as by failing to follow industry standards or ignoring foreseeable risks, a court may not do the same for an AI.
Courts and statutes assess whether the harm caused by AI was a reasonably foreseeable consequence of the defendant’s actions or omissions. Such assessment may be guided by a myriad of software stare decisis findings. Prior software decisions are likely to be applied across jurisdictions to establish duty, causation, and the scope of liability, ensuring that liability is appropriately limited to harms that could have been anticipated by a reasonable person or entity in the defendant’s position.
Additionally, the legal view of AI transactions must be examined through the lens of federal law and general principles of secondary liability. Such an examination is focused by the particular type of legal transaction and AI use involved.
Secondary liability in the context of copyright law, for example, includes contributory and vicarious liability. Contributory liability arises when a party has knowledge of infringing activity and induces, causes, or materially contributes to it. Vicarious liability, on the other hand, does not require knowledge but imposes liability when a party has the right and ability to supervise the infringing activity and derives a direct financial benefit from it.
The court in MGM Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005), set out guidelines for secondary and contributory liability in Internet software transactions. These principles, while developed in the Internet context for copyright matters, could analogously be applied to AI users if their actions meet the requisite elements of knowledge, inducement, or control.
The fact that AI is software is also useful for AI transaction analysis because it arguably allows certain Internet software transaction statutes and cases to be applied to AI software transactions. More specifically, AI users may be viewed in a manner similar to a group of Internet users, namely, Internet service providers. Under Communications Decency Act (CDA) § 230, 47 U.S.C. § 230, providers or users of interactive computer services are immune from liability for content created by third parties.
Arguably, since AI transactions are heavily reliant on the Internet during development and use, AI users may be immune from certain liabilities. By the same reasoning, if an AI user is deemed to have acted as an “information content provider” by materially contributing to the development of harmful content, the user may lose this immunity.
Finally, the fact that AI is software is useful for AI transaction analysis from a state law perspective. In the past, states have enacted laws specifically addressing software, such as the California spam law (see, for example, Business and Professions Code § 17529.5 and SB 186, 2003 Cal. Stats. ch. 487). Now California has enacted Cal. Civ. Code § 1714.46, which explicitly addresses AI software transaction-related harm, stating that a defendant cannot avoid liability by claiming that the AI autonomously caused the harm. However, defendants may still present affirmative defenses, including evidence regarding causation or foreseeability. In short, while an AI, not being a legal entity, may not be held liable for a bad act that causes harm, an AI user may be held liable if their use of AI foreseeably contributes to harm.