April 23, 2025 | New Jersey Law Journal
Addressing Artificial Intelligence Agent Legal Difficulties
AI agent transactions in unsupervised settings, with no human mediators, have resulted in legal difficulties. Fortunately, existing legal theory and case law are usually sufficient to address said difficulties.
By Jonathan Bick

Jonathan Bick is counsel at Brach Eichler in Roseland, and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace and Rutgers law schools.
Social and economic Internet communications facilitated by artificial intelligence (AI) are increasingly becoming a feature of personal and business transactions. A new category of agency has sprung into existence, namely the AI agent. As in the case of traditional agency transactions, AI agent transactions in unsupervised settings, with no human mediators, have resulted in legal difficulties. Fortunately, existing legal theory and case law are usually sufficient to address said difficulties.
AI agents are a type of artificial intelligence software configured to understand and respond to customer inquiries without human intervention. Since AI agents are a form of software, traditional agency case law and statutes related to software provide a suitable framework in which to find the solution.
More precisely, the legal basis for applying existing agency law to AI agents results from the fact that AI agents are accessed and used in the same manner as traditional software. In short, an agency relationship is formed when the AI software is accessed and executed, and the relationship is thereafter regulated under agency law.
Internet users use AI agents to buy and sell goods. Agentic AI, for example, is an AI system that can act autonomously. More specifically, agentic AI systems can make decisions, take actions, and achieve goals after merely being prompted to do so. Amazon’s “Buy for Me,” a feature in its Amazon Shopping app, uses agentic AI to help customers buy items from other brands while remaining within Amazon’s app.
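For readers unfamiliar with how such software operates, the following is a minimal sketch of an agentic loop: the user supplies a single prompt, after which the software plans and executes steps on its own. Every name in it (plan_next_step, TOOLS, and so on) is a hypothetical illustration, not any vendor’s actual implementation.

```python
# Minimal, hypothetical sketch of an "agentic" loop: one user prompt,
# then the software decides and acts on its own until the goal is met.

def plan_next_step(goal: str, history: list) -> dict:
    """Stand-in for a model call that chooses the next action."""
    if not history:
        return {"tool": "search_catalog", "args": {"query": goal}}
    if history[-1]["tool"] == "search_catalog":
        return {"tool": "place_order", "args": {"item": history[-1]["result"][0]}}
    return {"tool": "done", "args": {}}

# Illustrative "tools" the agent may invoke on the user's behalf.
TOOLS = {
    "search_catalog": lambda query: ["best match for " + query],
    "place_order": lambda item: "order confirmed: " + item,
}

def run_agent(goal: str) -> list:
    """Act on the user's behalf until the planner says it is finished."""
    history = []
    while True:
        step = plan_next_step(goal, history)
        if step["tool"] == "done":
            return history
        result = TOOLS[step["tool"]](**step["args"])
        history.append({"tool": step["tool"], "result": result})

print(run_agent("wireless headphones"))
```

The legally salient point of the sketch is that after the initial prompt, every subsequent decision is taken by the software, not the user, which is exactly the feature that raises the agency questions discussed below.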
AI agents are also used to determine eligibility for legal entitlements, such as healthcare benefits. Healthcare industry participants, including UnitedHealth and CVS/Aetna, use agentic AI to automate tasks and thereby streamline processes, reducing the need for, and the cost of, healthcare professionals.
Such AI agent use raises traditional agency law issues: What is the standing of these transactions within the existing legal framework? What is the legal status of these transactions? And who is responsible for the bad acts resulting from these transactions, to name a few.
An agent (AI or traditional) is something able to take actions. Agents may be distinguished from other entities (legal or otherwise) in that they do things, as opposed to having things happen to them; thus, AI agents are agents. Like Internet browsers, AI agents take actions upon receiving prompts from users, whether directing us to an Internet site or engaging in an e-commerce transaction. Therefore, the agency status and legal relationship of AI agents arise from the simple fact that the AI agent takes action on its user’s behalf.
From a legal perspective, an agent is someone or something that acts on behalf of another who has prompted said act. “Agency” is a legal relationship in which one entity engages another to perform a service under circumstances that involve delegating some discretion over that service, that is, a service that requires choices.
AI is software, and so are AI agents. Non-AI software has been found to be an agent under both contract and tort law. More specifically, courts have applied traditional tests and reasoning to cases in which persons modify software to bypass data entry edit checks, or to distribute obscene material electronically (see, for example, Shea v. Reno, 930 F. Supp. 916 (S.D.N.Y. 1996)).
Contract and tort law are as applicable to electronic tasks as to traditional tasks, and using AI software as an agent is just like using any tool or entity with predictable behavior to accomplish a task. As technology changed, traditional law incorporated the new technology into an existing legal framework. Consider, for example, contract law. It had well-established rules related to offer and acceptance (Restatement (Second) of Contracts §§ 24, 50 (1981)) before the introduction of the fax machine. After the facsimile emerged as a new tool for message delivery, contract law applied the pre-existing legal framework to situations involving the use of a fax machine (Establishment Asamar Ltd. v. Lone Eagle Shipping Ltd., 882 F. Supp. 1409 (1995)).
Some may argue that AI software differs from traditional software in a single significant respect, namely how the algorithm (the series of steps that a computer executes to produce a desired output) is created. The difference is simply that traditional computer software algorithms are written by programmers (who are legal entities), while AI computer software algorithms are written by computers (which are not legal entities).
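To make that distinction concrete, here is a purely hypothetical sketch contrasting a rule authored line-by-line by a programmer with one derived by a machine from data; the function names, the claim-approval scenario, and the threshold logic are all illustrative assumptions, not any real system’s code.

```python
# A rule authored by a programmer: every step was written by a legal person.
def approve_claim_handwritten(amount: float) -> bool:
    return amount <= 500.0  # threshold chosen by the human author

# A rule derived by the machine from example data: no person directly
# authored the resulting decision boundary.
past_amounts = [100.0, 250.0, 400.0, 900.0, 1200.0]  # hypothetical past claims
past_approved = [True, True, True, False, False]      # hypothetical outcomes

approved = [a for a, ok in zip(past_amounts, past_approved) if ok]
denied = [a for a, ok in zip(past_amounts, past_approved) if not ok]
learned_threshold = (max(approved) + min(denied)) / 2  # computed from data

def approve_claim_learned(amount: float) -> bool:
    return amount <= learned_threshold  # boundary the machine, not a person, set

print(approve_claim_handwritten(450.0), approve_claim_learned(450.0))
```

In both cases the user runs the software the same way, which is why, as discussed next, the difference in authorship does not change the agency analysis.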
However, this distinction is not relevant to the application of agency law, because agency law provides an adequate framework to surmount this difference. Like any software application (traditional or AI), the software must be activated (or accessed, in the case of the Internet) for some purpose. AI software, like traditional software, will then be used to accomplish specific tasks for the licensee. Consequently, the software (traditional or AI) occupies the legal role of the “agent,” and the software user occupies the legal role of the “principal” (Restatement (Third) of Agency § 1 (2006)).
This agent/principal relationship is formed whether or not the parties themselves intended to create an agency. Therefore, AI agents are legally considered agents within an agency relationship. Consequently, liability can be attributed to the actions of said AI agents, binding the AI agent user to the appropriate legal duties.
At least four theories exist for giving traditional software the status of agent from a legal perspective. Additionally, courts have regularly considered the role of software in a transaction when attributing liability. Still other theories compare software agency to the agency rights and duties that arose from the agency activities associated with slaves and with corporations; each of these may be used to support applying traditional agency law to AI agents.
The court in McEvans v. Citibank, 408 N.Y.S.2d 870 (N.Y. Civ. Ct. 1978), found a different standard of responsibility when an action is done by an unattended machine as opposed to the same act done by a human. In that case, the court held a bank responsible for money allegedly lost during an automatic teller transaction. The court placed a greater burden on the bank when a software agent was used because the bank authorized a software agent to perform human teller transactions without the same level of intelligence as a human agent (the teller).
In short, courts are willing to allow software (traditional and AI) to be agents in ordinary consumer transactions. Extending this allowance to transactions conducted by autonomous software will result in the application of liability theories available under contract (where the actions of an agent bind the principal to third parties) or tort (where the principal may be vicariously liable for the actions of the agent). Courts are willing and able to treat AI software as “legal agents” and apply agency law. Once AI software agents are viewed as having legal status upon formation of the agency relationship, it follows that liability can be attributed to the actions of said AI software agents, and the AI software users will consequently be held responsible under existing agency law.