Fact, Fiction or Privacy Infringement: Artificial Intelligence and Deepfake Liability

 

Most agree that internet deepfake content is widespread and may be used to manipulate the public, attack personal rights, infringe intellectual property and cause personal data difficulties. However, little agreement exists as to who is legally liable for internet AI deepfake content.

 

June 20, 2023, New Jersey Law Journal

By Jonathan Bick

Jonathan Bick is counsel at Brach Eichler in Roseland, and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace and Rutgers Law Schools.

 

Artificial intelligence-generated pictures, videos and voices distributed via the internet are called deepfakes. Most agree that internet deepfake (deep learning + fake) content is widespread and may be used to manipulate the public, attack personal rights, infringe intellectual property and cause personal data difficulties. However, little agreement exists as to who is legally liable for internet AI deepfake content.

 

Since 2017, publicly available software has combined AI deep-learning capabilities with internet content to create hyper-realistic, entirely fabricated content from as little as a single photo or sound bite of a source. While some uses of such AI deepfake software are relatively harmless, such as fake images of a person posing with a celebrity, other AI deepfakes, involving pornography for example, may be defamatory or criminal. The problem is exacerbated by the speed and low cost of internet distribution.

 

More specifically, five distinct types of internet AI deepfake liability exist. The first is intellectual property infringement. Intellectual property rights are generally owned by the people who create or use the property. An internet AI deepfake may be used to impersonate the owner of that intellectual property, resulting in infringement liability.

 

The second is the use of an internet AI deepfake to exploit images of people, usually for pornographic purposes, often on websites of which the people depicted are unaware. An internet AI deepfake in this case results in liability for invasion of privacy by appropriation, that is, the unauthorized use of another person’s likeness for commercial purposes. In addition to pornography, such liability can arise when an internet AI deepfake is used without consent to sell a product.

 

A third type of liability resulting from internet AI deepfakes is damage to a person’s reputation. Deepfakes are regularly used to spread misinformation, resulting in defamation liability.

 

A fourth type of liability arises from using internet AI deepfakes to compromise data protection and privacy, creating liability for damages from the unauthorized disclosure, modification, substitution or use of sensitive data. Deepfakes are used to gain access to personal data collected and stored by online businesses, employers and the government. Having one’s identity virtually stolen via an internet AI deepfake compounds all the liabilities associated with data breaches.

 

A fifth type of liability associated with internet AI deepfakes is deceptive trade practices and unfair competition. This occurs when people who purchase or use a product or service are harmed by incorrect information, typically misinformation and promotional marketing materials circulated through a deepfake-generated spokesperson.

 

There is no federal law addressing either deepfakes generally or deepfake pornography specifically. Consequently, the ability to bring criminal or civil charges against an individual differs between states, and conduct that is illegal in one state may not be illegal in another.

 

More specifically, only Virginia, Texas and California have enacted deepfake-related legislation. Virginia’s and most of California’s legislation refer directly to pornographic deepfakes, while Texas’ and some of California’s legislation refer to a specific subset of informational deepfakes. Even absent such statutes, however, internet AI deepfake liability may arise from damage caused by false information.

 

For example, an internet user may rely upon an internet AI deepfake for medical, financial or legal advice from a seemingly credible source, such as a well-known person who appears to endorse a product. Damage to that user is actionable.

 

As a threshold matter, people harmed by an internet AI deepfake must identify the party that was the source of the deepfake. Since legal action may be taken only against legal persons, and AI is not a legal person, an action naming an AI as a party will not prevail.

 

Action might be taken against the person depicted in the internet AI deepfake if this person is involved in the production of the deepfake. However, if the person depicted in the deepfake has nothing to do with the content of the deepfake, then action against the depicted party is unlikely to succeed.

 

In common law jurisdictions, internet AI deepfake victims may initiate an action against the deepfake’s creator under one of the privacy torts, the most applicable of which is the “false light” theory. Such an action is generally premised on precedent holding programmers liable for their programs. However, AI programs, unlike traditional programs, are not based on algorithms directly written by a programmer.

 

More specifically, while both traditional software and AI software contain algorithms (i.e., procedures employed for solving a problem or performing a computation), AI algorithms differ from traditional software algorithms. The fundamental difference is that a traditional algorithm’s behavior is fixed by its programmer, so a given input always yields the same output, whereas an AI system’s behavior is learned from training data and can change as the system is exposed to new data. Consequently, AI programmers, unlike traditional programmers, can accurately argue that they did not determine, and are not responsible for, a particular AI output.
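The distinction can be made concrete with a minimal Python sketch (a hypothetical illustration only; the function and class names are invented for this example, and no actual deepfake software works this simply):

```python
# Hypothetical illustration of the two kinds of "algorithm" discussed above.

def traditional_tax(price: float) -> float:
    """Traditional algorithm: the rule is written directly by the
    programmer, so a given input always yields the same output."""
    return round(price * 1.07, 2)  # 7% rate hard-coded by the programmer


class LearnedPredictor:
    """Toy 'AI' model: its behavior is a parameter learned from data,
    not a rule written directly by the programmer."""

    def __init__(self) -> None:
        self.rate = 1.0  # learned parameter; placeholder until trained

    def train(self, prices: list[float], totals: list[float]) -> None:
        # Fit the average observed ratio of totals to prices.
        self.rate = sum(t / p for p, t in zip(prices, totals)) / len(prices)

    def predict(self, price: float) -> float:
        return round(price * self.rate, 2)


model = LearnedPredictor()
model.train([100.0, 200.0], [107.0, 214.0])
print(traditional_tax(50.0), model.predict(50.0))  # 53.5 53.5

# Retraining on new data changes the model's answer for the SAME input,
# even though no programmer edited a single line of code.
model.train([100.0, 200.0], [110.0, 220.0])
print(traditional_tax(50.0), model.predict(50.0))  # 53.5 55.0
```

Because the second program’s behavior flows from its training data rather than from any rule its programmer wrote, retraining changes its answers without anyone editing the code, which is the factual basis for the programmer-responsibility argument described above.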

 

The victim may also initiate an action against the internet AI deepfake’s publisher, the person who communicates the deepfake to others, under one of the privacy torts, again such as the “false light” theory. This is particularly appropriate when the publisher is the same person as the creator. In this instance, the deepfake must be published (communicated or shared with at least one third person) to be actionable, and the plaintiff must prove both that the deepfake falsely represents the plaintiff in a way that would be embarrassing or offensive to the average person and that the publisher acted with “actual malice.”

 

Such an internet AI deepfake action in New Jersey may arise from four distinct causes of action associated with the tort of invasion of privacy: (a) unreasonable intrusion upon the seclusion of another; (b) appropriation of another’s name or likeness; (c) unreasonable publicity given to one’s private life; and (d) publicity that unreasonably places another in a false light before the public (see Bisbee v. John C. Conover Agency, 186 N.J. Super. 335 (App. Div. 1982)).

 

If a deepfake is being used to promote a product or service, the victim may invoke the privacy tort of misappropriation or right of publicity (depending upon the jurisdiction). Under this theory, a successful plaintiff can recover any profits made from the commercial use of their image in addition to other statutory and punitive damages. Misappropriation can be combined with false light where relevant.

 

While there is no New Jersey statute that recognizes a right of publicity, for more than 100 years New Jersey has recognized a common-law right to prevent the unauthorized, commercial appropriation of one’s name or likeness (see Edison v. Edison Polyform Manufacturing, 73 N.J. Eq. 136 (1907)). Courts have confirmed this right of publicity as a privacy right under New Jersey law in McFarland v. Miller, 14 F.3d 912 (3d Cir. 1994), and Canessa v. Kislak, 97 N.J. Super. 327 (Law Div. 1967).

 

Additionally, if an internet AI deepfake makes untrue assertions about a person and those statements demonstrably harm the subject’s reputation, a traditional defamation or libel suit may also prevail. Unlike false light actions, which associate a person with content that is misleading or insinuates falsity, defamation and libel actions concern content that is actually false.