Internet Law as Guide to AI Legislation


March 28, 2024


While laws and regulations may lag innovation, Internet laws and regulations are likely to be valuable guides to Artificial Intelligence legislation. Due to the similarities of Internet technology and AI technology, Internet law options and implementation strategies are expected to be applicable to AI legislation.


By Jonathan Bick | Bick is counsel at Brach Eichler in Roseland and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace Law School and Rutgers Law School.


While laws and regulations may lag innovation, Internet laws and regulations are likely to be valuable guides to Artificial Intelligence (AI) legislation. Due to the implementation similarities between Internet technology and AI technology, as well as AI technology's dependence on the Internet, Internet law options and implementation strategies are expected to be applicable to AI legislation.


The lag between new technology, such as the Internet and AI, and efforts to govern such technology is due in part to innovators' need to move quickly to stay ahead of the competition. Legislative action addressing such innovation is typically stifled at first by attempts to regulate new technologies using legacy laws and regulations, and later by the time-consuming effort to define, and then thoughtfully adopt, rules for the new technology.


Fortunately for those seeking to use Internet law to guide AI legislation, AI technology was immediately preceded by Internet technology. Consequently, lessons learned from the drafting and implementation of Internet law are relatively fresh, and existing legislative bodies have experience drafting such laws.


Equally fortunate for those seeking to use Internet law to guide AI legislation, AI technologies are characteristically dependent on the Internet. The most common uses of AI are Machine Learning (ML) and Generative AI. ML is a type of AI focused on building computer systems that learn from data, and that data is typically harvested via the Internet. Generative AI generates new content that mimics the data on which it was trained, and that generated content is most often distributed via the Internet.
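
To make that Internet dependence concrete, the short Python sketch below (purely illustrative, and not drawn from the article or from any statute) shows the typical shape of such a pipeline: training data is fetched over the Internet and then summarized by a simple statistical stand-in for model training. The URL and function names are hypothetical placeholders.

    # Illustrative sketch only: an ML-style pipeline whose training data is
    # harvested over the Internet. The URL below is a hypothetical placeholder.
    from urllib.request import urlopen
    from collections import Counter

    def fetch_text(url: str) -> str:
        """Download raw text from a public web page (the Internet-dependent step)."""
        with urlopen(url) as response:
            return response.read().decode("utf-8", errors="ignore")

    def learn_word_frequencies(text: str) -> Counter:
        """A stand-in for model training: learn simple word statistics from the data."""
        return Counter(text.lower().split())

    if __name__ == "__main__":
        corpus = fetch_text("https://example.com")   # hypothetical data source
        model = learn_word_frequencies(corpus)       # "training" on harvested data
        print(model.most_common(5))

Generative AI then reverses the flow: content produced by a trained model is most often distributed back over those same networks.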


The Internet allowed new forms of commercial communication, most notably spam: unsolicited and unwanted junk email sent out in bulk to an indiscriminate recipient list. AI can likewise generate uninvited messages, referred to as AI spam. These messages generally are not commercial solicitations; rather, they are messages designed to be shared via messaging apps, social media, or email and intended to resemble human communication.


While the CAN-SPAM Act (15 U.S.C. 7701-7713) was enacted to address the problem of unwanted commercial electronic mail messages, not AI spam, it is likely to be a useful guide to addressing AI spam. More specifically, just as individual states attempted to regulate spam, producing conflicting and contradictory rules and regulations and thereby failing to achieve their desired objectives, states are now attempting to regulate AI individually. These attempts have likewise resulted in conflicting and contradictory rules and regulations and, just as with the state spam statutes, are failing to achieve their desired objectives.

The Internet lesson learned was that federal action was required to ameliorate the difficulties and deficiencies associated with state-by-state regulation of a federal matter. This lesson is applicable to governing AI.


Similarly, the need for federal legislation related to Internet regulation, and applicable to AI regulation, may be seen in the Electronic Signatures in Global and National Commerce Act (E-Sign Act; Public Law 106-229, June 30, 2000), which provides a general rule of validity for electronic records and signatures for transactions in or affecting interstate or foreign commerce. Just as individual state statutes were not easily applicable to interstate Internet contracts, state AI safety and liability statutes may not be easily applicable to interstate AI transactions. Hence, the format of the E-Sign Act might be a useful guide for regulating interstate AI transactions.


The Internet, just like AI, has caused all sorts of intellectual property difficulties. For example, since copyright infringement does not require intent, only copying (see Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., Inc., 499 U.S. 340, 361 (1991)), and the Internet is governed by protocols that require copying, initially virtually all Internet server owners were exposed to liability for copyright infringement.

More specifically, if an Internet user made or distributed unauthorized copies of copyrighted material, that Internet user was violating a federal law and could face severe civil or criminal penalties.


This difficulty was addressed by the Digital Millennium Copyright Act (DMCA, Pub. L. 105-304), which amended U.S. copyright law to address important parts of the relationship between copyright and the Internet.


Most prominently, the DMCA provides Internet service providers some protection from liability for online infringement if certain conditions are met, and it removed the need for copying as an element of copyright infringement: the DMCA added the circumvention of technology used to protect copyrighted materials as an act of copyright infringement, in lieu of the act of copying.

The Internet lesson learned was that new technology may require defining which new transactions (acts) constitute violations of existing statutes. This lesson is applicable to governing AI. In particular, neither state nor federal statutes treat AI transactions as ultrahazardous activities or impose strict liability for AI products.


Redefining liability to include AI has already been considered by the European Commission, which released a proposal for an Artificial Intelligence Liability Directive (“AI Liability Directive”) in 2022. The AI Liability Directive deals with claims for harm caused by AI systems, or the use of AI, adapting non-contractual civil liability rules to AI.


The Internet also resulted in disputes that were not reasonably resolved by existing judicial or administrative bodies. One example was disputes related to uniquely Internet elements, namely domain names. More precisely, a domain name may be identical or similar to a trademark in which a party other than the domain name owner has rights.


Since traditional resolution forums, such as the courts or alternative dispute resolution (ADR), did not provide cost-effective solutions, a new resolution process limited to domain names was created: the Uniform Domain-Name Dispute-Resolution Policy (UDRP). The UDRP is a process established by the Internet Corporation for Assigned Names and Numbers (ICANN) for the resolution of disputes regarding the registration of Internet domain names. The UDRP currently applies to all generic top-level domains and must be used by all domain name registrars.


The Internet lesson learned was that new technology may require a new resolution forum. AI-related difficulties may also be better addressed by a new AI forum. This is particularly true since an AI-related difficulty may be so immaterial to any one individual as to constitute a mere nuisance (thus not warranting litigation), yet, due to its impact on many individuals, constitute a harm to the public. Regulating AI developers' ML use of the Internet, for example, might justify the creation of a new forum.