Using Private AI Tools Ameliorates Legal Ethical Difficulties

Published on January 14, 2026, in the New York Law Journal

By Jonathan Bick

Jonathan Bick is counsel at Brach Eichler in Roseland and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace and Rutgers law schools.

Artificial intelligence (AI) is rapidly transforming the legal profession. The ethical use of AI should be a prerequisite for integrating AI into a legal practice. Failure to learn and implement transparency, accountability, and best practices for responsible AI use before employing AI will likely result in ethical and malpractice difficulties. The use of private rather than public AI tools is likely to significantly improve compliance with the Rules of Professional Conduct, particularly Rules 1.1 (competence), 1.6 (confidentiality), and 4.1 (truthfulness).

State bars are advising lawyers to use AI cautiously because, while AI has changed the practice of law, it has not changed the ethical obligations of the lawyer who uses it. Practitioners must uphold their ethical rules and responsibilities while using AI systems to augment their work.

Clearly, the state bars’ admonition has not been fully heeded. Consider Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. June 22, 2023), in which attorneys filed a brief that included nonexistent judicial opinions with fake quotations and fake citations created by a generative AI tool. Consider also Lacey v. State Farm, No. 24-cv-5205 (C.D. Cal. May 6, 2025), in which attorneys at two firms (including a national firm) filed briefs replete with fake citations and failed to catch many of them even after being alerted by the court.

While the term AI encompasses various technologies, it is generally applied to a tool that mimics humanlike behavior. AI has two distinct elements. The first is machine learning (ML), which is software that generates algorithms to interpret new data without relying on rules-based programming. The ML algorithms are used by the second element, generative AI, to produce content in response to prompts.
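To make that distinction concrete, the following minimal Python sketch contrasts rules-based programming with machine learning. It is purely illustrative: the spam-filter scenario, the scikit-learn library choice, and every name in it are assumptions for exposition, not a depiction of any legal AI product.

```python
# Rules-based programming: a human writes the decision logic explicitly.
def is_spam_by_rule(text: str) -> bool:
    return "free money" in text.lower()

# Machine learning: the algorithm infers its own decision logic from
# labeled examples, with no hand-written rules. (Illustrative only.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

examples = ["free money now", "meeting at noon", "win free cash", "lunch tomorrow"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(examples), labels)

# The learned model then interprets new data it has never seen, which is
# the role the ML element plays inside a generative AI tool.
print(model.predict(vectorizer.transform(["claim your free cash today"])))  # [1]
```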

The first (and perhaps most important) consideration for attorneys prior to using AI is that not all AI tools are the same. More specifically, public generative AI tools are not secure. Any information, data, or prompts a user enters into a public generative AI tool may be used by the provider to further train and improve the underlying model.

Use of public generative AI tools implicates at least two related confidentiality issues: data provided to a public generative AI tool as part of a query may become visible to third parties, and those queries may be used in future responses to other users of the tool. In short, communicating sensitive or confidential information to a public generative AI tool is tantamount to sharing it with a third party.

From an ethics perspective, ABA Model Rule 1.6, titled “Confidentiality of Information,” effectively prohibits attorneys from using public generative AI tools by explicitly requiring lawyers to keep client information confidential unless given explicit permission to release it. Furthermore, the Rule admonishes attorneys to ensure that confidential client information is not mistakenly released to or obtained by a third party. Rule 1.6 mandates that “[a] lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent [or] the disclosure is impliedly authorized in order to carry out the representation.” It additionally provides that “[a] lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”

Thus, public generative AI tools are generally not appropriate for client work. In short, to avoid ethical and malpractice difficulties, do not use public generative AI tools.

Private generative AI tools are more secure, but not all such tools have the same protections and capabilities. Both Lexis and Westlaw are developing and marketing private generative AI products designed for lawyers, and some large law firms are programming private generative AI tools for their exclusive (in-house) use.

Private generative AI tools are more likely to ameliorate Model Rule 1.1 (competence) ethical difficulties. For example, Thomson Reuters says its AI-Assisted Research employs retrieval-augmented generation (RAG) to prevent the generative AI from making up things like case names and citations by requiring the generative AI to cross-reference published sources, thereby verifying content.

Verification of content helps prevent AI hallucinations. AI hallucinations arise when generative AI tools produce false, nonsensical, or fabricated information (primarily facts or citations), a failure of the generative AI’s algorithm that stems from suboptimal ML training.
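The verification step can be pictured as a simple retrieval loop. The Python sketch below is a minimal illustration of a RAG-style citation check under stated assumptions: search_published_sources is a hypothetical stand-in for whatever index of published sources a vendor product queries, and no actual vendor API is depicted.

```python
# A minimal sketch of RAG-style citation verification. The retrieval
# function is a hypothetical stand-in for a real index of published
# sources; it is not any vendor's actual implementation.

def search_published_sources(citation: str) -> list[str]:
    """Hypothetical retrieval call: return published sources matching a citation."""
    published_index = {
        "678 F. Supp. 3d 443": "Mata v. Avianca, Inc. (S.D.N.Y. 2023)",
    }
    match = published_index.get(citation)
    return [match] if match else []

def verify_citations(draft_citations: list[str]) -> dict[str, bool]:
    """Flag any citation the retrieval step cannot ground in a published source."""
    return {cite: bool(search_published_sources(cite)) for cite in draft_citations}

# A real citation verifies; a fabricated one is flagged before filing.
draft = ["678 F. Supp. 3d 443", "999 F. Supp. 9d 123"]  # second citation is invented
for cite, grounded in verify_citations(draft).items():
    print(f"{cite}: {'verified' if grounded else 'UNVERIFIED - review before filing'}")
```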

Under ABA Model Rule 1.1, “[a] lawyer shall provide competent representation to a client,” and “[c]ompetent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Furthermore, the commentary to Rule 1.1 stresses that lawyers must exercise thoroughness and preparation proportional to the task at hand.

For example, Comment 5 states that competent handling of a particular matter “includes inquiry into and analysis of the factual and legal elements of the problem, and use of methods and procedures meeting the standards of competent practitioners… The required attention and preparation are determined in part by what is at stake; major litigation and complex transactions ordinarily require more extensive treatment than matters of lesser complexity and consequence.”

Private generative AI tools that integrate a verification element will also help avoid ABA Model Rule 4.1 violations. The Rule states that “[i]n the course of representing a client a lawyer shall not knowingly: (a) make a false statement of material fact or law to a third person.”

The definition of “knowingly” appears in Model Rule 1.0(f), as elaborated by ABA Formal Opinion 491. The Rule provides the baseline: “knowingly,” “known,” or “knows” denotes actual knowledge of the fact in question. Under Formal Opinion 491, ignoring suspicious facts that would lead a reasonable lawyer to inquire further constitutes willful blindness and is sanctionable.

The Mata case noted above likely discloses a violation of Rule 4.1: the attorneys filed a brief that included nonexistent judicial opinions with fake quotations and fake citations created by a generative AI tool.

In sum, private generative AI products that are specifically designed to verify citations can ameliorate ethical and malpractice difficulties. Such verification eliminates, or at least reduces, AI hallucinations and thereby improves compliance with Model Rules 1.1, 1.6, and 4.1.