Increasingly, legal practitioners are using artificial intelligence (AI) to review documents for relevant information, formulate searches of case law and statutes, and analyze documents. While using AI for some aspects of legal writing, such as proofreading, error correction and document organization, may be acceptable, using AI for legal writing and other purposes may violate the legal ethics rules.
AI is a form of computer use. Both traditional computer use and AI computer use require software controlled by algorithms. Algorithms are problem-solving processes memorialized in step-by-step lists of instructions telling a computer what to do. The fundamental difference between AI-directed computers and traditionally directed computers is that an AI can change its algorithms (and hence its outputs) based on new inputs, while traditional, algorithm-driven computers are fixed.
Comment 8 to ABA Model Rule 1.1 (adopted by the ABA in 2012) indicates that attorneys must knowledgeably evaluate AI use, as a form of new technology, just as this comment required a working knowledge of Adobe, Word, and Excel. The comment provides: "To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." While New Jersey (unlike New York and Pennsylvania) did not graft Comment 8 into existing New Jersey Rule 1.1, it is generally understood that the competence element of New Jersey Rule 1.1 requires a working knowledge of relevant technologies, including AI.
Consequently, existing legal ethics rules allow (and perhaps require) lawyers to use computers and other technology (such as AI) to increase efficiency. Attorney computer use has primarily been for information use and access, as evidenced by the New Jersey Advisory Committee on Professional Ethics Opinion 701—Electronic Storage and Access of Client Files. Generally, attorneys must use computers and other technology for efficient document preparation and distribution. ABA Model Rule 1.1 states: "A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation."
Opinion 701 recommends that attorneys may use computers so long as they adhere to three tenets. First, the computer use must result in an enforceable obligation to preserve confidentiality and security. Second, the attorney must use available technology to guard against foreseeable attempts to infiltrate data. Third, if the lawyer uses a computer vendor, the vendor must be under an enforceable obligation to preserve confidentiality and security, and should be required to notify the lawyer if served with process seeking client data.
An ethics opinion by the New York State Bar Association Committee on Professional Ethics (Ethics Opinion 842, Sept. 10, 2010) says much the same as Opinion 701. However, it adds a breach investigation element: an attorney using a computer vendor must investigate any potential security breaches or lapses by the vendor to ensure client data was not compromised.
AI use by attorneys for legal writing has already resulted in legal ethics difficulties. Consider the New York attorneys who were sanctioned for citing fictitious cases generated by ChatGPT in a legal brief (see 2023 filings in Mata v. Avianca, 1:22-cv-01461 (S.D.N.Y.)). The court ordered the law firm representing the plaintiff to pay a $5,000 fine for "acts of conscious avoidance and false and misleading statements to the court" when it was discovered that the AI had generated "bogus judicial decisions with bogus quotes and bogus internal citations," the result of an AI hallucination (content made up by the AI rather than found by the AI).
While the use of AI for writing may enhance creative analysis and the identification of persuasive precedents, such use may also violate legal ethics rules, including the duty of competence, the duty of confidentiality, and the prohibition on assisting in the unauthorized practice of law. Depending upon how the AI writing was used and billed, other legal ethics rules may be violated as well.
For example, fees associated with AI writing may be unreasonable if the billing is based on the average time required to write a brief rather than the time actually required with AI assistance (N.J. Ct. R. app. 3, R. 1.5). Another example is presenting AI writing as an attorney's own writing (see N.J. Ct. R. app. 3, R. 8.4, which forbids conduct involving dishonesty, fraud, deceit or misrepresentation).
It should be noted that the use of AI may also result in a crime (misdemeanor) related to the unlawful practice of law (NJ Rev Stat Section 2C:21-22 (2014)). It may also result in difficulties associated with copying from other sources while drafting litigation filings: lifting language from source materials without acknowledgment may violate several other legal ethics rules, including those requiring competence and diligence and forbidding frivolous filings.
The technology of AI requires that AI be trained. The training information may be generated by an AI programmer, but is usually drawn from internet-connected databases. These internet-accessible information sets (including storage of client data on third-party servers) are sometimes known as "the cloud."
New Jersey has issued an ethics opinion regarding the storage of client data on "the cloud": New Jersey Supreme Court Advisory Committee on Professional Ethics, Opinion 701 (2006). This opinion permitted the use of an outside service provider to store client files digitally, provided the attorney exercises reasonable care. It suggested that to meet the standard of reasonable care, attorneys must be knowledgeable about how the provider will handle the data entrusted to it, and they must include terms in any agreement with the provider requiring the provider to preserve the confidentiality and security of the data.
Third parties have always had access to confidential client information, including process servers, court personnel, building cleaning companies, summer interns, document processing firms, external copy centers and document delivery services. Existing legal ethics codes require attorneys to ensure that such entities observe the same security obligations as any other third party to whom an attorney entrusts confidential client files.
However, none of these entities uses the information as it can be used by an AI for training purposes. Consequently, assurances by third parties of reasonable efforts to protect sensitive client data and to ensure that employees will not access confidential information are, in most cases, insufficient (for legal ethics purposes) without additional AI-related specificity.
Jonathan Bick is counsel at Brach Eichler in Roseland, and chairman of the firm’s patent, intellectual property, and information technology group. He is also an adjunct professor at Pace and Rutgers Law Schools.