Dear Ethics Committee,
I am considering using artificial intelligence (AI) tools, such as ChatGPT, in my law practice. However, despite how ubiquitous AI seems lately, I am not sure I understand how it works. I have also heard that using AI may be problematic for a number of reasons, including that AI tools may be biased or unfair, and may even provide responses that are factually wrong or that perpetuate discrimination. Given these concerns, is it permissible for me to use AI, and what should I be aware of?
Answer: Like many topics related to AI, this question cannot be answered with a simple “yes” or “no.” It may be permissible for you to use AI, particularly after you have increased your understanding of these tools and can competently implement them in your practice, but the answer ultimately depends on your assessment of your own competence, your confidence in the tools you are using, and your ability to mitigate any issues related to bias and discrimination.
As part of the Ethics Committee’s efforts to provide the New Hampshire Bar guidance on artificial intelligence, this Ethics Corner addresses concerns related to bias and discrimination that arise when using artificial intelligence tools.[i] In this Corner, we discuss why, under N.H. R. Prof. Conduct 1.1, attorneys must be technologically aware, an awareness that should include an understanding of how AI tools, including generative AI, work.
Although concerns regarding discrimination may appear to relate to N.H. R. Prof. Conduct 8.4(g), this Corner does not significantly address that Rule. Unintentional, inadvertent, or accidental discrimination or bias arising from the use of AI tools does not satisfy the Rule’s requirement that an attorney’s actions, subjectively or objectively, have “the primary purpose to embarrass, harass[,] or burden another person, including conduct motivated by animus against the other person based upon the other person’s race, sex, religion, national origin, ethnicity, physical or mental disability, age, sexual orientation, marital status[,] or gender identity.” See N.H. R. Prof. Conduct 8.4(g); see also N.H. R. Prof. Conduct 1.0(f) (noting that the word “knows” in the Rules means “actual knowledge of the fact in question”). An attorney who uses AI as a tool in their practice does not thereby act with the primary purpose of discriminating against, harassing, or embarrassing another person, nor can such use be said to be motivated by animus, so their use of AI does not implicate N.H. R. Prof. Conduct 8.4(g). See also Colin E. Moriarty, The Legal Ethics of Generative AI-Part 3: A Robot May Not Injure A Lawyer or, Through Inaction, Allow A Lawyer to Come to Harm, Colo. Law., 30, 39 (2023); Hon. John G. Browning, Real World Ethics in an Artificial Intelligence World, 49 N. Ky. L. Rev. 155, 165-171 (2022).
Furthermore, lawyers should understand how the AI tools they use may be affected by bias or discriminatory sources, so that they can better weigh the risks and benefits of using them. Importantly, attorneys must use their own independent judgment at all times in their representation of clients, including when using AI on their clients’ behalf, pursuant to N.H. R. Prof. Conduct 2.1 (mandating that lawyers, while representing a client, “exercise independent professional judgment and render candid advice”).
Discussion:
As we have written before, under N.H. R. Prof. Conduct 1.1, attorneys have an ethical obligation to have a basic understanding of how the technology they use works and to be aware of the benefits and drawbacks of using certain types of technology. See Ethics Corner: “Client Confidentiality and Technology,” dated May 20, 2021, and the other AI Ethics Corners. While it is not uncommon in our society for people, including lawyers, to call themselves Luddites or “technologically illiterate,” ignorance of technology is neither an excuse nor ethically permissible in the legal profession. See, e.g., James v. Nat’l Fin. LLC, Case No. CV-8931-VCL, 2014 WL 6845560, at *12 (Del. Ch. Dec. 5, 2014) (where lead defense counsel announced to the court that he had failed to comply with a discovery request to produce electronically stored information because he was “not computer literate,” the court sanctioned him, explaining that “[p]rofessed technological incompetence is not an excuse for discovery misconduct” and reminding counsel that, under the Model Rules of Prof. Conduct, R. 1.1, cmt. 8, “[d]eliberate ignorance of technology is inexcusable”) (quoting Judith L. Maute, Facing 21st Century Realities, 32 Miss. C.L. Rev. 345, 369 (2013)); see generally Luddite, Merriam-Webster Dictionary, available at www.merriam-webster.com/dictionary/Luddite (last visited May 16, 2024) (broadly defining “Luddite” as someone “who is opposed to especially technological change”). Therefore, below we provide an overview of the artificial intelligence landscape, explain why these tools may rely on discriminatory or biased inputs, and describe how they may respond in ways that are biased, unfair, or discriminatory.
AI, LLMs, and ChatGPT
Congress has defined “artificial intelligence” as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations[,] or decisions influencing real or virtual environments.” National Artificial Intelligence Initiative Act of 2020, § 5002(3). Artificial intelligence is thus a type of “automated system,” that is, a system “that uses computation as a whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and communities.” See U.S. Office of Science and Technology Policy, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, at “Definitions” (Oct. 2022), https://www.whitehouse.gov/ostp/ai-bill-of-rights/.
Large language models (“LLMs”) are a type of artificial intelligence, specifically, a “type of artificial neural network, trained on an enormous amount of text data, which determines the probability of a word sequence.” Amy Winograd, Loose-Lipped Large Language Models Spill Your Secrets: The Privacy Implications of Large Language Models, 36 Harv. J.L. & Tech. 615, 616–17 (2023) (citations omitted). The nearly omnipresent ChatGPT is one of the leading LLMs in artificial intelligence today. See id. ChatGPT serves as “an AI model designed to specialize in human-like, long-form conversation” and a “cutting-edge chatbot.” Id. at 616. Essentially, ChatGPT, like other LLMs, works by taking an input, such as a question or request, and then producing a predictive response based on the “enormous amount of text data” on which it has been trained. See id. at 617.
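For readers who prefer a concrete illustration, the following is a minimal, purely hypothetical sketch, written in Python, of the predictive mechanism described above. The prompt, vocabulary, and probabilities are invented for this example; a real LLM derives probabilities like these from its enormous training data rather than from a hand-written table.

# Purely illustrative sketch of next-word prediction; all values are invented.
next_word_probabilities = {
    "the defendant filed a": {
        "motion": 0.62,
        "brief": 0.21,
        "complaint": 0.09,
        "banana": 0.0001,
    },
}

def predict_next_word(prompt: str) -> str:
    """Return the statistically most likely next word for a known prompt."""
    distribution = next_word_probabilities[prompt]
    return max(distribution, key=distribution.get)

print(predict_next_word("the defendant filed a"))  # prints "motion"

The point of the sketch is simply that the tool selects whatever continuation its training data makes statistically most likely, whether or not that continuation is accurate or fair.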
Potential Bias and Discrimination in AI
Despite LLMs’ impressive ability to provide helpful responses, such “technological progress is accompanied by the risk of wide-ranging social harms,” including bias. Winograd, supra, at 620. For example, in September 2023, a Forbes article identified “several biases” that may “emerge during the training and deployment of generative AI systems,” such as LLMs, including: machine bias (bias arising from the training data used, specifically the “vast human-generated datasets … [which] tend to absorb biases present in the text, perpetuating stereotypes and discrimination[]”); availability bias; confirmation bias; selection bias; group attribution; contextual bias; linguistic bias; anchoring bias; and automation bias (“the tendency of humans to blindly trust AI-generated outputs without critically evaluating them”). See Ken Knapton, “Navigating The Biases in LLM Generative AI: A Guide to Responsible Implementation,” Forbes (Sept. 6, 2023), https://www.forbes.com/sites/forbestechcouncil/2023/09/06/navigating-the-biases-in-llm-generative-ai-a-guide-to-responsible-implementation/?sh=167a41f5cd2c.
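To illustrate the first of these, “machine bias,” consider the following toy sketch, again hypothetical and not a depiction of any real system: a model that merely mirrors the statistics of its training text will reproduce whatever associations, fair or unfair, that text happens to contain.

# Toy illustration of "machine bias"; the corpus is invented and deliberately skewed.
from collections import Counter

training_corpus = [
    "the engineer said he would review the contract",
    "the engineer said he would attend the hearing",
    "the nurse said she would review the chart",
]

def predicted_pronoun(role: str) -> str:
    """Return the pronoun most often paired with the given role in the corpus."""
    counts = Counter()
    for sentence in training_corpus:
        words = sentence.split()
        if role in words:
            for word in words:
                if word in ("he", "she"):
                    counts[word] += 1
    return counts.most_common(1)[0][0]

print(predicted_pronoun("engineer"))  # prints "he", echoing the skewed corpus
print(predicted_pronoun("nurse"))     # prints "she"

Because the invented corpus pairs each occupation with only one pronoun, the “model” confidently repeats that pairing; at scale, the same dynamic can reproduce stereotypes present in real training data.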
Similarly, the Blueprint for an AI Bill of Rights discusses “algorithmic discrimination,” a term referring to the fact that automated systems may “contribute to unjustified treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” See Blueprint for an AI Bill of Rights, supra, at “Definitions.”
While it may seem like this discussion regarding the potential for bias and discrimination in AI is only theoretical, consider the case of Tay, a chatbot Microsoft released on Twitter (now “X”) in 2016. The essential premise behind Tay, according to Microsoft, was for the chatbot to “engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay[,] the smarter she gets.” Meg Leta Jones, Silencing Bad Bots: Global, Legal and Political Questions for Mean Machine Communication, 23 Comm. L. & Pol’y 159, 162 (2018) (citations omitted). However, after less than twenty-four hours on Twitter, Microsoft removed Tay from the platform due to racist, sexist, and derogatory Tweets, and issued a statement explaining that “[t]he AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.” James Vincent, Twitter Taught Microsoft’s AI Chatbot to be a Racist Ass[****] in Less than a Day, THE VERGE, Mar. 24, 2016, http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.
For an additional example, risk assessment tools used in the criminal justice system, such as the Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) algorithm, have been found to be biased. See Brookings Institution, Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms, at “Examples of algorithmic biases” (May 22, 2019), https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/. Courts use COMPAS to assess the recidivism risk of criminal defendants, particularly when determining pretrial detention or release on bail. Id. According to a report from ProPublica, African-American defendants were more likely than white defendants to be assigned higher risk scores, which resulted in longer periods of pretrial detention. Id.
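The underlying mechanism can be illustrated with a simplified, hypothetical risk score (not COMPAS itself): even a formula that never mentions race can produce a disparate impact if one of its inputs, such as the number of recorded prior arrests, already reflects uneven enforcement.

# Hypothetical scoring formula for illustration only; not any real tool's method.
def risk_score(prior_arrests: int, age: int) -> float:
    """Hypothetical score: more recorded arrests and younger age yield higher risk."""
    return 2.0 * prior_arrests + 0.5 * max(0, 30 - age)

# Two defendants with identical underlying conduct; one lives in a more heavily
# policed neighborhood and therefore has more recorded prior arrests.
print(risk_score(prior_arrests=1, age=25))  # 4.5, likely labeled lower risk
print(risk_score(prior_arrests=4, age=25))  # 10.5, likely labeled higher risk

The score never asks about race, yet it systematically disadvantages anyone whose recorded history reflects more intensive policing rather than different conduct.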
There is also AI-driven jury selection software that uses race, ethnicity, and gender as factors in its analysis of jurors. See Hon. John G. Browning, Real World Ethics in an Artificial Intelligence World, 49 N. Ky. L. Rev. 155, 171 (2022) (citing Todd Feathers, This Company is Using Racially-Biased Algorithms to Select Jurors, VICE.COM MOTHERBOARD (Mar. 3, 2020), https://www.vice.com/en/article/epgmbw/this-company-is-using-racially-biased-algorithms-to-select-jurors). One such tool is Momus Analytics, which the National Law Journal named one of its 2020 emerging legal technologies. Id. Momus feeds public records and the social media posts of prospective jurors into its algorithm to “determine scores for ‘leadership,’ ‘social responsibility,’ and ‘personal responsibility.’” See Feathers, supra. For example, Momus uses race and ethnicity in its assessment of “leadership,” identifying “people of Asian, Central American, or South American descent” as “more likely to be leaders,” and “people who describe their race as ‘other’” as “less likely to be leaders.” Id.
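To see why building a protected characteristic directly into a score is troubling, consider a generic, entirely hypothetical scoring sketch (not Momus Analytics’ actual algorithm): when race is used as a feature, two otherwise identical jurors receive different scores solely because of that characteristic.

# Generic, hypothetical sketch; the weights and feature names are invented and
# do not describe any real product.
HYPOTHETICAL_RACE_WEIGHTS = {"group_a": 1.0, "group_b": -1.0}

def leadership_score(years_employed: int, race: str) -> float:
    """Hypothetical score that improperly mixes a neutral and a protected feature."""
    return 0.1 * years_employed + HYPOTHETICAL_RACE_WEIGHTS.get(race, 0.0)

print(leadership_score(10, "group_a"))  # 2.0
print(leadership_score(10, "group_b"))  # 0.0, identical juror except for race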
While COMPAS and Momus Analytics present real-world examples of AI bias of particular concern to criminal defense attorneys and litigators, other tools will likely affect the practice of law more generally.
Conclusion
Technological awareness with respect to AI requires lawyers to understand how the AI tools they intend to use actually work. This awareness should also include an understanding of how algorithms may be biased, and how biased tools can discriminate against, or have a disparate impact on, certain groups of people. Becoming educated about the AI tools you intend to use is a necessary step in weighing their benefits and risks and in exercising your own independent judgment.
As such, and consistent with our prior guidance regarding the use of AI, using these newer technologies requires careful evaluation and judgment so that you can avoid the ethical pitfalls that may exist (or eventually become known), including those related to bias and discrimination.
This Ethics Corner Article was submitted for publication review to the NHBA Board of Governors at its October 24, 2024 Meeting. The Ethics Committee provides general guidance on the New Hampshire Rules of Professional Conduct and publishes brief commentaries in the Bar News and other NHBA media outlets. New Hampshire lawyers may contact the Committee for confidential and informal guidance on their own prospective conduct or to suggest topics for Ethics Corner commentaries by emailing the Ethics Committee Liaison at: ethics@nhbar.org
[i] The Ethics Committee also recommends reviewing the American Bar Association’s recent July 29, 2024 guidance on Generative Artificial Intelligence Tools from its Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512. Not only is Formal Opinion 512 a helpful primer on generative AI generally, it also addresses similar ethics rules from the perspective of the Model Rules of Professional Conduct. Further, Formal Opinion 512 briefly touches on bias at p. 3 (“If the quality, breadth, and sources of the underlying data on which a GAI tool is trained are limited or outdated or reflect biased content, the tool might produce unreliable, incomplete, or discriminatory results.”). Formal Opinion 512 is available online through the ABA website: https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf