By Misty Griffith
The evolution of language-based, generative artificial intelligence (AI) is as rapid as the generation of text on the screen by ChatGPT, possibly the most well-known AI of this kind. My Bar News colleagues, Tom Jarvis, Donna Parker, and I watched in awe as ChatGPT created a succinct and intelligible response to Tom’s prompt, “Can you write a short article about using ChatGPT to write legal briefs?” Words flew across the screen, producing a one-page response in less than a minute. The result is repetitious, over-simplified, and lacking in eloquence; yet it is simultaneously amazing. (Read ChatGPT’s unedited answer in the sidebar below and judge for yourself.)
When I commenced writing this article, it was intended as a cautionary tale about the shortcomings of ChatGPT 3.5 (GPT-3.5). Subsequent developments, most notably the launch of the new and significantly improved ChatGPT 4.0, rendered my research out of date. New developments have occurred weekly, and sometimes daily, as I have worked on this article. Now, the cautionary tale is that noteworthy developments may have occurred subsequent to the submission of this article.
OpenAI unveiled the GPT-3.5 platform for public use on November 30, 2022. However, usage really took off on February 7, 2023, when Microsoft integrated ChatGPT into its Bing search engine. GPT-3.5 showed potential as a time-saver but produced far-from-reliable results for legal research. GPT-3.5 failed the bar exam, but, more problematically, it would sometimes reference nonexistent laws or cases.
On March 14, 2023, OpenAI debuted ChatGPT 4.0 (GPT-4), which is exponentially more advanced than its predecessor.
Notably, GPT-4 passed the Uniform Bar Exam (UBE) with flying colors. The chatbot’s score of 297 was in the 90th percentile, ranking it among the top 10 percent of exam takers. While it is not error-free, neither are humans: GPT-4 answered 75.7 percent of the multiple-choice questions correctly, compared with the human test takers’ average of 68 percent.
While passing the bar exam is an impressive feat, bar passage is a minimum requirement for practicing law. GPT-4 and other similar generative AI platforms do not possess the creativity, strategic thinking, empathy, and passion of a good attorney. Likewise, while GPT-4 can generate text, its writing lacks sophistication and nuance.
The enhanced capabilities of GPT-4 have sparked concerns that AI technology may eventually replace human lawyers. A March 16, 2023 survey of 4,180 respondents (including 1,176 attorneys, 1,239 law students, and 1,765 consumers) conducted by LexisNexis Legal and Professional, found that 39 percent of attorneys, 46 percent of law students, and 45 percent of consumers believe that generative AI tools will significantly transform the practice of law.
Attorney John Weaver, chair of the Artificial Intelligence Practice at McLane Middleton and a member of the Cybersecurity and Privacy Group, says, “I think one of the reasons ChatGPT is getting so much attention right now is both because it seems pretty revolutionary, and is in many ways, but also because it’s coming for white collar workers. The fact that there’s now the possibility of software that can generate a lot of their work product, a lot of the things that they work on, is unnerving to a lot of people that consider these things, write about them, or might have thought that their jobs were safe.” Weaver, a prominent voice in the field of artificial intelligence law, is on the Board of Editors for RAIL: The Journal of Robotics, Artificial Intelligence & Law.
Also in March, Casetext, a legal technology company, introduced Co-Counsel, the first AI legal assistant. Powered by GPT-4, Co-Counsel is a tool designed to automate and streamline legal research and the drafting of legal documents. Utilizing AI for research and first drafts can save a significant amount of time. While lawyers might use AI technology as a starting point for research or drafts, it is imperative that they verify any information generated.
The ChatGPT website includes the following warnings about its limitations: “May occasionally generate incorrect information. May occasionally produce harmful instructions or biased content. Limited knowledge of the world and events after 2021.”
A major caveat for anyone doing legal research is that AI may quote a dissent in a case without indicating that the quote is from a dissenting opinion. Additionally, it may cite cases which have been overturned. AI-generated research may be a starting point, but it is not a reliable ending point.
The use of ChatGPT, Co-Counsel, Google Bard, or other generative AI creates ethical concerns which will need to be addressed. Attribution of work is an obvious concern, and the legal community will need to grapple with the implications for attorney-client privilege when confidential client information is disclosed to an AI chatbot. Though AI is not sentient, at some level there is human oversight, which raises privacy concerns for information shared. Rule of Professional Conduct 1.6, regarding confidentiality of information, should come into play and would seem to necessitate a client’s informed consent if an attorney plans to utilize generative AI on a specific client matter.
At the 2023 ABA Midyear Meeting in February, the House of Delegates adopted a resolution addressing attorney accountability and transparency regarding AI. Resolution 604 sets forth these guidelines:
- Developers of AI should ensure their products, services, systems, and capabilities are subject to human authority, oversight, and control.
- Organizations should be accountable for consequences related to their use of AI, including any legally cognizable injury or harm caused by their actions, unless they have taken reasonable steps to prevent harm or injury.
- Developers should ensure the transparency and traceability of their AI and protect related intellectual property by documenting key decisions made regarding the design and risk of data sets, procedures and outcomes underlying their AI.
While there are legitimate concerns about appropriate use of AI in the legal field, there are also ways in which AI, used appropriately, could increase access to justice. In his timely article, The Implications of ChatGPT for Legal Services and Society, Andrew Perlman, Dean and Professor of Law at Suffolk University Law School, opines, “Less complex legal matters may see an even more dramatic shift, with AI tools helping to address the public’s enormous unmet civil legal needs. Technology offers a promising way to address those needs, both through self-help resources and by enabling lawyers to reach far more clients than is currently possible.”1 Dean Perlman is a leading proponent for teaching law students to engage with and utilize AI tools responsibly.
The genie is out of the bottle. It is now the responsibility of legal professionals to carefully consider how to harness the power and potential of AI ethically, and in ways that may enhance the future of the legal profession.
Endnote
- Andrew Perlman, “The Implications of ChatGPT for Legal Services and Society,” Dec. 5, 2022, published online at the Social Science Research Network (ssrn.com) and the Harvard Law School Center on the Legal Profession (clp.law.harvard.edu).