
Ignorance Is Preferable to Error*


As a lawyer leading an AI company, I was reminded of this aphorism as I contemplated the use of generative versus non-generative AI tools in the realm of testimony. Cloud Court has been using AI to help clients create and understand testimony for years. And when it comes to testimonial applications, we’ve tended to favor non-generative AI over generative AI tools, despite having both capabilities.


And we’ve favored non-generative AI for a reason. I was reminded of that reason today when I asked ChatGPT-4 the following question about testimony: “Is sworn testimony properly considered to be evidence under the Federal Rules of Evidence?” In response, ChatGPT cheerfully offered me the following quote, allegedly taken from the FRE. Note the quotation marks:


ChatGPT

. . . The Federal Rules of Evidence define evidence as including “testimony, writings, material objects, or other things presented to the senses that are offered to prove the existence or nonexistence of a fact.” Sworn testimony, which consists of statements made by witnesses under oath or affirmation in court, falls under this definition . . .


User

What is the source of this quote . . . "testimony, writings, material objects, or other things presented to the senses that are offered to prove the existence or nonexistence of a fact."


ChatGPT

The quote you've provided . . . is a paraphrase or summary of the concept of evidence as it is understood in legal contexts, particularly in the United States. This description broadly encapsulates what is considered evidence under the law.


User

. . . So, when you said the Federal Rules "define" evidence and then you put actual quotation marks around the quote, your use of quotation marks was in error? Or was it not in error?


ChatGPT

You are right to point out the inconsistency, and I apologize . . . My use of quotation marks in this context was indeed an error . . . it should not have been presented as a direct quote. I appreciate your attention to this matter and thank you for the opportunity to correct the mistake.


LLMs like ChatGPT are powerful and useful, and we at Cloud Court applaud their careful application in LegalTech. But technologies built on LLMs must be used cautiously because they can broadcast certainty, here through the inaccurate use of quotation marks. Those quotation marks exploit our human tendency to conflate confidence with competence. And LLMs can be very, very confident. As humans, we should prefer ignorance over error. And that’s doubly so for lawyers. The law, after all, is a consequence-rich environment for error, but more forgiving of ignorance.


All that said, I think ChatGPT is amazing. The image above of a person standing between two paths was generated by DALL-E, another OpenAI tool, when I asked it to create an image inspired by “Ignorance is preferable to error.” And I think it nailed it. The image is a tacit reminder that we stand at this crossroads all the time, and it’s on us to take the right path.


What are your opinions on generative versus non-generative AI in litigation technologies? Does your view change depending on the context of the application? Please post your thoughts.


*The full quote, attributed to Thomas Jefferson, is: "Ignorance is preferable to error; and he is less remote from the truth who believes nothing, than he who believes what is wrong." Notes on the State of Virginia, Query VI, p. 29 (Library of Congress).

