No ChatGPT in my court: Judge orders that all AI-generated content must be declared and checked

By Webdesk


Few lawyers would be foolish enough to let an AI make their arguments, but one already did, and Judge Brantley Starr is taking steps to ensure the debacle doesn’t repeat itself in his courtroom.

The Texas federal judge has added a requirement that any attorney appearing in his court must certify that “no portion of the filing was drafted by generative artificial intelligence,” or, if it was, that it was checked “by a human being.”

Last week, attorney Steven Schwartz allowed ChatGPT to supplement his legal research for a recent federal filing, and it supplied him with six cases and relevant precedent — all of which were completely hallucinated by the language model. He now “deeply regrets” doing so, and while the nationwide coverage of this blunder has likely made other attorneys think twice about trying the same, Judge Starr is not taking any chances.

At the federal site for the Northern District of Texas, Starr, like other judges, has the ability to set specific rules for his courtroom. Recently added (though it’s unclear whether this was in response to the aforementioned filing) is the “Mandatory Certification Regarding Generative Artificial Intelligence.” Eugene Volokh was the first to break the news.

All attorneys appearing before the court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.

A form has been added for lawyers to sign, noting that “quotations, citations, paraphrased assertions, and legal analysis” are all covered by this requirement. Since summarizing is one of AI’s strengths, and finding and summarizing precedents or previous cases has been advertised as potentially useful in legal work, this may come into play more often than expected.

Whoever drafted the memorandum on this matter in Judge Starr’s office has a finger on the pulse. The certification requirement includes a fairly well-informed and convincing explanation of its necessity (line breaks added for readability):

These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here is why.

These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make things up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.

As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.

In other words, be prepared to justify yourself.

While this is just one judge in one court, it wouldn’t be surprising if others adopted this rule as their own. As the court says, this is a powerful and potentially useful technology, but its use should at least be clearly declared and checked for accuracy.


