Misinformation expert uses AI to draft testimony with fake claims about AI

This post was originally published on Hespress


Jeff Hancock, a Stanford misinformation expert, admitted to using artificial intelligence (AI) to draft a court document that included fake citations about AI.

The document was submitted in a case challenging a new Minnesota law prohibiting the use of AI to mislead voters before an election.

The law’s challengers noticed the fake citations and asked the judge to exclude Hancock’s declaration from the case.

The state of Minnesota had paid Hancock $600 per hour for his services.

Hancock explained that the fake citations were likely the result of using ChatGPT-4o while drafting the declaration.

He stated that the AI “hallucinated” the citations, a known issue with AI tools, and that he did not intend to mislead the court.

The Attorney General’s Office, unaware of the errors until the opposition raised them, has requested the judge allow Hancock to correct the citations and resubmit the document.

In his defense, Hancock argued that AI is increasingly used in academic and professional settings for drafting and research, citing programs like Microsoft Word and Gmail that integrate AI tools.

He further emphasized that this practice is widespread among academics and students.

A New York court previously ruled that lawyers must disclose AI use in expert opinions, and sanctions have been imposed on those submitting AI-generated documents with false information.

Hancock did not comment on whether AI was used in previous cases he has worked on.
