
Law school dean calls for ‘rigorous fact checking’ to avoid artificial intelligence problems
A self-described “expert” on “misinformation” and artificial intelligence apologized after the AI program he used inserted fake citations into a legal brief. Several higher education experts, including a law school dean, told The College Fix what can be done to prevent similar situations in the future.
Stanford University Professor Jeff Hancock submitted a second court filing in a case concerning Minnesota’s law against “deepfakes,” artificial intelligence-created videos that can look realistic, making them a tool for campaigns hoping to paint an opponent in a negative light.
Hancock, the editor of a journal about “misinformation,” played his own role in spreading fake news when he submitted testimony that included “hallucinated citations,” meaning citations that were not real. In his initial filing, he declared himself an “expert” on “technology.”
But Hancock (pictured) later had to amend his filing to “acknowledge three citation errors” after opposing counsel flagged the mistakes. The errors arose because he left placeholder citations in his draft, which the AI program filled in with references it created out of thin air.
The Fix emailed the Stanford professor several times in the past weeks to ask for further comment on the situation, but he did not respond.
A University of Kansas law professor said AI models can remain stubborn and insist their citations are real.
“When you press [large language models] on it, sometimes they will double down and they will justify that this thing indeed is true,” Associate Dean of Graduate and International Law Andrew Torrance told The Fix in a phone interview.
“So, you have to do rigorous fact checking. You really should check every sentence that an AI generates,” Professor Torrance told The Fix.
He told The Fix it should be clear how AI was used, a point he made along with several other professors in a 2023 paper.
“Clearly disclose the use of AI-assisted writing tools in your work,” Torrance wrote, along with University of California Irvine Professors Bill Tomlinson and Rebecca Black, in a paper titled “ChatGPT and Works Scholarly.”
The trio also said authors should disclose what “tools and techniques” they used in the research.
“Be transparent about the limitations of AI-assisted writing tools,” the professors also wrote. “This includes describing any potential biases or inaccuracies that may be present in the text generated by the tool.”
Use ChatGPT as a ‘whiteboard,’ but for little else, expert says
The spokesman for a higher education group said the use of AI should be minimized.
“If you don’t have somebody else to bounce ideas off of, it can be a useful tool,” Chance Layton, communications director for the National Association of Scholars, told The Fix via email.
When writing academic papers, Layton said AI should “only be used as a whiteboard.” He said the Stanford professor’s use of AI contributes to “the lack of confidence in expertise” which Layton called a “big problem.”
The rise of artificial intelligence has caused some concerns about cheating. In 2022, The Fix interviewed a student who used ChatGPT on two final exams, earning As on both.
“I used it for my multiple choice finals, two of them, and got a 95 on one of them and the other one, a 100,” he told The Fix. “Half the kids in my class used it,” the student said.
ChatGPT has fabricated stories before, in several cases inventing accusations of sexual assault against law professors.
In 2023, legal scholar Eugene Volokh found that ChatGPT would create stories about sexual assault when prompted, fabricating baseless accusations about George Washington University Professor Jonathan Turley and even citing a non-existent Washington Post story.
The Fix ran a test similar to Volokh’s and also received five examples of sexual harassment allegations against professors, none of which were true.
All cited the New York Times, Washington Post, or other publications. However, ChatGPT declined to provide examples when prompted again.
MORE: ChatGPT is politically biased to the left
IMAGE: Psych of Tech Institute/YouTube
