A/Prof Narrelle Morris
@narrellemorris.bsky.social
590 followers 210 following 340 posts
Curtin Law School. Legal history, statutory interpretation, research and writing. Japan. Permanently at risk of being squashed by books in my office.
Pinned
narrellemorris.bsky.social
My article “Current Approaches to the Use of Generative AI in Australian Courts and Tribunals: Should Australian Judges Have Guidelines Too?” is now out in the Journal of Judicial Administration!

Short answer: yes.

Slightly longer answer: nobody should be using it for legal research or writing.
narrellemorris.bsky.social
You’re kidding us, right? Reporting on AI and not aware UNTIL NOW about Gen AI “hallucinations”? Is this what happens when you only read press releases from the AI boosters and don’t stop to think about any of their claims?

www.smh.com.au/business/wor...
narrellemorris.bsky.social
It should come with a ban on govt consultancy or tendering, not just a refund.
maximumwelfare.bsky.social
#BREAKING 🚨 Deloitte to refund government, admits using AI in $440k report into mutual obligations issues.

Fake quotes from Federal Court case that ended Robodebt deleted from new report in Friday DEWR dump.

📰 AFR

✍️ @paulkarp.bsky.social

✍️ @edmundtadros.bsky.social

🗣️ @chrisrudge.bsky.social
HEADLINE: Deloitte to refund government, admits AI errors in $440k report

Deloitte Australia will issue a partial refund to the federal government after admitting that artificial intelligence had been used in the creation of a $440,000 report littered with errors including three nonexistent academic references and a made-up quote from a Federal Court judgement.

A new version of the report for the Department of Employment and Workplace Relations (DEWR) was quietly uploaded to the department’s website on Friday, ahead of a long weekend across much of Australia. It features more than a dozen deletions of nonexistent references and footnotes, a rewritten reference list, and corrections to multiple typographic errors.

(Photo of Deloitte Australia HQ. Caption: Deloitte Australia has made almost $25 million worth of deals with the Department of Employment and Workplace Relations since 2021. Photographer: Dion Georgopoulos)

The first version of the report, about the IT system used to automate penalties in the welfare system such as pauses on the dole, was published in July. Less than a month later, Deloitte was forced to investigate the report after University of Sydney academic Dr Christopher Rudge highlighted multiple errors in the document.

At the time, Rudge speculated that the errors may have been caused by what is known as “hallucinations” by generative AI. This is where the technology responds to user queries by inventing references and quotes. Deloitte declined to comment.

The incident is embarrassing for Deloitte as it earns a growing part of its $US70.5 billion ($107 billion) in annual global revenue by providing advice and training to clients and executives about AI. The firm also boasts about its widespread use of the technology within its global operations, while emphasising the need to always have humans review any output of AI.

SUBHEADING: Deleted references, footnotes

The revised report has deleted a dozen references to two nonexistent reports by Professor Lisa Burton Crawford, a law professor at the University of Sydney, that were included in the first version. Two references to a nonexistent report by Professor Björn Regnell, of Lund University in Sweden, were also deleted in the new report.

Also deleted was a made-up reference to a court decision in a leading robo-debt case, Deanna Amato v Commonwealth.

The new report has also deleted a reference to “Justice Davis” (a misspelling of Justice Jennifer Davies) and the made-up quote from the nonexistent paragraphs 25 and 26 in the judgement: “The burden rests on the decision-maker to be satisfied on the evidence that the debt is owed. A person’s statutory entitlements cannot lawfully be reduced based on an assumption unsupported by evidence.”
narrellemorris.bsky.social
Anyone using Gen AI thinking it makes their legal research more efficient is actually cutting corners. Unless they independently verify every word of it, which is time-consuming (not an efficiency), they are putting their ability to keep practising law at risk. As many lawyers are finding out.
narrellemorris.bsky.social
Having a law degree is nice; I have one too. I’ve taught legal research using tech for 15 years. As you noted, I actually do research to justify my conclusions. Feel free to read all the footnotes in my article. If you disagree, do the research and publish it. Otherwise it’s just your personal anecdata.
narrellemorris.bsky.social
Unless you’re a lawyer or a legal academic, I don’t think you have the knowledge or experience to claim Gen AI makes legal research more efficient, especially given your claimed expertise around AI is to “productionise” it, whatever that means.
narrellemorris.bsky.social
which seems to include portraying anything but joyous acceptance of it as being a Luddite. Of course it’s here to stay. What’s important is understanding its limitations and the risks posed, not rejecting them.
narrellemorris.bsky.social
It’s not narrow-minded to ask questions about the productivity and efficiency (and accuracy) claims made by those selling Gen AI, particularly now that they have billions spent and little recourse to recoup those costs unless they dig deeper and harder,
narrellemorris.bsky.social
people not having sufficient understanding of how it works and blindly trusting its output, and ultimately not interrogating the claims about productivity and efficiency coming straight from AI bubble boosterism. While accepting the rampant economic, enviro, data, and human costs of it.
narrellemorris.bsky.social
Efficiency is a common claim but only holds if you can trust the output, which you can’t. Even tools built on closed datasets, like those in Lexis and Westlaw, hallucinate.

Tech has a place in legal work, no question. Invaluable for high-volume discovery. But the most dangerous thing about Gen AI is …
narrellemorris.bsky.social
The key point is that Gen AI is not effective for legal research, regardless of whether it is free ChatGPT or a very expensive professional legal database with it embedded. This will not change while its “intelligence” is based on the statistical probability of words, one after the other.
narrellemorris.bsky.social
Doesn’t use my new article below but an interesting read.

My question to those who think Gen AI will replace research skills: how do you think law students will understand the fundamentals of what legal research is, and requires, if it is reduced to simply prompting an AI?

www.abc.net.au/news/2025-10...
narrellemorris.bsky.social
It’s so helpful to gain insight into people, although when you’re following a judge around like this, one cannot help but be appalled at the risk it placed them at. It’s entirely inconceivable today that anyone could know their home address and which hotel they were staying at on vacation, etc.
narrellemorris.bsky.social
As a statutory interpretation instructor, I’d really love to see that draft bill, explanatory memorandum et al. Talk about stepping outside your lane (in more ways than one).
narrellemorris.bsky.social
This is one of the lesser known (but tragic) consequences of Gen AI. It’s not just about theft of IP, which is usually mentioned, it’s about damage to the information infrastructure of underfunded archives, libraries and museums etc. with long term consequences for them and for researchers.
ilikeoldbooks.bsky.social
by the way, all those benign AI bots crawling the internet for for-profit LLMs: yeah, it turns out when 9,000 hit your archive catalogue or image database all at once, they break the system. This is an emerging sector issue.

The last weeks have literally seen humans labouring to feed the machines...
narrellemorris.bsky.social
Just don’t forget that you can be FOI’d in Teams too.
narrellemorris.bsky.social
Notably blameshifting excuses given the level of professional guidance and media coverage about the hazards of Gen AI in legal work.

More interesting: clear indicators that both Westlaw and Lexis AI legal research products, as already known, similarly “hallucinate”.
jasonkoebler.bsky.social
Spent many (many!) hours pulling legal explanations and apologies from lawyers who were caught using AI that hallucinated, in which they explained to a judge why they used AI. The explanations are astonishing and extremely good:

www.404media.co/18-lawyers-c...
18 Lawyers Caught Using AI Explain Why They Did It
Lawyers blame IT, family emergencies, their own poor judgment, their assistants, illness, and more.
www.404media.co
narrellemorris.bsky.social
The Supreme Court of Qld has now issued a relevant practice note and updated Guidelines for Non-Lawyers.

The practice note focuses on practitioners’ duty to provide accurate information to the court, i.e. verification. It doesn’t seek to control the use of Gen AI itself.
narrellemorris.bsky.social
My article “Current Approaches to the Use of Generative AI in Australian Courts and Tribunals: Should Australian Judges Have Guidelines Too?” is now out in the Journal of Judicial Administration!

Short answer: yes.

Slightly longer answer: nobody should be using it for legal research or writing.
narrellemorris.bsky.social
No live link yet but it should be findable in Westlaw AU. Please message me if you don’t have access and you would like a copy.
narrellemorris.bsky.social
The two definitions of “impact” on show: 1) loud and gets attention, or 2) actually influences knowledge or actions taken. You can have both 1 and 2. But mostly we see a lot of 1 and never hear about the instances of 2.