Toby Murray
@tobycmurray.bsky.social
810 followers 230 following 170 posts
Professor at the University of Melbourne and cyber lead of the School of Computing and Information Systems; Director @dsi-vic.bsky.social; Oxford DPhil (@compscioxford.bsky.social; @hertfordcollege.bsky.social). Cyber, verification, etc. He/him
tobycmurray.bsky.social
The latest chapter in the ANOM story, in which the FBI and AFP deployed a fake secure phone system to spy on organised crime. The Australian High Court has unanimously ruled the operation legal and that the data collected can be used as evidence in prosecutions www.abc.net.au/news/2025-10...
High Court endorses use of encrypted phone app to monitor crime figures
The High Court has ruled on the use of information gathered through the AN0M app, which was developed by the Australian Federal Police for surveillance.
www.abc.net.au
tobycmurray.bsky.social
This is a feature, not a bug. Rare events are, by definition, more informative than common ones.
The formula for Shannon entropy.
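For anyone who wants the formula from the image spelled out, these are the standard definitions from information theory (not a transcription of the screenshot itself):

```latex
% Self-information of an outcome x with probability p(x):
% the rarer the outcome (smaller p(x)), the larger I(x).
I(x) = -\log_2 p(x)

% Shannon entropy: the expected self-information over all outcomes.
H(X) = -\sum_{x} p(x) \log_2 p(x)
```

Since I(x) grows as p(x) shrinks, a rare event carries more information than a common one, which is exactly the point of the post.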
tobycmurray.bsky.social
Congratulations to you, Ally, and to TOPLAS. Bright times ahead
tobycmurray.bsky.social
There’s a reason that human-to-human touch produces such strong emotional reactions, that the slightest unintended brush between humans elicits “sorry”. Humanoids will surely be unsafe to operate with humans absent such fine sensing.
Reposted by Toby Murray
tmiller-uq.bsky.social
The deadline for my postdoc on scalable clinical decision support closes in 1 week: 4 October (Australian Eastern Standard Time). Please share with anyone you think would be interested
tmiller-uq.bsky.social
I'm hiring again! Please share. I'm recruiting a postdoc research fellow in human-centred AI for scalable decision support. Join us to investigate how to balance scalability and human control in medical decision support. Closing date: 4 October (AEST).
uqtmiller.github.io/recruitment/
Recruitment
uqtmiller.github.io
Reposted by Toby Murray
m-dodds.bsky.social
I wrote about Claude Code, which to my absolute astonishment is quite good at theorem proving. For people who don't know theorem proving, this is like spending your whole life building F1 engines and getting lapped by a Tesco's shopping trolley www.galois.com/articles/cla...
Claude Can (Sometimes) Prove It
www.galois.com
Reposted by Toby Murray
andyperfors.bsky.social
My university is now on Bluesky 💙
unimelb.bsky.social
Hello 👋

We're the official #UniMelb account!

Follow us for news, updates and information about UniMelb. For now, enjoy the blue skies over our Parkville campus 💙
tobycmurray.bsky.social
Neat paper showing that automated bug fixing systems can be manipulated into introducing security flaws (e.g. reverting CVE fixes) into your code. arxiv.org/pdf/2509.05372
arxiv.org
tobycmurray.bsky.social
You’d think that the car company run by the bloke who runs a rocket company would have learned from Apollo 1
danahull.bsky.social
🧵
1/ What is this photograph?

It's a custom-made mask fitted for a software engineer in northern Virginia who suffered third-degree burns on her face when the Tesla Model Y she was in crashed and caught on fire. A heroic crowd of bystanders could not open the doors
photograph of burn mask
tobycmurray.bsky.social
In hindsight it will seem obvious, I think, that perhaps the most underappreciated factor that kept memory-safety bugs so dangerous for so long was that no single company had control over the hardware, OS and compiler.
Reposted by Toby Murray
ccanonne.github.io
My petition to the 🇦🇺 Australian government: make part-time PhD students' stipends tax exempt!

📋 Read and sign here: www.aph.gov.au/e-petitions/...
⏰ Deadline: October 1
e-petitions
www.aph.gov.au
tobycmurray.bsky.social
Just as with social media in the 2010s, the current environment seems to grant new technologies a presumption of innocence when it comes to harms. We need high quality evidence, or we risk blunt regulation, e.g. the UK Online Safety Act, the AU social media ban, etc.
tobycmurray.bsky.social
I agree with the conclusion but quibble:
1. The lack of evidence linking ChatGPT to the VC’s purported public psychosis
2. The correlational evidence in the linked RCT being used to argue causation
3. Overly simplistic solutions like aborting conversations about suicide
tobycmurray.bsky.social
Thoughtful reflections on how universities can and must adapt to the rise of generative AI, including by returning to ancient practices. www.nytimes.com/2025/08/26/o...

Left unaddressed is how scalable online education will survive the rise of AI without sacrificing academic integrity
Opinion | Students Hate Them. Universities Need Them. The Only Real Solution to the A.I. Cheating Crisis.
www.nytimes.com
Reposted by Toby Murray
tom.eastman.nz
Can't even build death star anymore, because of ewok
Reposted by Toby Murray
ccanonne.github.io
A university and an academic publisher walk into a bar.

The publisher orders a pint, sells it back to the university, asks the barman to pay the bill.
tobycmurray.bsky.social
Given that so much of what passes for professional work boils down to look and feel, it’s not surprising that GenAI should be used to produce professional reports
tobycmurray.bsky.social
Let’s be real. If one wants proper, rigorous research, then one asks a university and not a consultancy firm. I’m not sure that GenAI is materially damaging the quality of the average consultancy report. Meanwhile … www.theguardian.com/australia-ne...
Consultancy firms win nearly $1bn in Australian contracts in past year despite new outsourcing rules, research shows
Greens senator Barbara Pocock says figures do not match Labor government’s rhetoric about cutting back on use of consultants
www.theguardian.com
tobycmurray.bsky.social
A key corollary is that big mistakes early in a chat can lead to compounding errors later on. This is perfectly exemplified by the scenario in the linked story: a sycophantic early response that indulges a user’s false hopes, on which the LLM repeatedly doubles down as the chat progresses.
tobycmurray.bsky.social
Put another way: the longer the chat (or the more “memories” it draws on from prior chats) the more likely the LLM is high on its own bullshit.
tobycmurray.bsky.social
A forensic article on LLM-induced hallucinations. www.nytimes.com/2025/08/08/t... A much simpler explanation is that errors *accumulate* in long LLM conversations, because the LLM’s next answer deep in a chat is a literal function of all of the prior bullshit it has generated so far.
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
www.nytimes.com
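A minimal sketch of the mechanism this thread describes, assuming a generic autoregressive chat loop; `generate` here is a hypothetical stand-in, not any real model API:

```python
def generate(transcript: str) -> str:
    # Hypothetical stand-in for a real model call: an actual LLM samples
    # its next reply conditioned on this entire string.
    return f"<reply conditioned on {len(transcript)} chars of history>"

def chat(user_turns: list[str]) -> str:
    # Each turn, the full transcript so far is fed back in, so an early
    # mistake becomes part of the input for every later answer.
    transcript = ""
    for turn in user_turns:
        transcript += f"User: {turn}\n"
        reply = generate(transcript)       # input = all prior text,
        transcript += f"Model: {reply}\n"  # including the model's own earlier errors
    return transcript

print(chat(["hello", "tell me more"]))
```

Nothing in the loop distinguishes the model’s earlier correct statements from its earlier errors: both are just text in the growing transcript, which is why a sycophantic early reply keeps getting reinforced as the chat continues.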