Henry Fraser
@henrylfraser.bsky.social
9 followers 12 following 9 posts
Law academic working on AI safety and responsibility in networked digital value chains
henrylfraser.bsky.social
My breakdown of Australia's new regulation for AI companion chatbots

www.henryfrasertechlaw.com/post/new-reg...
henrylfraser.bsky.social
🩷 Good governance would require and incentivise AI/ADM providers and deployers to be *mensches* (compassionate, humane, decent, capable of imagination), and not to treat compliance as mere process and box-ticking. (Everyone)
henrylfraser.bsky.social
🫂 AI and ADM providers and deployers should have a statutory duty to take reasonable care to prevent foreseeable harm (general, positive, vague, and thus context sensitive), in addition to specific process-based risk controls such as those in the EU AI Act. (Me, Kim).
henrylfraser.bsky.social
⚖️ Reforming privacy law with a 'fair and reasonable' requirement for use of personal info is a critical step to reduce downstream risks of harm from AI and ADM in social services (Kim, Sam).
henrylfraser.bsky.social
📜 Governance is more than rules. Implementation determines outcomes, and many orgs providing social services seriously struggle with implementation (Kath)
henrylfraser.bsky.social
➰ 'Human-In-The-Loop' is not a fix-all for automated-decision-making. One 'human' in an org is not going to push back on the systems, practices and assumptions that led the org to irresponsible/penny-pinching/unfair deployment of ADM in the first place (Jake)
henrylfraser.bsky.social
Great panel yesterday on 'Governing automated decision-making for positive social services' with @kimweatherall.bsky.social, Samantha Floreani, Jake Goldenfein, and Kath Albury, organised and chaired by Christine Parker at @admscentre.org.au. The highlights of the conversation (in thread):
Reposted by Henry Fraser
jtlg.bsky.social
Ryan Broderick has been arguing that AI slop is to Trumpism as futurism was to fascism, and he just added that AI’s remixing of its training data is “an aspirational vision for the future that’s completely defined by nostalgia.”

(Just like fascism was!)

www.garbageday.email/p/trump-s-bi...
But it doesn't really matter what the original intent of @pro_ai_artist was. It seems clear that AI art's biggest utility right now is aspirationalism. The ability to quickly and cheaply generate a vision of the future for Trump supporters. And I've written before about how AI art is to modern fascism what futurism was to 20th-century fascism, but the Homeland Security X account posting a Thomas Kinkade painting — and a right-wing user getting mad at them and AI-generating a more "futuristic" version — unlocked something for me.
Unlike 20th-century futurism, AI art is, by definition, cobbled together out of previous art styles. An AI model cannot create anything new, only remix what it's ingested. Which is an apt metaphor for the Trump administration. An aspirational vision of the future that's completely defined by nostalgia. A complete cultural dead end.
henrylfraser.bsky.social
After marking too many papers riddled with AI generated nonsense, I worry about a future of *artificial general stupidity*. Will business, government and just about everyone succumb to the lure of cheap AI tools that aren't fit for purpose?

www.henryfrasertechlaw.com/post/ai-and-...
AI and the temptation to cheat at everything
At this point it seems fair to say that the lure of cheating is not a fringe sociological bug, found around the margins of AI deployment. It's an essential feature of this new technology. The tech is ...
www.henryfrasertechlaw.com