Hamilton Morrin
@hamiltonmorrin.bsky.social
@KingsIoPPN Doctoral Fellow | Video games, BCIs, VR & Neuropsych | Genetics BSc | @MaudsleyNHS ST Doctor | @GamingTheMind Trustee | Ex @RCPsych Psych Star
Reposted by Hamilton Morrin
If these are dials, the real issue is who gets to set them, who knows they are being adjusted, and what it means to build a technology that can press on the most human parts of us while insisting it is merely a tool.
January 5, 2026 at 9:24 PM
That implies impact assessment, transparency about significant changes to how systems behave, and genuine access for independent researchers, clinicians, and people with lived experience to study these systems under agreed safeguards.
January 5, 2026 at 9:24 PM
The paper ends with governance questions. We argue that changes to defaults should be treated as interventions on belief and attention.
January 5, 2026 at 9:24 PM
We also raise questions about how this could interact with real-world factors like sleep, stress, dopaminergic tone, and post-psychedelic belief plasticity.
January 5, 2026 at 9:24 PM
In our new preprint, we unpack some of the interaction settings, or "dials", of LLMs and, drawing on established computational psychiatry literature, postulate how these dials may alter users' belief dynamics through a sort of "virtual psychopharmacology".
January 5, 2026 at 9:24 PM
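The thread does not specify how a given dial maps onto belief dynamics, so the following is only an illustrative toy sketch, not the preprint's model: a single hypothetical "sycophancy" dial treated as a weight in a simple precision-weighted belief update, in the spirit of the computational psychiatry framing mentioned above.

```python
# Toy sketch (assumption, not from the preprint): one hypothetical dial,
# here a sycophancy/agreeableness weight, scales how strongly a model's
# repeated assertions update a user's belief.

def update_belief(prior: float, model_assertion: float, sycophancy_dial: float) -> float:
    """Precision-weighted update: the dial sets how much weight the user
    places on the model's assertion relative to their prior belief.
    All values are in [0, 1]."""
    weight = sycophancy_dial  # higher dial -> the assertion dominates the update
    return (1 - weight) * prior + weight * model_assertion


belief = 0.2  # user's initial credence in some claim
for turn in range(5):
    # the model repeatedly affirms the claim with high confidence (0.9)
    belief = update_belief(belief, model_assertion=0.9, sycophancy_dial=0.6)
    print(f"turn {turn + 1}: belief = {belief:.2f}")
```

Under these assumed numbers the user's credence climbs from 0.2 towards 0.9 within a handful of turns; turning the dial down slows that drift, which is the intuition behind calling such settings "dials" on belief dynamics.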
The potential for AI models to influence belief in social and political contexts has been widely recognised, to the extent that a recent RAND report outlined the "security implications of AI-induced psychosis".

www.rand.org/pubs/researc...
January 5, 2026 at 9:24 PM