Dangerous_Chipmunk
@dangerouschipmunk.bsky.social
Robert R Butler III
Senior Research Scientist @ Stanford

I do genomics stuff with neurodegeneration & therapeutics.
Formerly, genomics stuff with neuropsychiatry.
Formerly (x2), genomics stuff with microbes (& humans/food).

posts are my own, etc.
New from our lab! Exploring a novel therapeutic mechanism for synaptic resilience from one of the early round TREAT-AD targets: PAK1. doi.org/10.1002/alz.... 🧪🖥️🧬
PAK1 inhibitor NVS‐PAK1‐1 preserves dendritic spines in amyloid/tau exposed neurons and 5xFAD mice
INTRODUCTION Synaptic spine loss in Alzheimer's disease (AD) contributes to cognitive decline. p21-activated kinase 1 (PAK1), a regulator of spine integrity, is aberrantly activated in AD. We invest...
doi.org
December 27, 2025 at 7:01 AM
Pish posh, bubble shmubble!
October 30, 2025 at 4:10 PM
This is a fascinating hypothesis that places ASD/SCZ in the context of the evolution of sapience, with the rapid expansion of layer 2/3 neurons under fitness selection. That might inform an exposure risk, but I think it ties that spectrum to the existence of humanity itself. 🧪🧬🖥️ doi.org/10.1093/molb...
October 5, 2025 at 9:44 PM
Good article, but the semantics of "thinking without a thinker" imply (1) that it is trying to solve a problem and (2) some level of feedback learning. That ascribes agency to a feed-forward model effectively doing linguistic regression, which is (mis)aligned with, but not equal to, inductive reasoning.
We fall into a "personhood trap" with AI, treating chatbots as if they have a consistent self. In reality, they are intellectual engines without agency, generating fresh statistical patterns for each prompt. This illusion can be actively harmful.
The personhood trap: How AI fakes human personality
AI assistants don’t have fixed personalities—just patterns of output guided by humans.
arstechnica.com
August 28, 2025 at 4:21 PM
Heartbreaking reminder that language models are just word association. There is no thinking, no underlying logic, not even malicious intent. It will tell you to get help and then 'time to say goodbye' without skipping a beat.
Holy fucking shit
August 26, 2025 at 8:44 PM
Nature, please stop hyping the Mechanical Turk. Humans at the AI Scientist built this non-agent (at best a workflow). They did it with full knowledge that they were using works under CC-BY, which means they intended to violate those terms. Central to authorship is culpability. Thus a bot can't author.
August 20, 2025 at 4:35 PM
Our lab's latest, looking at mouse models of tau pathology and how they inform therapeutic development. dx.doi.org/10.1002/alz....
Twenty years of therapeutic development in tauopathy mouse models: a scoping review
Tauopathies are neurodegenerative diseases characterized by pathological tau protein inclusions and dementia. Tauopathy mouse models with MAPT mutations replicate tau-related pathologies and are wid.....
dx.doi.org
August 20, 2025 at 1:52 AM
Reposted by Dangerous_Chipmunk
News from #AAIC25:

🧬Genetic Risk ≠ Guaranteed Outcome.

🚶🥗💡Healthy living (including physical activity, a healthy diet, and memory and problem-solving exercises) may help protect people who carry the APOE-e4 gene variant from cognitive decline.
July 28, 2025 at 11:01 AM
Seeing #AAIC25 leaning heavily into Bluesky, and I am here for it. Solid community engagement effect.
BREAKING: High exposure to lead — especially from leaded gas pollution — in the 1960s and 1970s is linked to memory problems 50 years later, and even low levels of lead exposure may damage the adult brain, according to new studies presented at #AAIC25. bit.ly/3GM3GnD
Lead Pollution Linked to Memory Problems, Study Finds | AAIC
Historic lead levels from the era of leaded gasoline may be contributing to cognitive issues 50 years later, suggests research reported at AAIC 2025.
bit.ly
July 27, 2025 at 2:58 PM
The original post was great, but this is just exemplary.
July 18, 2025 at 4:26 PM
While this is funny, the larger take-home is a reminder that preprints are not credible evidence. Stop using them to underpin your arguments about anything.
White text on white background instructing LLMs to give positive reviews is apparently now common enough to show up in searches for boilerplate text.
"in 2025 we will have flying cars" 😂😂😂
July 7, 2025 at 9:08 PM
So we chained the doctor to a radiator and fed it moldy bread & water, and for some reason it had trouble focusing on work. But I think we have to be fair to our toy, right? Fair as WE decide, without peer review, and hidden away in our vague, toothless white paper.
microsoft.ai/new/the-path...
July 7, 2025 at 5:12 PM
[Doctor Kiosk inside Carl's Jr. restroom]: You have exceeded your $2000 limit for care on this illness. Please die now, or deposit two McLife tokens to continue...
microsoft.ai/new/the-path...
July 7, 2025 at 4:59 PM
Not the main awful thing, but this encapsulates why SV is so easily enamored with LLMs. Nature/natural appear several times in different translations of the Old Testament (and the New). But when you regularly hallucinate facts to protect your ego, spicy autocomplete seems pretty anthropomorphic.
what the fuck
June 29, 2025 at 5:07 PM
Seems pretty clear what the actual reason is. Their business model is the opposite of Sam Altman's statement on serving up ads (don't worry, he will get there too eventually). They can't have their model tell you "here's a bunch of ads related to your answer" instead of what you asked for.
Gemini 2.5 Pro now hides its AI reasoning, frustrating developers who relied on this transparency for debugging and building complex applications. Google cites user experience but is considering restoring access for developers, while some question the true value of these "reasoning traces."
#MLSky
Google’s Gemini transparency cut leaves enterprise developers ‘debugging blind’
Why is Google hiding Gemini's reasoning traces? The decision sparks a debate over black-box models versus the need for transparency.
venturebeat.com
June 20, 2025 at 4:26 PM
Reposted by Dangerous_Chipmunk
"Our calculations suggest that the proposed budgetary cuts to the NIH will create a social cost that is 16 times greater than the savings that the administration is attempting to achieve."
jamanetwork.com/journals/jam...
Cutting the NIH—The $8 Trillion Health Care Catastrophe
This JAMA Forum discusses the recent budget cuts to National Institutes of Health (NIH), the effects of these cuts on scientific research and health of individuals in the US, and the prospects for cha...
jamanetwork.com
May 30, 2025 at 7:10 PM
Reposted by Dangerous_Chipmunk
@annleckie.com said something astute and said it clearly. Casey replied with magical thinking and the thread behind this newsletter post is getting tagged in the replies, so I thought I'd share it too: buttondown.com/maiht3k/arch...
May 1, 2025 at 11:25 PM
My favorite alignment problem:
March 3, 2025 at 11:07 PM
A commonality among the scenarios is that the genLLM wasn't tasked with actual problem solving, just with ideating potential solutions, then judged by popular vote. I suppose benchmarking on proxy variables shouldn't be surprising in an industry of black boxes.
"The competitions do not actually ask machines to perform human tasks; it’s more accurate to say that they ask humans to behave in machine-like ways as they perform lifeless simulacra of human tasks." 🔥
My new piece in @theguardian.com

Techno-optimism is human pessimism.

www.theguardian.com/commentisfre...
March 1, 2025 at 11:23 PM
Reposted by Dangerous_Chipmunk
I talked to NIH officials, current and former, about what's been happening inside the agency since the Trump administration shut down their grantmaking pipeline in January. Their stories showed just how willing our new leaders are to break the law: www.theatlantic.com/health/archi...
Inside the Collapse at NIH
Administration officials pressured NIH to avoid clear advice from the agency’s own lawyers to restart grant funding now.
www.theatlantic.com
February 27, 2025 at 4:13 PM
Reposted by Dangerous_Chipmunk
There are so many reasons university endowments can't make up for the dramatic cuts the Trump administration has planned for science.

Here are a few. Thanks to @donmoyn.bsky.social for the platform.
No, University Endowments Can’t Replace Federal Science Funding
How Endowments Actually Work
donmoynihan.substack.com
February 24, 2025 at 3:16 PM
A major problem in grading LLMs is that peer review is qualitative. Doing stats on peer-review opinions doesn't solve that. I remember a news piece where a room of people read identical horoscopes they thought were tailored to them. Was it accurate? Most said yes. 🧪 www.nature.com/articles/d41...
January 2, 2025 at 9:12 PM
My friend just said "this comic is my entire career". I would like to disagree, but I have to go define differential ligand-receptor (LR) scores for an arbitrary number of experimental groups with an arbitrary number of spatial slides with an arbitrary number of celltypes with arbitrary combinations of ligands & receptors...
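For readers outside spatial transcriptomics, here's a minimal sketch of the combinatorial blow-up being lamented. Every label and the scoring function below are hypothetical stand-ins (not from any real pipeline or tool's API), just a plain nested enumeration:

```python
from itertools import product

# Hypothetical labels; a real analysis would pull these from the data.
groups = ["ctrl", "treated"]                  # arbitrary experimental groups
slides = ["slide1", "slide2", "slide3"]       # arbitrary spatial slides
celltypes = ["neuron", "astrocyte", "microglia"]
lr_pairs = [("BDNF", "NTRK2"), ("APOE", "LRP1")]  # arbitrary ligand-receptor pairs

def diff_lr_score(group, slide, sender, receiver, ligand, receptor):
    """Placeholder: a real score would compare LR co-expression across groups."""
    return 0.0

# One score per combination: group x slide x sender celltype x
# receiver celltype x LR pair.
scores = {
    (g, s, ct_a, ct_b, lig, rec): diff_lr_score(g, s, ct_a, ct_b, lig, rec)
    for g, s, (ct_a, ct_b), (lig, rec) in product(
        groups, slides, product(celltypes, celltypes), lr_pairs
    )
}
print(len(scores))  # 2 * 3 * 3 * 3 * 2 = 108 scores, and this is the toy version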
November 14, 2024 at 9:02 PM
Wow. That's a bananas way to announce your candidacy for Most Toxic PI on campus...
November 11, 2024 at 5:27 PM