Qihong (Q) Lu
@qlu.bsky.social
1.4K followers 650 following 62 posts
Computational models of episodic memory. Postdoc with Daphna Shohamy & Stefano Fusi @ Columbia. PhD with Ken Norman & Uri Hasson @ Princeton. https://qihongl.github.io/
Pinned
qlu.bsky.social
I’m thrilled to announce that I will start as a presidential assistant professor in Neuroscience at the City U of Hong Kong in Jan 2026!
I have RA, PhD, and postdoc positions available! Come work with me on neural network models + experiments on human memory!
RT appreciated!
(1/5)
Reposted by Qihong (Q) Lu
kristorpjensen.bsky.social
I’m super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!

www.biorxiv.org/content/10.1...
Reposted by Qihong (Q) Lu
lampinen.bsky.social
Why does AI sometimes fail to generalize, and what might help? In a new paper (arxiv.org/abs/2509.16189), we highlight the latent learning gap — which unifies findings from language modeling to agent navigation — and suggest that episodic memory complements parametric learning to bridge it. Thread:
Latent learning: episodic memory complements parametric learning by enabling flexible reuse of experiences
When do machine learning systems fail to generalize, and what mechanisms could improve their generalization? Here, we draw inspiration from cognitive science to argue that one weakness of machine lear...
arxiv.org
Reposted by Qihong (Q) Lu
joachimbaumann.bsky.social
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses. 
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking: incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.
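To see how this can happen, here is a minimal simulation sketch (mine, not from the paper; the `llm_annotate` helper and the `bias` knob are hypothetical stand-ins for an LLM configuration choice) in which a group-dependent annotation error rate turns a true null effect into a "significant" one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Ground truth: 30% positive labels, independent of the group covariate,
# so the null hypothesis (no group difference) is TRUE by construction.
n = 4000
group = rng.integers(0, 2, size=n)
truth = (rng.random(n) < 0.3).astype(int)

def llm_annotate(truth, group, bias, rng):
    # Hypothetical annotator: label-flip probability depends on group.
    # `bias` stands in for a configuration choice (model, prompt, temperature).
    err = 0.10 + bias * group
    flip = rng.random(truth.shape) < err
    return np.where(flip, 1 - truth, truth)

for bias in (0.0, 0.15):  # two "LLM configurations"
    labels = llm_annotate(truth, group, bias, rng)
    # 2x2 contingency table: group x annotated label
    table = [[np.sum((group == g) & (labels == v)) for v in (0, 1)] for g in (0, 1)]
    chi2, p, dof, expected = stats.chi2_contingency(table)
    verdict = "significant -> LLM hacking" if p < 0.05 else "not significant -> correct"
    print(f"bias={bias:.2f}  p={p:.4f}  ({verdict})")
```

With equal error rates across groups (bias 0.0), noise stays noise; once the error rate tracks the covariate being tested, the regression on annotated labels finds an "effect" that is not in the ground truth.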
Reposted by Qihong (Q) Lu
brendenlake.bsky.social
Our new lab for Human & Machine Intelligence is officially open at Princeton University!

Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)
qlu.bsky.social
Key-value memory networks can learn to represent event memories by their causal relations to support event cognition!
Congrats to @hayoungsong.bsky.social on this exciting paper! So fun to be involved!
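For intuition about the building block, here is a bare-bones key-value memory sketch (a generic illustration under my own assumptions, not the model from the paper): each event is stored as a (key, value) pair, and a cue recalls a similarity-weighted blend of stored values.

```python
import numpy as np

class KeyValueMemory:
    """Minimal key-value episodic memory: store (key, value) pairs for events,
    retrieve values by softmax attention over key similarity."""
    def __init__(self):
        self.keys, self.values = [], []

    def store(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def retrieve(self, query, temperature=0.1):
        K = np.stack(self.keys)                    # (n_events, dim)
        V = np.stack(self.values)                  # (n_events, dim)
        # Cosine similarity between the cue and every stored key
        sim = K @ query / (np.linalg.norm(K, axis=1) * np.linalg.norm(query) + 1e-8)
        w = np.exp(sim / temperature)
        w /= w.sum()                               # attention over stored events
        return w @ V                               # similarity-weighted recall

# Toy usage: keys encode event context; values encode event content.
rng = np.random.default_rng(0)
mem = KeyValueMemory()
for _ in range(5):
    mem.store(rng.standard_normal(8), rng.standard_normal(8))
cue = mem.keys[2] + 0.1 * rng.standard_normal(8)   # noisy cue for event 2
print(np.round(mem.retrieve(cue), 2))              # ~ the value stored for event 2
```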
Reposted by Qihong (Q) Lu
Our new study (titled "Memory Loves Company") asks whether working memory holds more when objects belong together.

And yes, when everyday objects are paired meaningfully (Bow-Arrow), people remember them better than when they’re unrelated (Glass-Arrow). (mini thread)
Reposted by Qihong (Q) Lu
xinchiyu.bsky.social
Now out in print at @jephpp.bsky.social! doi.org/10.1037/xhp0...

Yu, X., Thakurdesai, S. P., & Xie, W. (2025). Associating everything with everything else, all at once: Semantic associations facilitate visual working memory formation for real-world objects. JEP:HPP.
Reposted by Qihong (Q) Lu
woodforbrains.bsky.social
Cortico-hippocampal interactions underlie schema-supported memory encoding in older adults

New paper led by @shenyanghuang.bsky.social!
academic.oup.com/cercor/artic...

Older adults' memory benefits from richer semantic contexts. We found connectivity patterns supporting this semantic scaffolding.
Reposted by Qihong (Q) Lu
mariamaly.bsky.social
Successful prediction of the future enhances encoding of the present.

I am so delighted that this work found a wonderful home at Open Mind. The peer review journey was a rollercoaster but it *greatly* improved the paper.

direct.mit.edu/opmi/article...
Reposted by Qihong (Q) Lu
qlu.bsky.social
Congrats again, Cody!!
qlu.bsky.social
Take a look if you are interested in the differences between LLM memory augmentation and human episodic memory!
And let us know if you have any feedback!
codydong.bsky.social
My first first-author paper, comparing the properties of memory-augmented large language models and human episodic memory, out in @cp-trendscognsci.bsky.social!

authors.elsevier.com/a/1lV174sIRv...

Here’s a quick 🧵(1/n)
authors.elsevier.com
Reposted by Qihong (Q) Lu
hritz.bsky.social
We put out this preprint a couple months ago, but I really wanted to replicate our findings before we went to publication.

At first, what we found was very confusing!

But when we dug in, it revealed a fascinating neural strategy for how we switch between tasks

doi.org/10.1101/2024.09.29.615736

🧵
Reposted by Qihong (Q) Lu
oxpop.bsky.social
@nichols.bsky.social collaborated with researchers at the National University of Singapore on a recent study published in @nature.com on how longer-duration fMRI brain scans reduce costs and improve prediction accuracy for AI models. Read more about the study below 👇
bttyeo.bsky.social
1/11 Excited to share our Nature study led by @leonooi.bsky.social @csabaorban.bsky.social @shaoshiz.bsky.social

AI performance is known to scale with the logarithm of sample size (Kaplan 2020), but in many domains, sample size can be # participants or # measurements...

doi.org/10.1038/s415...
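As a toy illustration of that logarithmic scaling (the sample sizes and accuracies below are invented for the example, not taken from the study), one can fit performance = a + b * log(N) by least squares and extrapolate:

```python
import numpy as np

# Hypothetical accuracy measurements at increasing sample sizes N
N = np.array([100, 200, 400, 800, 1600, 3200])
acc = np.array([0.55, 0.60, 0.64, 0.69, 0.73, 0.78])

# Fit acc ~ a + b * log(N) with ordinary least squares
b, a = np.polyfit(np.log(N), acc, deg=1)   # polyfit returns slope first
print(f"acc ~ {a:.3f} + {b:.3f} * log(N)")
print("predicted acc at N=6400:", round(a + b * np.log(6400), 3))
```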
Reposted by Qihong (Q) Lu
s-michelmann.bsky.social
Fantastic work by our (now former) lab manager Liv Christiano. We assess the test-retest reliability of OPM and compare it to fMRI and iEEG. 🧠📄🧵
oliviachristiano.bsky.social
How reliable is OPM-MEG, and how does it compare to other neuroimaging modalities? 🤔

In a new preprint with ‪@s-michelmann.bsky.social‬, we evaluate the reliability of OPM-MEG within & between individuals, and compare it to fMRI & iEEG during repeated movie viewing. 🧠

📄 doi.org/10.1101/2025...
Reliability and signal comparison of OPM-MEG, fMRI & iEEG in a repeated movie viewing paradigm
Optically pumped magnetometers (OPMs) offer a promising advancement in noninvasive neuroimaging via magnetoencephalography (MEG), but establishing their reliability and comparability to existing metho...
doi.org
Reposted by Qihong (Q) Lu
neurozz.bsky.social
Excited to share a new preprint w/ @annaschapiro.bsky.social! Why are there gradients of plasticity and sparsity along the neocortex–hippocampus hierarchy? We show that brain-like organization of these properties emerges in ANNs that meta-learn layer-wise plasticity and sparsity. bit.ly/4kB1yg5
A gradient of complementary learning systems emerges through meta-learning
Long-term learning and memory in the primate brain rely on a series of hierarchically organized subsystems extending from early sensory neocortical areas to the hippocampus. The components differ in t...
bit.ly
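As a rough sketch of the recipe (my own toy construction, not the paper's setup; it meta-learns plasticity only and omits sparsity for brevity): an inner loop adapts a two-layer network with a separate learning rate per layer, and an outer loop optimizes those per-layer rates.

```python
import numpy as np

def inner_loop(W1, W2, etas, X, Y, steps=5):
    # A few gradient steps with a separate "plasticity" (learning rate) per layer
    for _ in range(steps):
        H = X @ W1.T
        P = H @ W2.T
        G = (P - Y) / len(X)            # grad of 0.5 * ||P - Y||^2 / n w.r.t. P
        gW2 = G.T @ H
        gW1 = (G @ W2).T @ X
        W1 = W1 - etas[0] * gW1
        W2 = W2 - etas[1] * gW2
    return 0.5 * np.sum((X @ W1.T @ W2.T - Y) ** 2) / len(X)

def meta_loss(etas, seed, n_tasks=8):
    # Average post-adaptation loss over a batch of toy regression tasks
    rng = np.random.default_rng(seed)   # common random numbers across calls
    total = 0.0
    for _ in range(n_tasks):
        A = rng.standard_normal((4, 4))           # task-specific mapping
        X = rng.standard_normal((32, 4))
        Y = X @ A.T
        W1 = 0.1 * rng.standard_normal((8, 4))
        W2 = 0.1 * rng.standard_normal((4, 8))
        total += inner_loop(W1, W2, etas, X, Y)
    return total / n_tasks

# Outer loop: meta-optimize per-layer plasticity by central finite differences
etas = np.array([0.05, 0.05])
for step in range(50):
    grad = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = 1e-3
        grad[i] = (meta_loss(etas + e, step) - meta_loss(etas - e, step)) / 2e-3
    etas = np.clip(etas - 0.01 * grad, 1e-4, 1.0)
print("meta-learned per-layer learning rates:", etas.round(3))
```

In the paper the meta-learned plasticity and sparsity values end up graded across layers in a brain-like way; this sketch only shows the mechanics of treating per-layer learning rates as meta-parameters.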