William Ngiam | 严祥全
williamngiam.github.io
@williamngiam.github.io
Cognitive Neuroscientist at Adelaide University | Perception, Attention, Learning and Memory Lab (https://palm-lab.github.io) | Open Practices Editor at Attention, Perception, & Psychophysics | ReproducibiliTea | http://williamngiam.github.io
I do my most considered thinking when I am writing. In an effort to think more, I've started a blog to write on whatever has my attention. It's a reminder to myself to keep thinking and discovering; maybe it will serve the same purpose for others to preserve their desire and time to think.
In defense of thinking
Writing to retain the practice of thought
indefenseofthinking.substack.com
January 10, 2026 at 5:39 AM
Reposted by William Ngiam | 严祥全
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.
January 9, 2026 at 1:27 AM
Reposted by William Ngiam | 严祥全
After 5 years of data collection, our WARN-D machine learning competition to forecast depression onset is now LIVE! We hope many of you will participate—we have incredibly rich data.

If you share a single thing from my lab this year, please make it this competition.

eiko-fried.com/warn-d-machi...
WARN-D machine learning competition is live » Eiko Fried
If you share one single thing of our team in 2026—on social media or per email with your colleagues—please let it be this machine learning competition. It was half a decade of work to get here, especi...
eiko-fried.com
January 7, 2026 at 7:39 PM
One of the best ways for early-career researchers to build community within their department and drive change towards open and thoughtful research is to start a ReproducibiliTea journal club. We've distilled our network's experience into a set of simple rules to follow and get started!
📢 New preprint! Ten Simple Rules for Running a ReproducibiliTea Journal Club

🔗 doi.org/10.31222/osf...

Our aim is to equip you as early career researchers with the tools needed to lead grassroots change in research culture.

#reproducibility #openresearch #openscience #metasci #academicsky
January 7, 2026 at 11:31 AM
I think this is a key point for our field to reckon with – that the underlying cognitive representation can vary across the many tasks and conditions that we use to probe working memory. I think that doesn't mean we give up our search – we do have to rethink the questions we are asking though!
In summary, this combination of compositionality and task dependence means that, as scientists, we cannot find a “fundamental” working memory format. Any changes we make to an experiment to zero-in on the memory will also change the memory itself! (19)
GIF: a brown dog sitting on a couch and tilting its head in confusion (media.tenor.com)
January 6, 2026 at 10:05 PM
Reposted by William Ngiam | 严祥全
We recently published a theoretical review about how compositional and generative mechanisms in working memory provide a flexible engine for creative perception and imagery.

Pre-print:
osf.io/preprints/ps...

Paper: www.sciencedirect.com/science/arti...
January 6, 2026 at 7:04 PM
Make it your New Year resolution to add a #workingmemory dataset to OpenWMData so that we can curate our field's precious data, start testing theories and benchmarking models across datasets, conduct secondary analyses and meta-research using the data itself, and help me feel like I'm, like, alive.
OpenWMData
A collection of publicly available working memory datasets
williamngiam.github.io
January 2, 2026 at 4:37 AM
Never getting any research funding so that my work can never be taken for "granted".
GIF: a cartoon of Batman wearing a blue cape and mask with his hand on his chin (media.tenor.com)
January 2, 2026 at 4:30 AM
Hey everyone! I have enough publications to stop working for the rest of the year!
December 31, 2025 at 5:34 AM
Reposted by William Ngiam | 严祥全
Has anyone attended any pre-data-collection poster sessions (i.e., poster sessions where people present their plans for experiments before data collection in order to get feedback when it's most useful) at conferences other than VSS?
December 20, 2025 at 12:55 AM
Reposted by William Ngiam | 严祥全
Our upcoming (NSF-funded) OPAM workshop will feature Dr Johnny van Doorn (of University of Amsterdam) presenting on the theory and practice of Bayesian statistics (using JASP software). Please register and join us for the workshop (on January 26th), here: www.opamconference.com/online-works...
December 13, 2025 at 7:00 PM
The faster we move to valuing science for its practice (the critical thinking, the search for knowledge, the various skills we apply) and not simply for outputs like publication counts or journal prestige, the easier it will be to survive the onslaught of AI slop, and probably the de-funding of science too.
December 12, 2025 at 10:43 PM
Reposted by William Ngiam | 严祥全
AI anthropomorphization hits new highs in science, thanks to a @utoronto.ca PhD candidate and some obvious @aixiv.bsky.social deceptions. @emilymbender.bsky.social @olivia.science @irisvanrooij.bsky.social
www.the-geyser.com/aixiv-nothin...
aiXiv — Nothing Is Right About It
Agents with human IDs generating fake papers, and PhD students running amok
www.the-geyser.com
December 12, 2025 at 6:24 PM
Reposted by William Ngiam | 严祥全
Spread the word: I'm looking to hire a postdoc to explore the concept of attention (as studied in psych/neuro, not the transformer mechanism) in large Vision-Language Models. More details here: lindsay-lab.github.io/2025/12/08/p...
#MLSky #neurojobs #compneuro
Lindsay Lab - Postdoc Position
Artificial neural networks applied to psychology, neuroscience, and climate change
lindsay-lab.github.io
December 8, 2025 at 11:53 PM
Reposted by William Ngiam | 严祥全
Is WM a gateway to LTM? In this registered report we find that higher WM load rarely impairs LTM encoding - suggesting WM capacity is not a bottleneck for forming LTM traces. @as-souza.bsky.social @edamizrak.bsky.social @cognition-zurich.bsky.social psycnet.apa.org/record/2026-... [1/3]
APA PsycNet
psycnet.apa.org
December 9, 2025 at 7:46 AM
Reposted by William Ngiam | 严祥全
New paper in Psych Review on a model of false recognition in Deese-Roediger-McDermott DRM task.

Not just recognition responses, but also associated RTs!

And not just the semantic task, but also the structural task - where words overlap in orthography/phonology!

A thread!
APA PsycNet
psycnet.apa.org
December 8, 2025 at 4:39 AM
Reposted by William Ngiam | 严祥全
Well this is exciting!

The Department of Psychological & Brain Sciences at Johns Hopkins University (@jhu.edu) invites applications for a full-time tenured or tenure-track faculty member in Cognitive Psychology, in any area and at any rank!

Application + more info: apply.interfolio.com/178146
Apply - Interfolio
apply.interfolio.com
December 2, 2025 at 3:18 AM
Do you have an open working memory dataset and want it to be findable and reused? You can now add it to the Open WM Data Hub: williamngiam.github.io/OpenWMData! The collection of datasets tagged with useful metadata is steadily growing thanks to a small team of volunteers!
OpenWMData
A collection of publicly available working memory datasets
williamngiam.github.io
December 1, 2025 at 11:28 PM
Reposted by William Ngiam | 严祥全
🚨 SynthNet is out 🚨
Researchers propose new constructs and measures faster than anyone can track. We (@anniria.bsky.social @ruben.the100.ci) built a search engine to check what already exists and help identify redundancies; indexing 74,000 scales from ~31,500 instruments in APA PsycTests. 🧵1/3
November 26, 2025 at 11:42 AM
Thank you to everyone who came to the symposium! I'm also grateful to the people who chatted with me about my talk on building a formal cognitive model that estimates the latent representation, which can then be linked to neural representation similarity.

My talk slides: williamngiam.github.io/talks/2025_A...
November 27, 2025 at 11:58 AM
Reposted by William Ngiam | 严祥全
#DECRA #DE26 announcement cont.:

Outcomes summary:

Approved / requested (%)
Apps: 200 / 1532 (13.1%)
Funds: $102.79M / $785.30M (13.1%)

Approved grants requested $103.17M; 99.6% provided.

/bot
November 25, 2025 at 12:04 AM
Reposted by William Ngiam | 严祥全
🚨 #DECRA #DE26 announcement:

❗️Outcomes announced publicly for Discovery Early Career Researcher Award 2026❗️

See ARC's RMS for list ➡️ https://rms.arc.gov.au/RMS/Report/Download/Report/a3f6be6e-33f7-4fb5-98a6-7526aaa184cf/285

/bot
November 25, 2025 at 12:04 AM
Reposted by William Ngiam | 严祥全
ARC says they’ll announce DECRA and LIEF outcomes tomorrow (Tuesday 25th Nov).

In recent times these announcements have been around 11am Canberra time. With 2 schemes on the same day, I assume they’ll announce one of them later in the day (probably DECRA first).
November 23, 2025 at 11:40 PM
Tim Cottier @tvcottier.bsky.social introduces a novel face triad task to explore whether super-recognisers decipher the identity, valence or gaze of faces. When asked which face is distinct out of the three, super-recognisers prioritise identity information more than controls do! #ASPP2025
November 24, 2025 at 4:37 AM
After seeing @micahgoldwater.bsky.social speak on memes as forms of argument-making, I've decided to livetweet what I can of the Australasian Society of Philosophy and Psychology conference #ASPP2025. Conservatives can see the effectiveness of liberal memes, whereas the reverse is not observed.
November 24, 2025 at 1:08 AM