Brad Wyble
@bwyble.bsky.social
Academic, cognitive & vision scientist, computational modeller, cofounder @neuromatch Academy, He/His. This is a personal account.
In the 1940s you could buy a ring for 15 cents that had radioactive material, and if you looked inside you could see the particles hitting a little screen.

*Stares sadly at the latest Happy Meal toy*

(btw one of these is for sale right now on ebay)
January 8, 2026 at 4:15 AM
such as when the top two rows are in the first array and the bottom two rows in the second. More complex patterns, like the one in the image here, showed very little improvement over time. (The data show the average of the simple and complex patterns.) (16)
January 6, 2026 at 7:25 PM
However, this flexibility is challenging for cognitive scientists because memory representations are highly contextual. It means that we might represent an array of colored squares as a set of separate objects, or instead as a single holistic representation. (12)
January 6, 2026 at 7:20 PM
Or if we view the same scene with the goal of remembering the steering wheel, we might create a memory that emphasizes the visual detail of the wheel specifically, maybe including the Mercedes logo. (11)
January 6, 2026 at 7:19 PM
We also assume that memory for one scene or object can be composed of several different *kinds* of representation. For example, when we perceive this car dashboard we might build a memory containing a coarse-grained representation of the whole as well as memories of the familiar objects within. (10)
January 6, 2026 at 7:18 PM
For example, when asked to visually imagine a D, rotate it, and add a 4 on top, subjects can report that the resulting shape is a sailboat (Finke et al. 1989). Memory stores ‘4’ and ‘D’, and generative mechanisms then reconstruct them as visual representations within the perceptual hierarchy. (7)
January 6, 2026 at 7:15 PM
We also assume that generative processing allows conceptual information to be reconstructed in a format akin to visual sensory representations, which can then be manipulated and re-processed by perceptual mechanisms. (4)
January 6, 2026 at 7:11 PM
We assume that compositionality allows complex scenes or objects to be mentally decomposed into constituents that can be individually manipulated or recombined into new forms. (3)
January 6, 2026 at 7:07 PM
We recently published a theoretical review about how compositional and generative mechanisms in working memory provide a flexible engine for creative perception and imagery.

Pre-print:
osf.io/preprints/ps...

Paper: www.sciencedirect.com/science/arti...
January 6, 2026 at 7:04 PM
Journals discover that they can nag you about a review being late before it's actually late.
December 29, 2025 at 4:59 AM
What I expected, of course, was that memory for these passages would improve systematically from 0th order to full text, but that's not quite what happened:
November 13, 2025 at 5:22 PM
Examples of complete gibberish (which they call a 0th order approximation to English):
November 13, 2025 at 5:15 PM
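(If "0th order approximation to English" is unfamiliar: in the classic statistical-approximation sense, a 0th order passage is just words drawn at random, while higher orders condition each word on the preceding context. Below is a minimal illustrative sketch of that general idea in Python; the toy corpus and function names are placeholders, not the study's actual materials.)

```python
import random

# Toy corpus used only to illustrate the idea; not the study's stimuli.
CORPUS = "the cat sat on the mat and the dog ran to the park".split()

def zeroth_order(n_words, lexicon):
    """0th order: words drawn at random, ignoring frequency and context."""
    vocab = sorted(set(lexicon))
    return " ".join(random.choice(vocab) for _ in range(n_words))

def first_order(n_words, corpus):
    """1st order: words drawn in proportion to their corpus frequency."""
    return " ".join(random.choice(corpus) for _ in range(n_words))

def second_order(n_words, corpus):
    """2nd order: each word drawn from words that followed the previous word."""
    followers = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        followers.setdefault(prev, []).append(nxt)
    word = random.choice(corpus)
    out = [word]
    for _ in range(n_words - 1):
        word = random.choice(followers.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(zeroth_order(10, CORPUS))   # complete gibberish
print(first_order(10, CORPUS))    # gibberish, but common words recur
print(second_order(10, CORPUS))   # locally coherent word pairs
```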
Examples of coherent prose of varying length:
November 13, 2025 at 5:13 PM
I changed my profile picture, which they are welcome to use in perpetuity, worldwide, in any manner.
September 17, 2025 at 9:34 PM
Prince of Persia was a groundbreaking game for its accurate character movement back in the late 80s. To achieve this, the creator videotaped his own brother running and jumping on the sidewalk and then did pixel mapping frame by frame, as shown here at the Strong Museum of Play.
September 7, 2025 at 11:31 PM
In our two experiments, we found, then replicated, an accuracy boost for displays containing related pairs in visual working memory. These results suggest that our functional understanding of object interactions facilitates working memory compression. >
August 28, 2025 at 12:12 PM
Our new study (titled: Memory Loves Company) asks whether working memory holds more when objects belong together.

And yes, when everyday objects are paired meaningfully (Bow-Arrow), people remember them better than when they’re unrelated (Glass-Arrow). (mini thread)
August 28, 2025 at 12:07 PM
Look at this interesting Google Trends graph for "attention". Those cycles peak in October and March of each year like clockwork. My guess is that these are Psych perception & cognition classes. Are there other explanations? @sbmost.bsky.social @bjbalas.bsky.social @benwolfevision.bsky.social
July 7, 2025 at 6:31 PM
Drawers? Why not a whole dresser?
May 22, 2025 at 10:05 PM
But it's still just not doing a great job. And yet we're being forced to integrate it into our jobs. Being upset seems completely reasonable.
May 2, 2025 at 4:55 PM
New Preprint! Interested in learning about how working memory is subserved by both compositional and generative mechanisms? Read on!
April 14, 2025 at 2:24 AM
I asked a friend with access to the new GPT image model to create a poster aggressively attacking my work on the attentional blink and it created this masterpiece. Perhaps we finally found a practical use for these models.
March 27, 2025 at 2:26 PM
As a test of Google, I searched for the attentional blink, which I know a lot about:

So much of this is wrong. It's not an inability to recognize, and it's not that T2 is "too soon" (lag 1 sparing eliminates this).

Just really sad that simplistic and wrong takes are now presented as fact.
January 17, 2025 at 7:22 PM