Patrick Cooper
@neurocoops.bsky.social
290 followers 410 following 31 posts
Cognitive neuroscientist at CSIRO
Current: Human-AI collaboration 🤖🤝😀
Previous: cognitive control and theta oscillations, non-instrumental information, curiosity, EEG
Other: Mind controlled video games 🧠👾🕹
He/Him
neurocoops.bsky.social
🧪🤖Our latest paper: Towards a Criteria-Based Approach to Selecting Human-AI Interaction Mode is now out.

We made a handy rubric to help decisions around adding AI to your work. AI isn’t always the solution, but if it is, thinking about how your workflow changes is vital!

dx.doi.org/10.1002/hfm....
Towards a Criteria‐Based Approach to Selecting Human‐AI Interaction Mode
Artificial intelligence (AI) tools are now prevalent in many knowledge work industries. As AI becomes more capable and interactive, there is a growing need for guidance on how to employ AI most effec...
dx.doi.org
neurocoops.bsky.social
I completed my first game jam on Sunday. Two weeks to make a game from scratch around the theme of “replicate.”

If you have s&box (or want to download it) you can check it out: sbox.game/veggiepatty/...
neurocoops.bsky.social
Good luck to all the DECRA applicants, hoping you get some helpful comments today!
neurocoops.bsky.social
Gentle error messages have nothing on R
Screenshot of an error message in RStudio that says "R Session Aborted. R encountered a fatal error. The session was terminated." with an icon of a bomb
neurocoops.bsky.social
This is such a great package!
benediktehinger.bsky.social
🧵 1/5 Excited to share our latest @joss-openjournals.bsky.social 🧪 #paper:

UnfoldSim.jl

New @julialang.org package to simulate continuous event-based time series for #EEG & beyond!

📜 doi.org/10.21105/jos...
🛠 github.com/unfoldtoolbo...

With @judithschepers.bsky.social, Luis Lips & Maanik Marathe
Reposted by Patrick Cooper
lucycheke.bsky.social
I'm in this picture and I don't like it
neurocoops.bsky.social
Instead, how much someone trusted the AI strongly influenced their agreement with its advice. We explored how this developed by modelling how trust evolved over time. We found expectations of the AI’s accuracy changed over time, and violations of these expectations predicted trust in the AI.
neurocoops.bsky.social
Participants performed a deepfake detection task with advice from an AI presented at different stages of the decision-making process. We found the timing of advice had minimal impact on the rate of agreement with the AI or the accuracy of their classifications.
neurocoops.bsky.social
We’ve just had our work on assessing how trust in AI support develops accepted as late-breaking work at CHI25. DOI is still to go live but you can check it out here in the meantime:
camps.aptaracorp.com/ACM_PMS/PMS/...
(I’ll update the thread when the DOI is live).
Trust in AI is dynamically updated based on users' expectations
DOI: https://doi.org/10.1145/3706599.3719870 CHI EA '25: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, April 2025
camps.aptaracorp.com
Reposted by Patrick Cooper
drquekles.bsky.social
🧠 💡 PhD scholarships available! 💡🧠

The CogNeuro group at @marcsinstitute.bsky.social is on the hunt for talented students to work on face and object perception using EEG and neural decoding. Learn more here 👇 and get in touch!

Project 1: shorturl.at/0NwQl
Project 2: shorturl.at/yTdqG
Reposted by Patrick Cooper
anianruoss.bsky.social
Ever wonder how well frontier models (Claude 3.5 Sonnet, Gemini 1.5 Flash & Pro, GPT-4o, o1-mini & o1-preview) play Atari, chess, or tic-tac-toe?

We present LMAct, an in-context imitation learning benchmark with long multimodal demonstrations (arxiv.org/abs/2412.01441).

🧵 1/N
neurocoops.bsky.social
I’d like to be added please.
Reposted by Patrick Cooper
aleximas.bsky.social
Why do I think these instances are interesting? To me, these are not just random instances where LLMs are sometimes wrong (like people); these are *diagnostic* that LLMs do not have a world model and are not “reasoning”. It exposes the basic architecture. Why? 1/n
tomerullman.bsky.social
thinking of calling this "The Illusion Illusion"

(more examples below)
Reposted by Patrick Cooper
rborza.bsky.social
🚨 If you haven’t seen it yet…

📢 The NIH BioArt Source provides a library of FREE professionally designed illustrations and icons, available for anyone to use. They can be Downloaded in High Definition.

Check it out at bioart.niaid.nih.gov
Reposted by Patrick Cooper
jaanaru.bsky.social
The craziest paper I have ever done is this thought experiment with Albert Gidon and Matt Larkum.

In the first journal, reviewer 1 recommended that we should not try to publish this; reviewer 2 called it "wacky". Thanks for the motivation: A sequel is coming up!

journals.plos.org/plosbiology/...
Does brain activity cause consciousness? A thought experiment
The authors of this Essay examine whether action potentials cause consciousness in a three-step thought experiment that assumes technology is advanced enough to fully manipulate our brains.
journals.plos.org
Reposted by Patrick Cooper
emollick.bsky.social
Anthropic posts something a lot of social scientists studying AI have been thinking - you need to apply basic statistical methods to AI evaluations!

A solid attempt to lay out how to do better tests with basic methodology. https://buff.ly/3OijwX0
neurocoops.bsky.social
I’d love to be added 🙏
neurocoops.bsky.social
Congratulations Dr Jarvis! 🎉🎉
neurocoops.bsky.social
Our latest work is up as a preprint to enjoy.

We gave LLMs tasks adapted from experimental psychology to see how they contribute to teamwork.

We found LLMs were reasonable at monitoring tasks but poor at tasks requiring planning and strategising.
psyarxivbot.bsky.social
How well do Large Language Models perform as team members? Testing teamwork capabilities of LLMs: http://osf.io/qyfrw/
neurocoops.bsky.social
Looking up some cognitive task analysis figures this morning and kept finding those slides with the blue background and yellow text.

Took me down a rabbit hole about why these things were everywhere. Someone else had the same idea.

en.rattibha.com/thread/12942...