Marek
@marekmcgann.bsky.social
Cognitive scientist. Teacher. Nerd.

Cognitive science of the enactive, ecological, and (redundantly) embodied sort. Also, some stuff on scientific practice in psychology.


I co-convene these: https://www.ensoseminars.com

(he/him)
Wouldn't be surprised if that's playing a part, though generally GenAI is more symptom than cause of the kind of thing he's talking about, I think.
November 29, 2025 at 12:18 PM
I can't say I'm seeing this in huge numbers. It's definitely the case (and Nick Sousanis says this down-thread) that there's a higher number who just aren't able to engage, but it's a widening gap in achievement rather than a pervasive thing.

There's a messy causality, I think.
November 29, 2025 at 11:41 AM
Reposted by Marek
If you look quickly at the start of the video of the scroll you can see a similar drawing as the popup card… bsky.app/profile/nsou...
Because I now can use video here: my 15 ft long continuous sequence (also divisible in 22 pgs) that opens #Nostos! Took about a year to make, longer to find a printer. I’d love to see it printed accordion-style - but we’ll see as the rest of the book gets closer to publication! #Unflattening 2
November 25, 2025 at 6:57 AM
Reposted by Marek
3) this may be particularly relevant when we're thinking about digital self control and the attention economy and the intention economy
dl.acm.org/doi/10.1145/...

Musk is unambiguously describing an escalation of the problems we identify in these last two papers.
Autonomous Regulation of Social Media Use: Implications for Self-control, Well-Being, and UX | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
November 24, 2025 at 1:52 PM
Reposted by Marek
2) we have operational theories that address scales of user motivation but we (and sometimes the psychologists) neglect the implications of those different scales
dl.acm.org/doi/full/10....
November 24, 2025 at 1:52 PM
Reposted by Marek
1) even well-meaning tech folk fail to differentiate aspects of agency and autonomy. In particular, we ignore different scales: what aligns with my immediate interests, what aligns with longer-term interests, what enables me to engage and adapt flexibly over time

dl.acm.org/doi/10.1145/...
How does HCI Understand Human Agency and Autonomy? | Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
dl.acm.org
November 24, 2025 at 1:52 PM
Reposted by Marek
🏆 Individual: @simine.com, psychologist at @unimelb.bsky.social & editor-in-chief of Psychological Science, is recognized for pioneering methodological rigor, reproducibility & collaborative research, driving initiatives such as @improvingpsych.org & the journal Collabra @ucpress.bsky.social. (2/5)
November 24, 2025 at 10:00 AM
Reposted by Marek
> models give unsafe responses because that is not what they are designed to avoid. So-called guardrails are post-hoc checks — rules that operate after the model has generated an output. If a response isn't caught by these rules, it will slip through

www.forbes.com/sites/weskil...

2/n
November 17, 2025 at 5:56 AM
Reposted by Marek
as if it's the speed of innovation that's been holding us back, and not regressive politicians, self-absorbed greedy billionaires, or the collapse of information ecosystems
November 22, 2025 at 7:10 PM