Matti Vuorre
@matti.vuorre.com

I am an assistant professor at the Department of Social Psychology at Tilburg University's School of Social and Behavioral Sciences.

I have a website at https://vuorre.com.


Pinned
I am hiring PhD candidates to study the psychology of attention & technology use at @tilburg-university.bsky.social.

We're looking for motivated & curious scholars with expertise in cognitive psychology and statistics, and offer a friendly work environment with great terms & benefits.

tiu.nu/22989
🚨 New draft 🚨

We built an LLM-enabled system to measure greenwashing scores in 1 million worldwide Facebook ads.

We found vast networks of Facebook pages sharing pro-fossil fuel messages & show that ads are targeted at left-leaning areas with fossil fuel investments.

Link: doi.org/10.48550/arX...

The MSFT angry bird 🫡

It's Always Sunny in Philadelphia.

Microdosing hot peppers! We need more science on this asap.
two police officers shake hands in front of a sign that says fx

For anyone who hasn't seen it yet, season 17 was very good. It brought back a lot of the elements from earlier seasons that made the show what it is (though not all the minor characters I'd have hoped for). I also appreciate the 8 episodes per season format--quality over quantity.

Incredible dataviz.
Time for another episode of “bad data viz”

This one truly has me here thinking “Am I a Dolt?” What is this even saying??? Doesn’t WSJ have amazing talented staff skilled in data visualization? Like.. what is this?


Reposted by Matti Vuorre

It's easy to produce spurious findings:
A meaningless score based on irrelevant evaluations (“My relationship has very good Saturn”) was moderately related to common relationship measures (satisfaction, commitment) & predicted those measures 3 weeks later

journals.sagepub.com/doi/10.1177/...

So 'working on files on my computer' <-> syncing with a cloud system (GitHub et al.) -> archiving the files and directories (Zenodo et al.). It's not rocket science!

I agree. This might have something to do with supporting "modular" research projects / publishing, but at the very least the UI does not help. I support using something called a 'file system' where projects can have 'directories' and 'files', and those can be easily shared & collaborated on :)

This post highlights how far behind the OSF is in usability and speed. The ResearchBox UI does seem clean, but why not use Zenodo directly? (RB archives to Zenodo anyway.)

Thanks!

Time to stop doing non-open reviews or at least complain to the journal? That sucks.

How strong is strong? Posteriors for 50% n=2,4,8,16 plz 🙏

Yeah makes sense. Also probably 'sensitive' behaviors affected differently / for longer time. What if I just never tell the participants 🥸

This episode reminded me of 2 things:

1. We know how & why the bootstrap works, but the fact that it does is just very cool.
2. Bayes. All that strapping (& which bias adjustment to choose) just goes away as unnecessary.
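Point 1 can be made concrete with a minimal percentile-bootstrap sketch in Python (the data here are made up for illustration; numpy assumed):

```python
import numpy as np

# Percentile bootstrap for the mean: resample the observed data with
# replacement many times and recompute the statistic on each resample.
rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=50)  # hypothetical sample

boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(4000)
])
# The 2.5% and 97.5% quantiles of the bootstrap distribution give a 95% CI.
ci_low, ci_high = np.quantile(boot_means, [0.025, 0.975])
```

This is the plain percentile method; the bias adjustments mentioned above (e.g. BCa) differ only in how those quantiles are chosen.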
Join us tomorrow as we repeat ourselves.
Join us as we repeat ourselves tomorrow.
Tomorrow, we repeat ourselves. Join us!

Nice study! So the implication is to include a "warmup" period of ~5 days and use data only after that? Good to know.

It's the 🤷 prior

Yup, the binomial posterior is the easiest example of prior = data. Here a Beta(0.5, 0.5) prior is the same as having previously observed half a success and half a failure.

Unknowing users accidentally adding a whole extra trial to the dataset!

It's because all the others' data is 16/1000, but the Bayes data is 16.5/1001 😉
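The prior-as-pseudo-observations reading can be sketched in a few lines of Python (a standard conjugate Beta-binomial update; the 16/1000 numbers are from this thread, the helper name is mine):

```python
# Conjugate Beta-binomial update: a Beta(a, b) prior plus k successes in n
# trials gives a Beta(a + k, b + n - k) posterior, so the prior acts like
# a + b extra pseudo-observations.
def beta_binomial_posterior(k, n, a=0.5, b=0.5):
    post_a = a + k
    post_b = b + (n - k)
    return post_a, post_b, post_a / (post_a + post_b)  # posterior mean

# Jeffreys' Beta(0.5, 0.5) prior ~ half a success and half a failure
# already "in the data": 16/1000 observed becomes 16.5 out of 1001.
post_a, post_b, post_mean = beta_binomial_posterior(16, 1000)
```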

The most comprehensive dataset of video game play and psychological functioning available with CC0 license at zenodo.org/records/1760...
We released a pretty cool dataset/preprint today looking at video game play, cognition, time-use and a ton of self-reported psych measures at osf.io/preprints/ps... with @nballou.bsky.social @matti.vuorre.com @thomashakman.bsky.social @rpsychologist.com and @shuhbillskee.bsky.social RRs coming soon

~40% of psyarxiv preprints contain links to open data in 2025 vs (e.g.) ~10% in 2019 (although in the latter case people mostly did not report this metadata): vuorre.com/psyarxiv-das...

vuorre.com/psyarxiv-das... now shows tag co-occurrence networks for psyarxiv preprints. Useful for e.g. looking at what kinds of things people study in relation to social media (left) and technology (right).
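Counting tag co-occurrences like this takes only a few lines of Python (the tag lists below are invented for illustration; the real input would be PsyArXiv preprint metadata):

```python
from collections import Counter
from itertools import combinations

# Hypothetical tag lists, one per preprint.
preprints = [
    ["social media", "well-being", "adolescence"],
    ["social media", "well-being"],
    ["technology", "attention", "social media"],
]

# Count each unordered tag pair appearing on the same preprint; these
# counts are the edge weights of a co-occurrence network.
pair_counts = Counter(
    pair
    for tags in preprints
    for pair in combinations(sorted(set(tags)), 2)
)
```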

They are laughing at US!

Yeah. It seems to me that granting agencies are making positive changes tho (suggesting/requiring preprints, diamond oa, etc.).

Reposted by Magnus Johansson

Everyone involved in scientific publishing should take a look at these papers.

>"The domination of scientific publishing in the Global North by major commercial publishers is harmful to science; we need [...] to re-communalise publishing to serve science not the market."
We wrote The Strain on Scientific Publishing to highlight the problems of time & trust. With a fantastic group of co-authors, we now present The Drain of Scientific Publishing:

a 🧵 1/n

Drain: arxiv.org/abs/2511.04820
Strain: direct.mit.edu/qss/article/...
Oligopoly: direct.mit.edu/qss/article/...