moin syed
syeducation.bsky.social
@syeducation.bsky.social
Professor of Psychology, University of Minnesota. Sporadically writing stuff at http://getsyeducated.substack.com
Reposted by moin syed
In 2014 I introduced a replication project in my grad research methods class. I taught this version of the class 4 times (no longer teach it). Some tallies: 9 published replication papers; 30 grad student authors; 19 *open* data sets; materials, syntax, etc also open (all on OSF). Check them out 👇
November 24, 2025 at 11:34 PM
Reposted by moin syed
I have no expertise to evaluate the problems of this particular paper, but I think it’s *very very important* that we don’t present “the reviewer recommended rejection” as evidence of scientific or editorial misconduct.
November 22, 2025 at 12:55 PM
Reposted by moin syed
The real issue, IMO, is neither education nor tools. It’s that the numbers don’t matter. We quantify reality with numbers that don’t matter. It’s hard to care about 7 ooglebliks plus or minus 2 ooglebliks if nobody gives a fuck what an oogleblik is
This is an education problem, not a tool problem; and we don't want people simply moving from thinking p-values are magic to thinking confidence intervals are.
Next: Geoff Cumming @thenewstats.bsky.social with 'Statistical significance and p values: The researcher’s heroin'
* p values are highly unreliable - don't trust them, don't use them!
www.thenewstatistics.com
tiny.cc/osfsigroulette
#IRICSydney
November 18, 2025 at 8:14 PM
the internet is working just fine, thanks.
November 18, 2025 at 2:44 PM
Reposted by moin syed
Issue 22 of RDM Weekly is out! 📬

- FAIR Data Cheatsheet @w-u-r.bsky.social
- Open Research: Examples of Good Practice, and Resources Across Disciplines @ukrepro.bsky.social
- 3 Myths About Open Science That Just Won’t Die @syeducation.bsky.social
and more!

rdmweekly.substack.com/p/rdm-weekly...
RDM Weekly - Issue 022
A weekly roundup of Research Data Management resources.
rdmweekly.substack.com
November 18, 2025 at 2:22 PM
Reposted by moin syed
I thought about this point a bit more and I think Moin is onto something 😅 take randomized experiments, which are a huge hassle — yet social psych has determined that “this is what science looks like”, and so they be randomizing (for better or worse), no matter what.
November 14, 2025 at 5:19 AM
Reposted by moin syed
Psychology wants to stay WEIRD, not go WILD: https://osf.io/bxk6c
November 12, 2025 at 10:37 PM
A quick (1000 words) read to enjoy with your morning coffee or afternoon tea:

"Psychology wants to stay WEIRD, not go WILD"

Why hasn't psychology diversified its samples, methods, theories, etc.? Because it doesn't want to. osf.io/preprints/ps...
November 13, 2025 at 2:59 PM
Reposted by moin syed
Shout out to @psychmag.bsky.social for being speedy, open access, and helping to share things I care about quickly and freely:

www.bps.org.uk/psychologist...
November 13, 2025 at 2:01 PM
Reposted by moin syed
What's the take-home message?

If you're submitting AI slop you're a loser. You're just making these great free services harder to run, and making it more difficult to separate signal (science) from noise (your crappy AI shit).
November 3, 2025 at 2:52 PM
Reposted by moin syed
There still seems to be a lot of confusion about significance testing in psych. No, p-values *don't* become useless at large N. This flawed point also used to be framed as "too much power". But power isn't the problem – it's 1) unbalanced error rates and 2) the (lack of a) SESOI (smallest effect size of interest). 1/ >
But here's the thing: p-values and significance become useless at such large sample sizes. When you're dividing the coefficient by the SE and the sample size is in the tens of thousands, EVERYTHING IS SIGNIFICANT. All you're testing is whether the coefficient is different from zero.
October 31, 2025 at 8:13 AM
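[Editor's note: the exchange above can be seen in a minimal simulation. The effect size (d = 0.02) and sample size below are arbitrary illustrative choices, not values from either post; the point is that at very large N even a trivially small true effect is flagged as "significant", which is why a smallest effect size of interest matters more than p alone.]

```python
import numpy as np
from scipy import stats

# Simulate two groups with a trivially small true difference (d = 0.02)
# at a very large per-group N. These numbers are illustrative assumptions.
rng = np.random.default_rng(42)
n = 200_000
d = 0.02  # tiny true standardized mean difference

group_a = rng.normal(0.0, 1.0, n)
group_b = rng.normal(d, 1.0, n)

# A standard t-test will almost certainly reject the null here,
# even though the effect is far too small to matter in practice.
t, p = stats.ttest_ind(group_a, group_b)
print(f"true d = {d}, N per group = {n:,}, p = {p:.3g}")
```

With the SE of the mean difference at roughly sqrt(2/n) ≈ 0.003, the test detects the 0.02 difference easily; significance here tells you nothing about whether the effect clears any meaningful threshold.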
Reposted by moin syed
MetaROR, a platform for reviews of research on research, is a success! We have published 24 sets of reviews and have 16 submissions in process. MetaROR now has 9 partners - these are journals that agree to use our reviews when authors submit to them. metaror.org #metascience #openaccess
Home - MetaROR
MetaResearch Open Review: a new platform designed to transform how we review and share metaresearch.
metaror.org
October 26, 2025 at 10:33 PM
Reposted by moin syed
We built the openESM database:
▶️60 openly available experience sampling datasets (16K+ participants, 740K+ obs.) in one place
▶️Harmonized (meta-)data, fully open-source software
▶️Filter & search all data, simply download via R/Python

Find out more:
🌐 openesmdata.org
📝 doi.org/10.31234/osf...
October 22, 2025 at 7:34 PM
Reposted by moin syed
Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how “AI Surrogates” entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/
AI Surrogates and illusions of generalizability in cognitive science
Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new know…
www.sciencedirect.com
October 21, 2025 at 8:24 PM
Reposted by moin syed
Results of the replication are in!

Chocolate is more desirable than poop:

Cohen's d_rm = 6.20, 95% CI [5.63, 6.78]

N = 486, two single-item 1–7 Likert scales of desirability.

w/
@jamiecummins.bsky.social
Make an effect size prediction!

@jamiecummins.bsky.social and I are replicating Balcetis & Dunning's (2010) "chocolate is more desirable than poop" (Cohen's d = 4.52)

Let us know in the replies what effect size you think we'll find. Details of the study in the thread below.
October 14, 2025 at 6:16 PM
Nice post from Matti on preprints and the nonsense of journals. This paragraph is outstanding.
October 14, 2025 at 5:40 PM
Reposted by moin syed
Against Publishing: universonline.nl/nieuws/2025/...

Preprints are read, shared, and cited, yet still dismissed as incomplete until blessed by a publisher. I argue that the true measure of scholarship lies in open exchange, not in the industry’s gatekeeping of what counts as published.
October 14, 2025 at 9:16 AM
Reposted by moin syed
Apropos of recent open science conversations - this paper is an awesome primer for grad students and faculty who want to learn more: online.ucpress.edu/collabra/art...
Easing Into Open Science: A Guide for Graduate Students and Their Advisors
This article provides a roadmap to assist graduate students and their advisors to engage in open science practices. We suggest eight open science practices that novice graduate students could begin ad...
online.ucpress.edu
October 3, 2025 at 11:00 AM
Reposted by moin syed
"The simplest recommendation that flows from my arguments in this article is that we need to enhance the degree of specificity and precision when making theoretical claims."

By @syeducation.bsky.social
Elegant theories and the problems of social psychology
This perspective article provides a commentary on the current state of theories used in social psychology, with a particular emphasis on the importance of distinguishing between theoretical models ...
doi.org
September 21, 2025 at 11:12 PM
Reposted by moin syed
Just learned about this diamond journal, which has apparently been running since 2015! Looks like a very nice place for any meta-sciency work & probably deserves some visibility.

septentrio.uit.no/index.php/no...
Nordic Perspectives on Open Science
Nordic-Baltic journal of Open Access to publications, data, peer review and open science.
septentrio.uit.no
September 20, 2025 at 12:31 PM
Reposted by moin syed
Can large language models stand in for human participants?
Many social scientists seem to think so, and are already using "silicon samples" in research.

One problem: depending on the analytic decisions made, you can basically get these samples to show any effect you want.

THREAD 🧵
The threat of analytic flexibility in using large language models to simulate human data: A call to attention
Social scientists are now using large language models to create "silicon samples" - synthetic datasets intended to stand in for human respondents, aimed at revolutionising human subjects research. How...
arxiv.org
September 18, 2025 at 7:56 AM
Reposted by moin syed
In a new paper, my colleagues and I set out to demonstrate how method biases can create spurious findings in relationship science, by using a seemingly meaningless scale (e.g., "My relationship has very good Saturn") to predict relationship outcomes. journals.sagepub.com/doi/10.1177/...
Pseudo Effects: How Method Biases Can Produce Spurious Findings About Close Relationships - Samantha Joel, John K. Sakaluk, James J. Kim, Devinder Khera, Helena Yuchen Qin, Sarah C. E. Stanton, 2025
Research on interpersonal relationships frequently relies on accurate self-reporting across various relationship facets (e.g., conflict, trust, appreciation). Y...
journals.sagepub.com
September 10, 2025 at 6:18 PM
This is a good post, tracking closely with my own (often unpopular) views on the subject. Worth a read for anyone with interest or experience in interdisciplinary work.
September 8, 2025 at 2:30 PM
Reread this one for the history of psych class I am teaching and was reminded of how good it is. Short, thoughtful, and provocative. A highly worthwhile read. doi.org/10.1037/gpr0...
September 5, 2025 at 1:59 PM
Reposted by moin syed
If this is turning into a battle, then the researchers’ possible counter is to normalize updating preprints and CVs with “accepted at journal x” as sufficient for getting publication credit and not pay the fees.
August 29, 2025 at 9:38 PM