Veli-Matti Karhulahti
@mkarhulahti.bsky.social

science, gaming, art (senior researcher at university of jyväskylä)


But who has the time for validity considerations when the deadlines are next month
this is one of my favourite observations about sample size calculations. (afaik first articulated by Miettinen in 1985)

Reminds me how back in the day Riot publicly insisted that only 2% of their users express verbal toxicity-- turned out that to be classified "toxic" one had to be flagged by peers literally *hundreds* of times within a couple of days
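(for intuition, a toy simulation of how the share of "toxic" users depends on where the flag threshold sits-- every number below is invented for illustration, nothing here is Riot's actual data:)

```python
# Toy sketch: the "toxic" rate shrinks fast as the flag threshold rises.
# All parameters are made up for illustration; not Riot's data.
import numpy as np

rng = np.random.default_rng(0)
# Simulate peer-report counts for 100k players over a short window
# (heavy-tailed, as report data tend to be)
flags = rng.negative_binomial(n=0.3, p=0.05, size=100_000)

for threshold in (1, 10, 100, 300):
    rate = (flags >= threshold).mean() * 100
    print(f"flagged >= {threshold:>3} times: {rate:6.2f}% of users classified 'toxic'")
```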

generally, based on many calculations i've seen over the years, a huge difference between publishers seems to come from ad & promotion costs-- which are surely important when competing in the prestige/visibility game under attention economy

Yes but in this case preprint=article, running the journal round is optional (alas, most of us will keep doing it for a good while indeed -- then again, in-kind costs at PCI are difficult to compare to publishers who don't report donated time)

Got fed up with that a few years ago and life has been so much better since-- today, there are so many good diamond options that the only thing potentially keeping one in the broken system is severe evaluation pressure

Sometimes it feels science isn't that hard after all, all one needs to do is think hard for 5 seconds

I'm aware of pilots/plans where applicants have an option to submit their plan as stage 1 draft, this makes sense imo (raise awareness etc) but nuance is so important in such changes

This is such a key hermeneutic for older texts, there's always stuff going on behind the scenes, the thing that's being built on
Often they are also shadow boxing with the previous paradigm, that can remain unnamed, since "everyone knows it", even if from today's point of view it is completely forgotten. Context matters.

Do you know if there's an English version available or coming out?

meanwhile, hoping the funders' own publication portals and diamond venues solve the APC issue (as some already do to some degree)

Reposted by Dorothy Bishop

I don't know anyone who has worked deeply with RRs (authors/editors/reviewers) who'd support something like this for so many reasons-- a friendly reminder that RRs are a tool for certain scenarios, definitely underused, and like any other tool, to be used wisely
If funders wanted to make a huge positive impact on scientific practice, they would mandate that all publications appear first as registered reports, that APCs are only paid for RRs, and that grant applications only require preliminary data for RR sample size determination / power analysis.
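(for reference, a minimal sketch of the kind of a priori power analysis a stage-1 RR plan would report-- the effect size, alpha, and power below are placeholder assumptions of mine, not values from the post:)

```python
# Minimal sketch of an a priori power analysis for a two-group comparison,
# the kind of calculation a stage-1 RR would report.
# Effect size, alpha, and power are placeholders, not values from the thread.
from statsmodels.stats.power import TTestIndPower

smallest_effect = 0.3  # smallest effect size of interest (Cohen's d)
n_per_group = TTestIndPower().solve_power(
    effect_size=smallest_effect, alpha=0.05, power=0.90, alternative="two-sided"
)
print(f"required n per group: {n_per_group:.0f}")  # ~235 with these placeholders
```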

We likewise found a steep increase in failed controls (0.1 → 2.3 → 12.8 → 22.7) the more severely ppl replied to a single-item tech problems question-- this could be intentional bad responding too, but nonetheless critical to be aware of

Many EU/US ppl say they must keep submitting to these publishers bc it'd be unfair for co-author students not to-- fair enough sometimes, but if one is a student in a major uni and/or working for a known lab, they've already got a top 1% global advantage without prestige publishing, they'll be ok
We wrote The Strain on Scientific Publishing to highlight the problems of time & trust. With a fantastic group of co-authors, we present The Drain of Scientific Publishing:

a 🧵 1/n

Drain: arxiv.org/abs/2511.04820
Strain: direct.mit.edu/qss/article/...
Oligopoly: direct.mit.edu/qss/article/...

i understand it's difficult to run large data collections like this (especially in HBSC) but that's exactly what's wrong today: brute force large datasets with whatever measures & then think later if any of the investment was worth anything

funnily enough, the paper says the scale (IGDS) is one of the "better functioning measurements" and cites our paper-- in our paper we actually found only 1 of 9 symptoms measured in a content valid way 🫠 (also curious how translation across 12 languages took place)

it can be interesting to add multiple items even when they don't signal problems (especially if we take networks seriously) but the 2013 symptom list, sketched in dsm-5 appendix, is way outdated and never worked tbh

New WHO-collaborated (HBSC) study, n=44k, finds that boys who never play games have the same amount of gaming disorder symptoms as those who play daily-- it's 2025, what are we doing? measurement?

a) avoid Finnish food
b) many nice museums, Villa Gyllenberg & Didrichsen worth a visit for the island space alone
c) best coffee: Päiväkahvibaari 1 (vallila)
d) library Oodi
e) saunas, Sompasauna 24/7 is classic (recently moved tho, not sure how good the new location is)

Can they argue the ad is for single-player as long as multiplayer isn't mentioned? (assuming some purchases will be offered in single mode too tho)
📢 Register for the 5th Helsinki Initiative webinar (8 December) on Multilingualism in Scholarly Communication with presentations by @tatsuya-amano.bsky.social, @karenstroobants.bsky.social and Andre Brasil!

More information and registration: www.helsinki-initiative.org/en/events/5t...

One of my all-time fav rants on this topic-- especially love this figure demonstrating how expert clinicians fail to agree on major depression diagnosis most of the time (57%) based on the DSM-5 field trials
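(for anyone unfamiliar, field trials quantify this kind of agreement with inter-rater stats like Cohen's kappa-- a toy example with invented ratings, not the field trial data:)

```python
# Toy example of Cohen's kappa for two clinicians' yes/no diagnoses.
# Ratings are invented; the DSM-5 field trials reported kappas, not these numbers.
from sklearn.metrics import cohen_kappa_score

clinician_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]  # 1 = diagnosis given
clinician_b = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
agreement = sum(a == b for a, b in zip(clinician_a, clinician_b)) / len(clinician_a)
print(f"raw agreement: {agreement:.0%}")                              # 70%
print(f"Cohen's kappa: {cohen_kappa_score(clinician_a, clinician_b):.2f}")  # 0.40
```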

currently editors (handling tons of papers) must heavily trust reviewers as they cannot be experts in everything-- a move toward more distributed editorial labor (in exchange for less reviewing) expects more human scrutiny from topic-fit editors who'd then also manage fewer papers on average

The reason why this is interesting & maybe even promising is: it isn't simply "less human scrutiny" but a shift from reviewer trust to editor trust--
In a future publishing system, qed + editor could certainly replace "reviewers+editor" somewhat. Editors could still call in expert reviewers when they feel it's needed.

But replacing "2 reviewers + 1 editor" with "1 reviewer + qed + 1 editor" would probably give similar results.

10/n

attention to alternatives is good as it contributes to gradual, slow changes that over years (decades) can lead to system-level changes too-- but it's those institutions that offer alternatives which need to become more sustainable, visible, and "prestigious" for progress to keep happening