Ingo Rohlfing
@ingorohlfing.bsky.social

I am here for all interesting and funny posts on the social sciences, broadly understood and including open science and meta science, academia, teaching and research. https://linktr.ee/ingorohlfing

This is a HUGE win…and one that happened because we ~collectively~ said “NO!”

But AAAS coming in and saying on record to the NYT “Science is doing ok. Things are not bad at all…” is baffling.

If things are hard for you as a scientist, please share in the comments.

www.nytimes.com/2026/01/10/s...
Congress Is Reversing Trump’s Steep Budget Cuts to Science
www.nytimes.com
Working on a principled way to describe the sample size, N, in #science papers. Will be attempting to prove Holmes' theorem over the next few years. Feedback welcome 🙏 #stats

Is that the justification? My first guess was that the volume of proposals has increased, and that this is also why one assumes that the use of LLMs is growing and cannot be contained anyway.
X/Twitter's rough full volume is around 500 million posts every day, or roughly 182.5 billion posts per year.

By contrast, we found 11.2 million research posts in all of 2025 on there.

In other words, about 0.006% of Twitter appears to be sharing research. Basically zero.
Since Jan 1 2025, which feels like four trillion years ago, research has been shared on here 5 million whole-ass times. Bluesky recently passed 2 billion posts IN TOTAL.

So 0.25% of the entire site's traffic was citations to research.

That is actually massively high. Is it? Yes. Here's why.
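Taking the figures quoted in these two posts at face value (the platform totals are the rough numbers from the posts, not official statistics), the comparison can be checked with a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope comparison of research-sharing rates,
# using the approximate figures from the two posts above.

twitter_posts_per_year = 500_000_000 * 365   # ~182.5 billion posts/year
twitter_research_posts = 11_200_000          # research posts found in 2025

bluesky_posts_total = 2_000_000_000          # ~2 billion posts in total
bluesky_research_posts = 5_000_000           # research posts since Jan 1, 2025

twitter_share = twitter_research_posts / twitter_posts_per_year * 100
bluesky_share = bluesky_research_posts / bluesky_posts_total * 100

print(f"Twitter/X: {twitter_share:.4f}% of posts share research")
print(f"Bluesky:   {bluesky_share:.2f}% of posts share research")
print(f"Ratio:     ~{bluesky_share / twitter_share:.0f}x higher on Bluesky")
```

On these numbers, Bluesky's research-sharing rate comes out roughly 40x higher than Twitter's, which is what makes the 0.25% figure look "massively high".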

Rejections for lack of clarity likely occur more often, but I can imagine that one can write too clearly. In the sense that the theoretical argument may read as too simple in the eyes of some, or that one is too transparent about the study's limitations (some kind of Streisand effect)
The DFG allows the use of AI in peer review.

If we now also get the AI to do the entire research, we will have completely freed humans from the burden of the research process, and they can concentrate fully on important things like travel expense reimbursement claims.
Artificial intelligence in peer review
www.dfg.de

Sorry, should have read "article", of course, not journal

For a journal of mine, I realized that the official citation on the publisher's webpage conflates the year of publication in an issue with the year of the online-first publication. I submitted a ticket two months ago asking them to correct it; nothing has happened yet. Not so much added value here.
🧵 PhD position (75%) in Political Behavior / Political Communication / CSS
📍 LMU Munich | ⏳ 3 years | 🗓 start March–May 2026

We’re hiring for DemocraGPT, a @bidt.bsky.social-funded project developing an AI-based training program for difficult conversations in times of growing polarization

Reposted by Ingo Rohlfing

A request for #rstats help.

Motivated by a real-world problem I'm facing, I wrote a package designed to help new users wean themselves off using rm(list=ls()), and nudge them in the direction of better practice.

I would sincerely appreciate feedback before I send it to CRAN
Some thoughts on checking the R session – Notes from a data witch
More precisely, some thoughts on an R package I might send to CRAN, and I’d appreciate comments and criticism
blog.djnavarro.net

Mark transmission and conserved quantities were devised with an eye on physics / the natural sciences, so I generally find it hard to transfer them to the social sciences.
For the mechanistic theories that are more popular in qualitative political science, I find it hard to see the criterion for causal inference

Reposted by Ingo Rohlfing

Here's a question for the causal inference people: What is your take on causality without counterfactuals and/or causal inference without counterfactuals?
a cartoon of a man holding a box that says ' w ' on it
Alt: The Farnsworth Parabox
media.tenor.com

I like color highlighting in equations, either of the brackets or of individual elements of an equation. It requires much more typesetting work, in particular if one wants to do it in a colorblind-friendly manner, but it is probably worth it.
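A minimal LaTeX sketch of what this can look like, assuming the xcolor package; the hex values are illustrative colorblind-friendly choices (from the Okabe-Ito palette), not prescribed by the post:

```latex
% Highlighting individual elements of an equation with xcolor.
\documentclass{article}
\usepackage{xcolor}
\definecolor{oiblue}{HTML}{0072B2}   % Okabe-Ito blue
\definecolor{oiorange}{HTML}{E69F00} % Okabe-Ito orange
\begin{document}
\[
  y_i = \textcolor{oiblue}{\beta_1} x_i + \textcolor{oiorange}{\varepsilon_i}
\]
\end{document}
```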

The day after tomorrow, followed by 20+ predictions about whatever for 2026
The sales pitch for this AI Scientist "Kosmos", as presented in this podcast, just seems like a big HARKing exercise. Yes, just look at the data long enough and you'll find "something".

Does anybody have experience with AI scientist models? Recommendations?
pca.st/episode/58aa...
Where Is All the A.I.-Driven Scientific Progress?
pca.st

What's the occasion this time?

It is very short and not research, so one can hardly fool anyone with it, but I agree that misrepresenting it should be avoided. Maybe the publisher messed it up; things can go wrong at many stages of the process.

I see, I believed it was only about fake citations. This is then indeed much harder.

It seems necessary now to validate citations. I assume there already exists a tool for this or, if not, that one will be developed soon. One would not even need an "AI-powered" tool for this, just a lookup against verified publication records, like OpenAlex maybe?
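A minimal sketch of the idea (not an existing tool): splitting a reference list into verified and unverified entries by looking each DOI up in a store of verified publication records. In practice the lookup could query a service such as OpenAlex; here a local dict stands in for it, and all names and data are hypothetical.

```python
# Illustrative sketch: validate citations against verified records.
def validate_citations(citations, verified_records):
    """Split citations into verified and unverified by DOI lookup.

    citations: list of dicts with at least a 'doi' key.
    verified_records: dict mapping DOI -> record metadata (stand-in
    for a verified publication database such as OpenAlex).
    """
    verified, unverified = [], []
    for cite in citations:
        doi = cite.get("doi", "").lower().strip()
        if doi and doi in verified_records:
            verified.append(cite)
        else:
            unverified.append(cite)
    return verified, unverified


# Hypothetical example data: one real record, one fabricated citation.
records = {"10.1000/real.2020": {"title": "A Real Article"}}
cites = [{"doi": "10.1000/real.2020"}, {"doi": "10.1000/fake.9999"}]
ok, suspect = validate_citations(cites, records)
print(len(ok), len(suspect))  # 1 verified, 1 unverified
```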

Reposted by Ingo Rohlfing

“I think there’s an argument to be made that much meta-scientific work is a kind of mirror image of the empirical work it critiques”
statmodeling.stat.columbia.edu/2025/12/21/i...
“I think there’s an argument to be made that much meta-scientific work is a kind of mirror image of the empirical work it critiques” | Statistical Modeling, Causal Inference, and Social Science
statmodeling.stat.columbia.edu
When viewing the fake article in Google Scholar on my university network, there is a link to access the article via my university's library. That link sends me to a library page that makes the fake article appear real... It turns out the library page is generated programmatically from the info on Google Scholar 🤦

Perhaps Google Scholar is irrelevant enough to Google to be worth shutting down.

Of course, the overall problem is that citations and citation indices are taken to be important in the article and by many researchers. If we didn't care about citations, it would not matter that GS is screwing up its metrics. 4/

Some researchers may have noticed their inflated citation counts and decided not to correct them. If so, this would be shortsighted, because a cursory look at the cited publications most likely shows that something is off, devaluing this person's entire citation record. 3/

With regard to Google Scholar, it reads like a "yes, but" argument. Google's careless assignment of publications to names is the root error. I guess Google could do much better, but, as the article states, Google does not care at all about improving GS. 2/

Google Scholar’s citation errors skew h-index leaderboards
www.timeshighereducation.com/news/google-...
"Thousands of mistakenly awarded citations left uncorrected highlight the perils of leaving profile curation to academics, say critics"
I don't quite agree with the focus taken in this article 1/
The Max Planck Society has begun an exploratory round table for open science. We are drafting some recommendations to leadership. Still a long way to go! But here are my notes on the most recent draft, just so you all know how I am trying to steer things.