Joe Bak-Coleman
@jbakcoleman.bsky.social
Research Scientist at the University of Washington based in Brooklyn. Also: SFI External Applied Fellow, Harvard BKC affiliate. Collective Behavior, Statistics, etc..
Reposted by Joe Bak-Coleman
carlbergstrom.com
Fustiliarian Friday: Trump/Kennedy/GOP use the shutdown as an excuse to destroy the CDC’s elite field epidemiology team and one of the most important epidemiology journals in the world.

Gift link.
Trump Administration Lays Off Dozens of C.D.C. Officials
www.nytimes.com
jbakcoleman.bsky.social
Science Taylorism strikes again.
jbakcoleman.bsky.social
Yeah, especially given Figure 6, where uptake is almost certainly driven by “oh shit I need to buy a paper”
jbakcoleman.bsky.social
I certainly believe folks could rip out more papers, but to the extent that science (and any collective profession) thrives on diversity of experience and perspective, it’s hard to see how driving us toward the most likely output translates to scientific productivity broadly.
jbakcoleman.bsky.social
Yah, every inch of the paper is a causal pitfall. Especially when you look at their model run on unmatched data. Wild.
jbakcoleman.bsky.social
If it were helping with translation or whatnot, we wouldn’t expect this, because the language of the first draft is pinned down by the original text before assistance.

The quality indicator is equally explained by this, just a little less intuitively.
jbakcoleman.bsky.social
In Figure 6, the effect sizes dwindle as you move away from the top single-digit percentile of the most ai-heavy users. As we know some papers are *entirely* ai slop, these stricter criteria can be viewed as containing a higher abundance of slop.

In other words, these gains are happening for ai slop.
jbakcoleman.bsky.social
The gutting of CDC talent happening right now is just fucking scary.
jbakcoleman.bsky.social
I just got hit with a reviewer request for ai slop from a nature journal (and one of the exclusive ones..) so I’d wager a lot of this is just slop proliferating.
jbakcoleman.bsky.social
Bespoke: paying a paper mill to write your ChatGPT paper
jbakcoleman.bsky.social
Quality here is measured by impact factor, which…

My guess is the signal being picked up on is ai slop proliferation in the shoddy self-citing corner of scam journals.
jbakcoleman.bsky.social
Broke: ai slop infiltrating journals and citing itself
Woke: lots of high impact papers
jbakcoleman.bsky.social
I love that the author didn’t even consider paper mills.
jbakcoleman.bsky.social
If you have theory, there’s a lot less poking in the dark than it seems. You can intuit the functional forms and the expected distributions from first principles. A bit of algebra and statistical theory goes a long way towards a better model.
jbakcoleman.bsky.social
Something we don’t discuss enough is the trade-off between getting it wrong because the model/theory is shit and getting it wrong because of wishful modification to the model or data. The modal errors I see in papers these days are the former.
jbakcoleman.bsky.social
You go back and revise, and the model fits like a glove to the bulk properties of the data across groups. Your decision isn’t conditioned on the difference in means, so you’re not coaxing your inferences; you’re working to avoid drawing the wrong inference from bad model fit.
jbakcoleman.bsky.social
As a simple example, say you’re interested in means and originally assume normal distributions for both groups. Your simulations come back fine, but then you compare data simulated from the posterior against the observed data and notice the observed data is extremely skewed by comparison. You adjust your model to allow for skew.
jbakcoleman.bsky.social
There are workflows and inferential approaches that let you walk and chew gum here. The insight is that you ground model development in theory and simulations first, then features of the data distinct from the inferences you want to draw.

betanalpha.github.io/assets/case_...
Towards A Principled Bayesian Workflow
betanalpha.github.io
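The check described in the thread above (simulate data from the fitted model, compare a feature like skewness against the observed data, revise the model on misfit) can be sketched roughly as follows. This is a minimal illustration, not code from the linked case study: it uses a plug-in maximum-likelihood fit in place of a full posterior to stay short, and the data here is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic "observed" data: actually skewed (lognormal), though we
# will initially model it as normal.
observed = rng.lognormal(mean=0.0, sigma=0.7, size=500)

# Fit the (mis-specified) normal model; a plug-in MLE stands in for
# a full posterior to keep the sketch short.
mu_hat, sd_hat = observed.mean(), observed.std(ddof=1)

# Simulate replicated datasets from the fitted model and compute a
# test statistic (sample skewness) for each replicate.
n_reps = 1000
rep_skews = np.array([
    stats.skew(rng.normal(mu_hat, sd_hat, size=observed.size))
    for _ in range(n_reps)
])
obs_skew = stats.skew(observed)

# Predictive-check p-value: fraction of replicates at least as skewed
# as the data. A value near 0 or 1 signals model misfit.
p_val = (rep_skews >= obs_skew).mean()
print(f"observed skew = {obs_skew:.2f}, check p-value = {p_val:.3f}")
```

A tiny p-value here flags that the normal model cannot reproduce the skew in the data, which is the cue to move to a skew-allowing likelihood before trusting inferences about the means.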
jbakcoleman.bsky.social
kevinzollman.com
It's not that I think RFK is going to do good science on this topic. I think it's important to criticize it for the right reasons.

I think perpetuating the children's version of the scientific method leads to scientific skepticism.
jbakcoleman.bsky.social
There’s a risk of the admin adopting an epistemic filtering within science that prioritizes some modes of inference over others, and we can play into it or not. If all we can use is preregistered RCTs that test nulls (as suggested by the OP), we’re cooked on climate and vaccines.
jbakcoleman.bsky.social
Tu quoque aside, it’s worth considering that maybe other areas of science function differently than you’re accustomed to, and that they’re as threatened by internal statements about how science should be done as by the admin; my points above are in response to that.
jbakcoleman.bsky.social
Yah they’re being disingenuous but their arguments that science needs to fit an absurd mold to be valid will survive a Daubert challenge unless we get better with precision than we have been.
jbakcoleman.bsky.social
I guess all I can say is that they cited this kind of idea heavily in their gold standard science policy as pretext to exert state control… it’s their stated justification and the one they will cite in courts. Either we can clarify in hopes the courts hold or let that fly as admissible evidence.
jbakcoleman.bsky.social
Perhaps Kevin said it simplest.

bsky.app/profile/kevi...