Joe Bak-Coleman
@jbakcoleman.bsky.social
9.3K followers 1.8K following 3.5K posts
Research Scientist at the University of Washington, based in Brooklyn. Also: SFI External Applied Fellow, Harvard BKC affiliate. Collective Behavior, Statistics, etc.
jbakcoleman.bsky.social
The gutting of CDC talent happening right now is just fucking scary.
jbakcoleman.bsky.social
I just got hit with a reviewer request for ai slop from a Nature journal (and one of the exclusive ones…), so I’d wager a lot of this is just slop proliferating.
jbakcoleman.bsky.social
Bespoke: paying a paper mill to write your ChatGPT paper
jbakcoleman.bsky.social
Quality here is measured by impact factor, which…

My guess is the signal being picked up on is ai slop proliferation in the shoddy self-citing corner of scam journals.
jbakcoleman.bsky.social
Broke: ai slop infiltrating journals and citing itself
Woke: lots of high impact papers
jbakcoleman.bsky.social
I love that the author didn’t even consider paper mills.
jbakcoleman.bsky.social
If you have theory there’s a lot less poking in the dark than it seems. You can intuit the functional forms from first principles and the distributions to be expected. A bit of algebra and statistical theory goes a long way towards a better model.
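(A minimal sketch of that idea, using a toy example of my own rather than anything from the thread: if theory says a quantity grows exponentially and you observe counts, both the functional form, a log-linear mean, and the distribution, Poisson, follow from first principles before you ever fit anything.)

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Toy setup (assumed for illustration): theory says a population grows
# exponentially, N(t) = N0 * exp(r * t), and we observe counts of individuals.
# First principles then hand us both the functional form (a log-linear mean)
# and the distribution (Poisson counts), so neither is guessed from the data.
t = np.arange(20)
true_N0, true_r = 5.0, 0.15
counts = rng.poisson(true_N0 * np.exp(true_r * t))

def neg_log_lik(params):
    log_N0, r = params
    mu = np.exp(log_N0 + r * t)  # theory-implied mean
    return -stats.poisson.logpmf(counts, mu).sum()

fit = optimize.minimize(neg_log_lik, x0=[np.log(counts[0] + 1.0), 0.0])
print(f"estimated growth rate r = {fit.x[1]:.3f} (simulated truth: {true_r})")
```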
jbakcoleman.bsky.social
Something we don’t discuss enough is the trade-off between getting it wrong because the model/theory is shit and getting it wrong because of wishful modification to the model or data. The modal errors I see in papers these days are the former.
jbakcoleman.bsky.social
You go back and revise, and the model fits like a glove to the bulk properties of the data across groups. Your decision isn’t conditioned on the difference in means, so you’re not coaxing your inferences; you’re working to avoid drawing the wrong inference from a bad model fit.
jbakcoleman.bsky.social
As a simple example, say you’re interested in means and originally assume normal distributions for both groups. Your simulations come back fine, but then you look at data simulated from the posterior and notice the observed data is extremely skewed by comparison. You adjust your model to allow for skew.
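A rough sketch of that kind of check, with made-up data and a plug-in fit standing in for a real posterior (a full workflow would simulate replicated datasets from posterior draws, as in the Betancourt case study linked below):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up observed data for one group; the skew is baked in, but pretend we
# don't know that yet.
y_obs = rng.gamma(shape=8.0, scale=1.0, size=200)

# Working model: normal. A plug-in fit stands in for a posterior here; a real
# workflow would simulate replicated datasets from posterior draws instead.
mu_hat, sigma_hat = y_obs.mean(), y_obs.std(ddof=1)

# Check a feature of the data (skewness) that is distinct from the estimand
# (the mean) against what the fitted model can reproduce.
obs_skew = stats.skew(y_obs)
rep_skew = np.array([stats.skew(rng.normal(mu_hat, sigma_hat, y_obs.size))
                     for _ in range(1000)])
print("observed skew:", round(obs_skew, 2),
      "| replicated skew 95% interval:", np.percentile(rep_skew, [2.5, 97.5]).round(2))

# If the observed skew falls outside what the normal model generates, revise the
# likelihood to allow skew (e.g. a skew-normal) and run the same check again.
a, loc, scale = stats.skewnorm.fit(y_obs)
rep_skew_rev = np.array([stats.skew(stats.skewnorm.rvs(a, loc, scale,
                                                       size=y_obs.size,
                                                       random_state=rng))
                         for _ in range(1000)])
print("revised-model replicated skew 95% interval:",
      np.percentile(rep_skew_rev, [2.5, 97.5]).round(2))
```

Either way, the thing driving the revision is a mismatch in shape, not the group difference you actually care about.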
jbakcoleman.bsky.social
There are workflows and inferential approaches that let you walk and chew gum here. The insight is that you ground model development in theory and simulations first, then in features of the data distinct from the inferences you want to draw.

betanalpha.github.io/assets/case_...
Towards A Principled Bayesian Workflow
betanalpha.github.io
jbakcoleman.bsky.social
kevinzollman.com
It's not that I think RFK is going to do good science on this topic. I think it's important to criticize it for the right reasons.

I think perpetuating the children's version of the scientific method leads to scientific skepticism.
jbakcoleman.bsky.social
There’s a risk of epistemic filtering within science being adopted by the admin that prioritizes some modes of inference over others, and we can play into it or not. If all we can do is use preregistered RCTs that test nulls (as suggested by the OP), we’re cooked on climate and vaccines.
jbakcoleman.bsky.social
Tu quoque aside, it’s worth considering that maybe other areas of science function differently than you’re accustomed to, and that they’re as threatened by internal statements about how science should be done as by the admin; my points above are in response to that.
jbakcoleman.bsky.social
Yeah, they’re being disingenuous, but their argument that science needs to fit an absurd mold to be valid will survive a Daubert challenge unless we get more precise than we have been.
jbakcoleman.bsky.social
I guess all I can say is that they cited this kind of idea heavily in their gold standard science policy as pretext to exert state control… it’s their stated justification and the one they will cite in court. We can either clarify in hopes the courts hold, or let that fly as admissible evidence.
jbakcoleman.bsky.social
Perhaps Kevin said it simplest.

bsky.app/profile/kevi...
kevinzollman.com
It's not that I think RFK is going to do good science on this topic. I think it's important to criticize it for the right reasons.

I think perpetuating the children's version of the scientific method leads to scientific skepticism.
jbakcoleman.bsky.social
"AI can design life" is quite the fact-check fail.

www.nytimes.com/2025/10/10/o...
jbakcoleman.bsky.social
We really need disciplinary sleepaway camps in grad school. Go try and figure out why herpes is killing elephants: hit one with a carfentanyl dart, take a sample, wake them up with naloxone, and run.

Then feel free to go off about sample size and statistical power.
Reposted by Joe Bak-Coleman
kevinzollman.com
A recurring frustration for philosophers of science: Many scientists know how to do science like people know how to ride a bike. When they reflect on the practice of science, they repeat platitudes about how science works. Those platitudes are often wrong, sometimes even about their own field.
danhicks.bsky.social
*sighs in philosopher of science*

Looking for confirmatory evidence is an entirely normal part of science. The primary problem here is the eugenics and the fascism, not the lies to children about "the scientific method."
One Bluesky account is quoting another. Inner post has a video of RFK Jr., some person I don't recognize (Tylenol and autism guy, maybe?), Marco Rubio (I think), and Trump. Post text: "RFK Jr on Tylenol and autism: 'It is not proof. We're doing the studies to make the proof." 

Outer post text: "We're doing studies to prove it (* not how studies work)"
jbakcoleman.bsky.social
I think it’s as important in this moment as it’s ever been to be extremely precise in our critique. Perhaps the OP really meant he didn’t think the science was good and said it in a strange way, but waving the bullshit away as p-hacking rather than saying why it’s bs is rampant.

bsky.app/profile/bria...
briannosek.bsky.social
RFK Jr is the OG p-hacker.
atrupar.com
RFK Jr on Tylenol and autism: "It is not proof. We're doing the studies to make the proof."
jbakcoleman.bsky.social
Oh absolutely. And I think there’s an open question about whether some domains of science are more incentive-resistant because the signal and friction are higher. Certainly seems to be the case!