Robin Blythe
@rbly.bsky.social
330 followers · 490 following · 710 posts
Assistant professor at Duke-NUS medical school. Mostly interested in health economics, biostats and clinical informatics.
rbly.bsky.social
I'll set up a sharable link after journal formatting!
rbly.bsky.social
New paper! I'll go into a bit more depth about it when I'm back from a work trip, but this was a fun study to conduct using #rstats and the peer review process was really excellent. #statsky
rbly.bsky.social
I've heard that there's sort of a "two schools" schism emerging in NLP, seemingly worse than what's been happening in the stats discourse. Guaranteed there are plenty of researchers running sensitive info through OpenAI servers
rbly.bsky.social
Dealing with terminology creep in healthcare is infuriating. Docs throw around "AI" when they usually just mean a random forest, and they can't differentiate it from an LLM or even a logistic regression.
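To make that concrete, here's a minimal #rstats sketch (my own toy example on a built-in dataset, nothing clinical) of two of the things that get lumped together as "AI". Assumes you have the {ranger} package installed; any random forest implementation would do.

```r
# Toy sketch on mtcars: two very different methods that both get called "AI"
cars <- mtcars
cars$am <- factor(cars$am, labels = c("auto", "manual"))

# "AI" claim 1: logistic regression, a classical statistical model
logit_fit <- glm(am ~ mpg + wt, data = cars, family = binomial)

# "AI" claim 2: random forest, an ensemble of decision trees
library(ranger)
rf_fit <- ranger(am ~ mpg + wt, data = cars)

# Both produce predictions, but the mechanics, assumptions, and
# interpretability differ completely, and neither resembles an LLM.
head(predict(logit_fit, type = "response"), 3)
head(predict(rf_fit, data = cars)$predictions, 3)
```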
rbly.bsky.social
The sad thing for me is the total dismissal of the foundational theory required to understand what is appropriate, why it works, and the subtle differences in methods and applications that lead to huge differences in meaning. Coding is just the pen; without theory there is no meaning.
rbly.bsky.social
There's a good interview with Romney out there where he says people like Ron Johnson are lovably credulous about everything but he has no respect for liars like JD Vance
rbly.bsky.social
I'm sure there are a lot of pins like the National Lard that we are all too mature to make.
Reposted by Robin Blythe
mattansb.msbstats.info
There was an excellent tweet years ago that I can't find, someone announced their departure:

"I'm leaving academia to do the things I love most: research and teaching"
rbly.bsky.social
Lol there was absolutely a prodigious quantity of tomato juice being imbibed
rbly.bsky.social
Now that you mention it, I assumed those were Americans 🤔
rbly.bsky.social
You may be right, but the higher-ups in my dept still spend a good 50% of their time (at least) on the management part. Better than 100%, I guess!
rbly.bsky.social
Reading this makes me think I'm maybe just lucky - I have enough free (work) time to pursue research I consider meaningful, and my funded research supports govt policymaking. What Mike describes is the black hole of grants and management I'm trying to avoid long term. Not sure how yet, though.
rbly.bsky.social
I am transiting SG-NY via Frankfurt. All the older Germans on my flight are dressed as though they're prepared to scale Kilimanjaro at a moment's notice. There were audible complaints when the only remaining breakfast was spicy seafood noodles.
rbly.bsky.social
You can understand plenty about the mechanisms of the citric acid cycle. Still got to memorize a huge amount of shit though lol
rbly.bsky.social
Greatest con the fossil fuel companies ever pulled was convincing everyone that emissions are just down to us.
rbly.bsky.social
Agreed. Does my head in that people think AI is going to change all that somehow, completely ignoring everything we've learned in the last decade(s)
rbly.bsky.social
Lmao this would have been priceless to see
rbly.bsky.social
Yeah, the main issue seems to be that everyone can get their hands on some data and publish a model. I mean, that was part of what I did for my PhD. I'm not entirely sure the problem is just validation, though. I tend to follow van Calster et al's thinking on this: doi.org/10.1186/s129...
There is no such thing as a validated prediction model - BMC Medicine
Background: Clinical prediction models should be validated before implementation in clinical practice. But is favorable performance at internal validation or one external validation sufficient to claim that a prediction model works well in the intended clinical context?

Main body: We argue to the contrary because (1) patient populations vary, (2) measurement procedures vary, and (3) populations and measurements change over time. Hence, we have to expect heterogeneity in model performance between locations and settings, and across time. It follows that prediction models are never truly validated. This does not imply that validation is not important. Rather, the current focus on developing new models should shift to a focus on more extensive, well-conducted, and well-reported validation studies of promising models.

Conclusion: Principled validation strategies are needed to understand and quantify heterogeneity, monitor performance over time, and update prediction models when appropriate. Such strategies will help to ensure that prediction models stay up-to-date and safe to support clinical decision-making.
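The point is easy to show in a few lines of #rstats (a simulation I made up, not from the paper): even a correctly specified model stops looking "validated" the moment baseline risk shifts at a new site.

```r
# Calibration drift when the same model meets a new population
set.seed(1)
n <- 2000

# Development site
dev <- data.frame(x = rnorm(n))
dev$y <- rbinom(n, 1, plogis(-1 + dev$x))
fit <- glm(y ~ x, data = dev, family = binomial)

# "External" site: same predictor effect, higher baseline risk (intercept)
ext <- data.frame(x = rnorm(n))
ext$y <- rbinom(n, 1, plogis(-0.2 + ext$x))

mean(predict(fit, newdata = ext, type = "response"))  # predicted, roughly 0.30
mean(ext$y)                                           # observed, roughly 0.46
```

Discrimination can look perfectly fine here while calibration-in-the-large is badly off, which is exactly why one favorable external validation doesn't settle anything.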
rbly.bsky.social
You're correct. It's standard practice in economic and health economic research, but it involves lots of assumptions and isn't easy to explain to laypeople. Researchers will often elicit a 'willingness-to-pay' value for the utility gained.
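For anyone curious, here's the mechanics stripped down in #rstats (every number below is invented): the elicited willingness-to-pay threshold converts QALY gains into money via the net monetary benefit.

```r
# Net monetary benefit: NMB = WTP * QALYs - cost (all values invented)
wtp <- 50000  # assumed willingness-to-pay per QALY; varies by jurisdiction

strategies <- data.frame(
  name  = c("usual care", "new intervention"),
  cost  = c(12000, 18000),  # mean cost per patient
  qalys = c(6.10, 6.32)     # mean QALYs per patient
)

strategies$nmb <- wtp * strategies$qalys - strategies$cost
strategies[which.max(strategies$nmb), ]  # new intervention wins at this WTP
```

Drop the WTP to, say, 20000 and the answer flips back to usual care, which is where those hard-to-explain assumptions really bite.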
rbly.bsky.social
I've only rarely interacted with psych research, but I got the feeling from this paper by Gelman and Brown that there's a big issue of people forming preconceived notions and then doing silly analyses to support them

sites.stat.columbia.edu/gelman/resea...
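The mechanism is easy to demo in #rstats (my own toy simulation, not from the paper): run enough arbitrary subgroup tests on pure noise and "supporting evidence" turns up on its own.

```r
# 20 subgroup comparisons where the true effect is exactly zero
set.seed(42)
p_values <- replicate(20, {
  t.test(rnorm(50), rnorm(50))$p.value
})
sum(p_values < 0.05)  # typically about 1 in 20 comes out "significant"
```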
rbly.bsky.social
> LinkedIn
Doctor, I've found the problem.
rbly.bsky.social
With regard to diagnosis more generally, I think consensus is moving away from that kind of thinking - @vickersbiostats.bsky.social has written about this recently. Positive signs that things will improve eventually
rbly.bsky.social
I'm not even sure alert fatigue really captures the scope of the problem. It's not just alerts; it's the panoply of decision aids that we only ever seem to add to in clinical systems. Plus, I think there's a disconnect between clinicians and informaticians re: perceived clinical utility.
rbly.bsky.social
That uncomfortable feeling where you upload a Quarto tutorial with your published paper, only to realise that you left your stupid personal filepath in there when you rendered it ☠️ Absolutely gonna get roasted for this
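For the record, the fix I should have used all along (standard practice with the {here} package; the file name below is hypothetical, not from the actual tutorial): build paths from the project root so nothing personal ends up in the rendered document.

```r
# Inside the .qmd, resolve paths relative to the project root with {here}
library(here)

# hypothetical data file; replaces a hard-coded personal path like
# "C:/Users/robin/Documents/projects/paper/data/analysis_data.csv"
dat <- read.csv(here("data", "analysis_data.csv"))
```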