Jason Gantenberg
@jrgant.bsky.social
2.2K followers 990 following 1K posts
Research Scientist + Asst Prof (Practice) @ Brown SPH. Interested in epidemiology, stats, modeling, ID, complexity, causality. Know just enough to be dangerous. Current quest: never leave Emacs. (Personal account. Opinions not even my own.)
Pinned
jrgant.bsky.social
In general, there are two reasons you might find a particular claim or viewpoint to be absurd.

SCENARIO 1: The viewpoint is absurd.
SCENARIO 2: You don't know what you're talking about.

Most of us will be in Scenario 2 more often than we'd like to admit.
jrgant.bsky.social
Palmer is a libertarian, so when he says "liberalism" he's referring to the small-l variety. I agree wholeheartedly with this quote.
andycraig.bsky.social
Pleasure to intro this. @tomgpalmer.bsky.social: "We, not the nativists and neo-Confederates, are... for the turning points in history at which Americanism carried the day against the forces of unfreedom. Liberals own the flag. We own the Constitution. We own the greatest chapters of American life."
[Photo: at the podium, introducing a panel seated on the stage behind]
jrgant.bsky.social
I admit, I had to look it up
jrgant.bsky.social
That warm feeling when someone cites your paper for... *skims article*... the background claim for which you cited the original source.
jrgant.bsky.social
He will get much use out of both!
jrgant.bsky.social
We have established that were you to combine the worst comedic timing, instincts, skill, and craft, you would come up with Adam Sandler. If you add to that mixture the sun shining on a horse's ass twice a day, you'd get Will Ferrell.
jrgant.bsky.social
So I prodded the chat a little and gave it 5.90 = x + 5.11. It correctly reports 0.79.

However, I then mentioned that someone on Bluesky said it was probably using software versioning as a heuristic. It agreed and said, "you're right, I'm sorry... that's why the correct answer is... -0.21". :)
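For reference, the arithmetic in question is a single subtraction; here is a minimal Python sketch (Decimal is used only so the printout isn't cluttered by binary floating-point noise; it is not anything the chat itself ran):

```python
from decimal import Decimal

# Solve 5.90 = x + 5.11 for x, i.e. x = 5.90 - 5.11.
x = Decimal("5.90") - Decimal("5.11")
print(x)  # 0.79 -- the answer first reported, before the "correction" to -0.21
```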
jrgant.bsky.social
I know the anchoring stuff has come under suspicion, but I must sheepishly admit that my knee-jerk guffaw was "obviously, the answer is -0.021". So I conjecture that LLMs can make you momentarily dumb as bricks by beginning with bullshit framing. :) Alas, my human brain.
jrgant.bsky.social
Oh yeah. I have no illusions in that regard. I ask out of idle curiosity at what it'll come up with.
jrgant.bsky.social
Definitely, but they should probably use other language, like you said. I suspect casual users take the "thinking" part at face value. But maybe I'm just in a bad mood.
jrgant.bsky.social
So here's something interesting when you scroll past my half-assed attempts at getting Gemini to explain how it generates the thinking: it triples down on -0.21 being the right answer. It even says R and Python are wrong!

g.co/gemini/share...
Gemini - Solving a Simple Algebraic Equation
jrgant.bsky.social
Thank you
statsepi.bsky.social
Papers often conclude "more research is needed" without explanation. This is a missed opportunity. You are the expert. This is your time to shine. Explain what the remaining uncertainties are, and give justified recommendations on what the research needed to resolve them should look like.
jrgant.bsky.social
Yeah, agree. My suspicion is that the "thinking" is just some plausible-sounding LLM gobbledygook, but I could be wrong.