Harrison Pim
@harrisonpim.com
76 followers 400 following 38 posts
I'm working on search, machine learning, and knowledge graphs at climatepolicyradar.org | harrisonpim.com
harrisonpim.com
> Although the inference server itself can be claimed to be "deterministic", the story is different for an individual user. From the perspective of an individual user, the other concurrent users are not an "input" to the system but rather a nondeterministic property of the system.
Defeating Nondeterminism in LLM Inference
Reproducibility is a bedrock of scientific progress. However, it’s remarkably difficult to get reproducible results out of large language models. For example, you might observe that asking ChatGPT the...
thinkingmachines.ai
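One concrete source of the batch-dependent nondeterminism the quote describes is that floating-point addition is not associative: the order in which a server reduces the same numbers can change the result, and that order can depend on how many other users' requests share the batch. A minimal sketch in Python (my own illustration, not code from the linked post):

```python
# Floating-point addition is not associative: regrouping the same
# values can produce a different sum. In an inference server, the
# reduction order can depend on batch size, i.e. on concurrent users.
vals = [1e16, 1.0, -1e16, 1.0]

left_to_right = sum(vals)       # ((1e16 + 1.0) - 1e16) + 1.0 -> 1.0
ascending = sum(sorted(vals))   # ((-1e16 + 1.0) + 1.0) + 1e16 -> 0.0

# Same inputs, different grouping, different answer.
assert left_to_right != ascending
```

The small additions are lost to rounding in one ordering but not the other, which is exactly why a numerically "deterministic" kernel can still look nondeterministic to a single user whose requests land in differently sized batches.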
Reposted by Harrison Pim
mattmuir.bsky.social
There are just under 12 hours left to vote in this year's Tiny Awards! Small, fun, beautiful and occasionally-pointless websites, and a pleasing rebuttal to anyone who thinks everything online is rubbish in 2025: tinyawards.net/vote/
Tiny Awards
This is the home of the Tiny Awards, which, since 2023, has celebrated the best of the small, poetic, creative, handmade web.
tinyawards.net
Reposted by Harrison Pim
gracekind.net
Grace @gracekind.net · Aug 19
What sort of black magic is this
harrisonpim.com
for some extra context, this excellent piece by @andymasley.bsky.social makes a very convincing argument that the environmental impact of LLMs is overstated, all while assuming that LLMs use 3 Wh per query (roughly 10x higher than the actual numbers!)
Using ChatGPT is not bad for the environment
And a plea to think seriously about climate change without getting distracted
andymasley.substack.com
harrisonpim.com
those numbers are roughly in line with sam altman's claim that the average chatgpt request uses 0.34 Wh.

all of these values are also much, much lower than those assumed in a lot of the discourse around LLMs!
The Gentle Singularity
We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be. Robots...
blog.samaltman.com
harrisonpim.com
I'll be talking about @climatepolicyradar.bsky.social's work on building knowledge graphs for policy research at @nestauk.bsky.social's Policy Live event on 11 September
Policy Live 2025 - Homepage
www.policylive.org
harrisonpim.com
This will be my first time at ACL, and I’m very excited about the opportunity to meet and catch up with other folks doing work at the frontiers of NLP research :) drop me a message if you’re around!
harrisonpim.com
I’ll be talking about the knowledge graph that we’re building at @climatepolicyradar.bsky.social - By weaving together the connections between policy documents from all over the world, we’re hoping to make them easier to explore, understand, and explain
harrisonpim.com
I’m on my way to Vienna for @aclmeeting.bsky.social, where I’ll be giving a keynote for the @climate-nlp.bsky.social workshop on how NLP is being used to address the climate crisis
harrisonpim.com
But in this, @aworkinglibrary.com gets straight at the bit which really does worry me.

"AI", not as a technology, but as an ideology, is being used as a prop in the revival of scientific racism, and old, eugenicist hierarchies of intelligence.

It's a very good essay
Toolmen
Even the best weapon is an unhappy tool.
aworkinglibrary.com
harrisonpim.com
So much of the recent AI doomerism (environmental, economic, existential, etc) has felt dumb to me... The criticisms seem reactionary, sensationalist, poorly researched, wilfully blind. Anyone with direct experience knows that the reality of the technology is much more mundane than the hype
Reposted by Harrison Pim
dcorney.com
Some thoughts on how "LLM grooming" could be used to fill information voids and thus further degrade information found on the web. Happy times!
dcorney.com/thoughts/202...
What happens when "LLM grooming" fills an "information void"?
dcorney.com
Reposted by Harrison Pim
minimaxir.bsky.social
Sam Altman just gave ChatGPT's energy cost per query: 0.34 watt-hours. It's the first number we've had for recent LLM power usage, and it's obviously much lower than the 3 watt-hours still cited by detractors, but there are a lot of asterisks in how that number might be calculated.
As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.)
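The appliance comparisons in the quote check out with back-of-the-envelope arithmetic. A quick sanity check (the appliance wattages are my assumptions, not figures from the post):

```python
# Sanity-checking the 0.34 Wh/query figure against the comparisons
# in the quote. Assumed appliance powers (not from the post):
# ~1 kW electric oven element, ~10 W high-efficiency LED bulb.
query_wh = 0.34
query_joules = query_wh * 3600            # 1 Wh = 3600 J -> 1224 J

oven_watts = 1000
bulb_watts = 10

oven_seconds = query_joules / oven_watts  # ~1.2 s: "a little over one second"
bulb_seconds = query_joules / bulb_watts  # ~122 s: "a couple of minutes"
```

Both come out in line with the claims, which is also why the 3 Wh figure still circulating is roughly an order of magnitude off this estimate.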
Reposted by Harrison Pim
verysane.ai
The most well-known statistic about AI water use is a lie. This makes it frustrating to talk about AI and the environment, and this is a long deep dive on that specific point.

www.verysane.ai/p/the-bigges...
The Biggest Statistic About AI Water Use Is A Lie
How did it become the main story?
www.verysane.ai
harrisonpim.com
the tone's provocative, but there are loads of sensible, software-development-centric takes about LLM hype vs reality in here
harrisonpim.com
I’ve had such a brilliant weekend at @pydatalondon.bsky.social. So many great presentations, so many great chats with pals old and new. Really feeling the value of the data science community which has grown and cohered here over the last decade. Massive, massive thanks to the organisers 💖
harrisonpim.com
oh my GOD i love pydata events so much. so happy to be back in these spaces with these people
Reposted by Harrison Pim
fullfact.org
Today we launch our report on the rising threat of misinformation in the UK—featuring expert essays and a new rating system assessing policy, platforms, and progress in tackling false information online.

Read it here: buff.ly/Uz6pzTJ
Access to accurate information is not a luxury, it is the foundation of our democracy. We cannot let large online platforms which wield so much influence over our daily lives walk away from commitments to make our online world a safer place. 

Government and regulators must hold them to account, to the full extent of the law. This is no time for half measures.
harrisonpim.com
IMO the more interesting question is whether this sort of tarpit could be used to maliciously steer LLM training, beyond just adding noise. given a large/subtle enough maze, can a more pointed set of disinformation be baked into LLMs' circuits?
harrisonpim.com
not sure one can effectively fight slop by generating even-sloppier-slop