paulwalk
@paulwalk.mastodon.social.ap.brid.gy
2 followers 1 following 140 posts
Technical consultant and developer, working with Web technologies to support open access to research. Lives in Frome, UK. #openaccess #opendata #scholcomm […] [bridged from https://mastodon.social/@paulwalk on the fediverse by https://fed.brid.gy/ ]
paulwalk.mastodon.social.ap.brid.gy
It is, apparently, not possible to get to Birmingham today by rail. The train I just booked to replace the cancelled train I was previously booked on has been cancelled.

UK rail is a joke :-(
paulwalk.mastodon.social.ap.brid.gy
@hochstenbach

Your "A 3-pronged approach for simple bots" GitHub issue is proving to be very useful - thanks!

It's especially helping me to sort through the pile of "mitigations" where actually some of these things are "tests" and others are "penalties" or "counter-measures".

I'll respond on […]
Original post on mastodon.social
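For illustration, a minimal Go sketch of the distinction drawn above, treating a "test" as something that decides whether a client looks like a bot, a "penalty" as something that raises the cost of a request, and a "counter-measure" as something that blocks or diverts the client outright. The type names and example entries are assumptions made for the sketch, not anything taken from the referenced GitHub issue.

```go
// Hypothetical taxonomy for sorting a pile of bot "mitigations" into
// tests, penalties, and counter-measures. Names are illustrative only.
package main

import "fmt"

type MitigationKind string

const (
	Test           MitigationKind = "test"            // decides whether a client looks like a bot
	Penalty        MitigationKind = "penalty"         // raises the cost of a request
	CounterMeasure MitigationKind = "counter-measure" // blocks or diverts the client
)

type Mitigation struct {
	Name string
	Kind MitigationKind
}

func main() {
	// Example entries are guesses, not items from the issue.
	pile := []Mitigation{
		{"User-Agent heuristics", Test},
		{"Rate limiting", Penalty},
		{"IP blocklist", CounterMeasure},
	}
	for _, m := range pile {
		fmt.Printf("%-22s %s\n", m.Name, m.Kind)
	}
}
```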
paulwalk.mastodon.social.ap.brid.gy
Governance of community-maintained codebases is tricky. This move from Ruby Central to essentially take control of the RubyGems codebase(s) is worrying.

This is the best summary of these developments that I have found.

"What Just Happened to RubyGems?" […]
Original post on mastodon.social
paulwalk.mastodon.social.ap.brid.gy
@neonbubble this one just made me laugh out loud. Shared it with herself and she laughed too 😁
paulwalk.mastodon.social.ap.brid.gy
“…we have a recipe for what’s effectively a homeopathic superstition spreading like wildfire through a community where everybody is getting convinced it’s making them healthier, smarter, faster, and more productive.”
https://www.baldurbjarnason.com/2025/followup-on-trusting-your-own-judgement/
Avoiding generative models is the rational and responsible thing to do – follow-up to “Trusting your own judgement on ‘AI...’”
I don’t recommend publishing your first draft of a long blog post. It’s not a question of typos or grammatical errors or the like. Those always slip through somehow and, for the most part, don’t impact the meaning or argument of the post. No, the problem is that, with even a day or two of distance, you tend to spot places where the argument can be simplified or strengthened, the bridges can be simultaneously strengthened and made less obvious, the order can be improved, and you spot which of your darlings can be killed without affecting the argument and which are essential. Usually, you make up for missing out on the insight of distance with the insight of others once you publish, which you then channel into the next blog post, which is how you develop the bad habit of publishing first drafts as blog posts, but in the instance of my last blog post, _Trusting your own judgement on ‘AI’ is a huge risk_, the sheer number of replies I got was too much for me to handle, so I had to opt out.

So, instead I let creative distance happen – a prerequisite to any attempt at self-editing – by working on other things and taking walks. During one of those walks yesterday, I realised it should be possible to condense the argument quite a bit for those who find 3600 words of exposition and references hard to parse. It comes down to four interlocking issues:

1. _It’s next to impossible for individuals to assess the benefit or harm of chatbots and agents through self-experimentation._ These tools trigger a number of biases and effects that cloud our judgement. Generative models also have a volatility of results and uneven distribution of harms, similar to pharmaceuticals, that means it’s impossible to discover for yourself what their societal or even organisational impact will be.
2. _Tech, software, and productivity research is extremely poor and is mostly just marketing_ – often replicating the tactics of the homeopathy and naturopathy industries. Most people in tech do not have the training to assess the rigour or validity of studies in their own field. (You may disagree with this, but you’d be wrong.)
3. _The sheer magnitude of the “AI” Bubble and the near totality of the institutional buy-in – universities, governments, institutions – means that **everybody is biased**._ Even if you aren’t biased yourself, your manager, organisation, or funding will be. Even those who try to be impartial are locked in bubble-inflating institutions and will feel the need to protect their careers, even if it’s only unconsciously. More importantly, there is no way for the rest of us to know the extent of the effect the bubble has on the results of each individual study or paper, so we have to assume it affects all of them. Even the ones made by our friends. _Friends can be biased too._ The bubble also means that the executive and management class can’t be trusted on anything. Judging from prior bubbles in both tech and finance, the honest ones who understand what’s happening are almost certainly already out.
4. _When we only half-understand something, we close the loop from observation to belief by relying on the judgement of our peers and authority figures, **but these groups in tech are currently almost certain to be wrong or substantially biased about generative models.**_ This is a technology that’s practically tailor-made to be only half-understood by tech at large. They grasp the basics, maybe some of the details, but not fully. The “halfness” of their understanding leaves cognitive space that lets that poorly founded belief adapt to whatever other beliefs the person may have and whatever context they’re in without conflict.

Combine these four issues and we have a recipe for what’s effectively a homeopathic superstition spreading like wildfire through a community where everybody is getting convinced it’s making them healthier, smarter, faster, and more productive. This would be bad under any circumstance, but the harms from generative models to education, healthcare, various social services, creative industries, and even tech (wiping out entry-level programming positions means no senior programmers in the future, for instance) are shaping up to be massive, the costs to run these specific kinds of models remain much higher than the revenue, and the infrastructure needed to build it is crowding out attempts at an energy transition in countries like Ireland and Iceland.

If there ever was a technology where the rational and responsible act was to hold off and wait until the bubble pops, “AI” is it.
paulwalk.mastodon.social.ap.brid.gy
Microsoft CEO makes claim about economic growth based on no evidence, in much the same way that his stupid “AI” product does… Inevitably, the U.K. government enthusiastically grabs this “opportunity” 🙁
https://www.bbc.com/news/articles/c7016ljre03o
AI could boost UK economy by 10% in 5 years, says Microsoft boss
Microsoft boss Satya Nadella said it was the biggest investment the firm had made outside of the US.
Reposted by paulwalk
paulwalk.mastodon.social.ap.brid.gy
Taken with a smart phone camera - they really are impressive cameras these days! (No post processing other than cropping)
paulwalk.mastodon.social.ap.brid.gy
I haven't seen "guru meditation" in an error message since the days of the Amiga computer!

This is from a website in Nepal - seems unlikely they are running it on an Amiga 500 :-)
paulwalk.mastodon.social.ap.brid.gy
Register for Samvera Connect 2025, October 20–23 at El Colegio de México in Mexico City. We need to get an estimate of numbers *this week*, so if you are planning to come but are unable to register immediately, please let me know *as soon as possible* about your intention to register […]
Original post on mastodon.social
paulwalk.mastodon.social.ap.brid.gy
@mike @carusb @Waxingtonknee +1 for buying beans and a grinder. Beans are easily available in any supermarket. Makes a difference and only takes 30 secs to grind a week’s worth.
Reposted by paulwalk
timwardcam.c.im.ap.brid.gy
@paulwalk Does anyone have any experience of doing code reviews on AI generated code? Is that quicker or slower than doing code reviews of human written code? Are there issues with getting the author to explain bits of it where necessary?
paulwalk.mastodon.social.ap.brid.gy
@TimWardCam these are very good questions, to which I do not have answers! I’ll boost your question - perhaps others can supply links or evidence
paulwalk.mastodon.social.ap.brid.gy
“You could not tell looking at these charts when AI-assisted coding became widely adopted. The core premise is flawed. Nobody is shipping more than before.

The impact on human lives is incredible. People are being fired because they’re not adopting these tools fast enough […]

This whole thing […]
Original post on mastodon.social
paulwalk.mastodon.social.ap.brid.gy
“Momentum”?! Do you mean Trump’s bullshit claims and Putin’s scorn?
paulwalk.mastodon.social.ap.brid.gy
@mike yes - it just feels like a safe, secure environment where once you have something nailed down it's going to stay put for a long time.
paulwalk.mastodon.social.ap.brid.gy
I've now completed some small projects in Rust - enough to make a reasonable evaluation in comparison to Golang.

I think I've decided that Golang's boring predictability trumps Rust's cool cleverness. After 3 months, and completing a few small Rust projects, I still can't easily read and […]
Original post on mastodon.social
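For a sense of the "boring predictability" being preferred here, a small illustrative Go sketch with explicit error handling and a plain loop; the function, file name, and task are assumptions for the example, not anything described in the post itself.

```go
// Minimal sketch of Go's "boring" style: explicit errors, plain loops,
// no hidden control flow or clever abstractions to decode.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// countNonEmptyLines reads a file and counts lines that are not blank.
// Every failure path is visible at the call site.
func countNonEmptyLines(path string) (int, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, fmt.Errorf("open %s: %w", path, err)
	}
	defer f.Close()

	count := 0
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if strings.TrimSpace(scanner.Text()) != "" {
			count++
		}
	}
	if err := scanner.Err(); err != nil {
		return 0, fmt.Errorf("read %s: %w", path, err)
	}
	return count, nil
}

func main() {
	n, err := countNonEmptyLines("notes.txt") // hypothetical input file
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("non-empty lines:", n)
}
```

Whether this reads as reassuring or tedious is, of course, exactly the trade-off the post is weighing.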