steve
@visitmy.website
1K followers 460 following 2.6K posts
workplace bez. doing product & strategy bits thru https://boringmagi.cc. more about me at https://visitmy.website/about
Pinned
visitmy.website
In my weeknotes yesterday, I thought about writing something on AI.

We’re in the boom of a hype cycle and there’s plenty of snake oil salesmen out there, which I want to distance myself from. So I’ve written up my positions on #GenerativeAI. boringmagi.cc/2024/12/08/o...
Our positions on generative AI
Given the hype around artificial intelligence, it feels worthwhile stating our positions on the technology. We’re open to working on and with it, but there’s a few ideas we’ll bring to the table.
boringmagi.cc
Reposted by steve
blangry.bsky.social
I'm obviously going to defend anthropology as a Useful Business Discipline. Learning how groups work together, how to get to the heart of something and think about patterns and trends feels like as much a part of the jigsaw of skills you need in an org as, apparently, theoretical physics(?).
visitmy.website
Morale collapse is the enemy of big organisations.

Tactical and strategic missteps can be analysed and accounted for, you can take new steps. But morale directly influences performance over time, and morale is degraded in unfit cultures.
visitmy.website
Mission-driven government is a myth. It’s a nice idea, but there is no framework, no cascading mandate, no way to actually weigh up the hard trade-offs, and cross-sectoral working is left aside while you spend inordinate amounts of time getting buy-in *within* an organisation.
visitmy.website
Big organisations are layered, messy things. Many people can try to do the right thing in one area but there is the chance that efforts will be scuppered by others elsewhere.
visitmy.website
Honestly though, much of the last few months has been me saying we don’t have enough people – especially not enough makers – to build things. Only for our team to be further squeezed by bad contracts, reduced headcount or glacial hiring.
Reposted by steve
justinhendrix.bsky.social
Also recommend this article:

bsky.app/profile/just...
justinhendrix.bsky.social
"OpenAI has signed about $1tn in deals this year for computing power to run its artificial intelligence models, commitments that dwarf its revenue and raise questions about how it can fund them."
OpenAI’s computing deals top $1tn
Partners including Nvidia, AMD and Oracle have signed up to Sam Altman’s huge bet on the future of artificial intelligence
www.ft.com
Reposted by steve
justinhendrix.bsky.social
Important article for anyone following AI and AI policy.

"In the process, [NVIDIA and OpenAI are] now seen as playing a key role in ratcheting up the risks of a possible AI bubble by inflating the market and binding the fates of numerous companies together. "
carlquintanilla.bsky.social
NVIDIA and OpenAI:

Concerns that their “increasingly complex and interconnected web of business transactions is artificially propping up the trillion-dollar AI boom.”

@bloomberg.com $NVDA 👀
www.bloomberg.com/news/feature...
visitmy.website
this one had equal parts jelly, custard and cream. and a surprising amount of booze. but fresh raspberries instead of tinned fruit!
visitmy.website
i am not 80 years old btw
visitmy.website
can i just shock you? the other day i enjoyed a trifle
visitmy.website
Haha, would love to try Parakeet v2 that way!
visitmy.website
It’s insurance without the underwriting.
visitmy.website
Transcription in Microsoft Teams is a placebo effect for institutional memory. You cannot use the transcripts to pull out actions or decisions because the software often mishears you, and sentences barely make any sense.
visitmy.website
Just realised that quote doesn’t capture what I wanted to highlight, which is exactly what you said. The better quote is this one! (They also talk about artifacts being transient, like ideas...)
visitmy.website
boringmagi.cc.web.brid.gy
Our positions on generative AI
Like many trends in technology before it, we’re keeping an eye on artificial intelligence (AI). AI is more of a concept, but generative AI as a general purpose technology has come to the fore due to recent developments in cloud-based computation and machine learning. Plus, technology is more widespread and available to more people, so more people are talking about generative AI – compared to something _even more_ ubiquitous like HTML. Given the hype, it feels worthwhile stating our positions on generative AI – or as we like to call it, ‘applied statistics’. We’re open to working on and with it, but there are a few ideas we’ll bring to the table.

## The positions

1. Utility trumps hyperbole
2. Augmented not artificial intelligence
3. Local and open first
4. There will be consequences
5. Outcomes over outputs

### Utility trumps hyperbole

The fundamental principle of Boring Magic’s work is that people want technologies to work. People prefer things to be functional first; the specific technologies only matter when they reduce or undermine the quality of the utility.

There are outsized, unfounded claims being made about the utility of AI. It is not ‘more profound than fire’. The macroeconomic implications of AI are often overstated, but it’ll still likely have an impact on productivity. We think it’s sensible to look at how generative AI can be useful or make things less tedious, so we’re exploring the possibilities: from making analysis more accessible through to automating repeatable tasks. We won’t sell you a bunch of hype, just deliver stuff that works.

### Augmented not artificial intelligence

Technologies have an impact on the availability of jobs. The introduction of the digital spreadsheet meant that chartered accountants could easily punch the numbers, leading to accounting clerks becoming surplus to requirements. But Jevons paradox teaches us that AI will lead to more work, not less: over time accountants needed fewer clerks, yet increases in financial activity have led to a greater need for auditors. So we will still need people in jobs to do thinking, reasoning, assessing and other things people are good at.

Rather than replacing people with machines to reduce costs, technology should be used to empower human workers. We should augment the intelligence of our people, not replace it. That means using things like large language models (LLMs) to reduce the inertia of the blank page problem, helping with brainstorming, rather than asking an LLM to write something for you. Extensive not intensive technology.

### Local and open first

Right now, we’re in a hype cycle, with lots of enthusiasm, funding and support for generative AI. The boom of a hype cycle is always followed by a bust, and AI winters have been common for decades. If you add AI to your product or service and rely on a cloud-based supplier for that capability, you could find the supplier goes into administration – or worse, enshittification, when fees go up and the quality of service plunges. And free services are monetised eventually.

But there are lots of openly-available generative text and vision models you can run on your own computer – your ‘local machine’ – breaking the reliance on external suppliers. When exploring how to apply generative AI to a client’s problem, we’ll always use an open model and run it locally first. It’s cheaper than using a third party, and it’s more sustainable too. It also mitigates some risks around privacy and security by keeping all data processing local, not running on a machine in a data centre. That means we can get started sooner and do a data protection impact assessment later, when necessary. We can use the big players like OpenAI and Anthropic if we need to, but let’s go local and open first.

### There will be consequences

People like to think of technology as a box that does a specific thing, but technology impacts and is impacted by everything around it. Technology exists within an ecology. It’s an inescapable fact, so we should try to sense the likely and unlikely consequences of implementing generative AI – on people, animals, the environment, organisations, policy, society and economies.

That sounds like a big project, but there are plenty of tools out there to make it easier. We’ve used tools like consequence scanning, effects mapping, financial forecasting, Four Futures and other extrapolation methods to explore risks and harms in the past. As responsible people, it’s our duty to bring unforeseen consequences more into view, so that we can think about how to mitigate the risks or stop.

### Outcomes over outputs

It feels like everyone’s doing something with generative AI at the moment, and, if you’re not, it can lead to feeling left out. But this doesn’t mean you have to do something: FOMO is not a strategy. We’ll take a look at where generative AI might be useful, but we’ll also recommend other technologies if those are cheaper, faster or more sustainable. That might mean implementing search and filtering instead of a chatbot, especially if it’s an interface that more people are used to. It’s more important to get the job done and achieve outcomes, instead of doing the latest thing because it’s cool.

## Let’s be pragmatic

Ultimately our approach to generative AI is like any other technology: we’re grounded in practicality, mindful of being responsible and ethical, and will pursue meaningful outcomes. It’s the best way to harness its potential effectively. Beware the AI snake oil.
boringmagi.cc
visitmy.website
This is an example of why I wrote that ‘Local and open first’ position on using generative AI last December.
smcgrath.phd
Looking at GenAI costs in healthcare: For medical billing, a local AI model was more accurate and faster than GPT-4. Meanwhile, using commercial LLMs at scale could incur annual API costs of $115k to $4.6M, posing a significant financial challenge for healthcare systems.
#MedSky #MedAI #MLSky
Generative AI costs in large healthcare systems, an example in revenue cycle - npj Digital Medicine
npj Digital Medicine - Generative AI costs in large healthcare systems, an example in revenue cycle
www.nature.com
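The paper linked above puts annual commercial-API costs somewhere between $115k and $4.6M; the shape of that kind of estimate is easy to sketch. A minimal back-of-envelope cost model in Python, where every number is an illustrative assumption and not a figure from the paper:

```python
# Hypothetical annual cost of calling a commercial LLM API at scale.
# All inputs are assumed, illustrative values.
calls_per_day = 10_000        # assumed request volume
tokens_per_call = 2_000       # assumed prompt + completion size
price_per_1k_tokens = 0.01    # assumed USD price per 1,000 tokens

annual_tokens = calls_per_day * 365 * tokens_per_call
annual_cost = annual_tokens / 1_000 * price_per_1k_tokens
print(f"${annual_cost:,.0f} per year")  # → $73,000 per year
```

Scaling any of the three inputs by an order of magnitude moves the total across the paper’s quoted range, which is why per-call pricing becomes a real budgeting question for high-volume systems, and why a local model with fixed hardware costs can win.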
visitmy.website
Happy birthday!
Reposted by steve
bencollins.bsky.social
This is the funniest thing AI has ever done.
criminalerin.bsky.social
TFW you're sent to AIHR for using AI at work
visitmy.website
The thought of switching from product to engineering crossed my mind this week. 16-year-old me really wanted to be a programmer, but no one around me knew it was a decent career, so I took a humanities route instead. Now I do computers anyway (and love it!).

Maybe I should go back to 16-y.o. me