Henrik Warpefelt
@warpefelt.com
110 followers 130 following 66 posts
Science mercenary and magical thinking rock enthusiast. I mostly talk about data, AI, and games. Consulting inquiries: https://www.warpefelt.com/
warpefelt.com
Overall I think we'll continue to see a lot of productivity increases from genAI, but as it stands it's not going to upend entire industries. Like any technological revolution, it is going to change how people do their jobs and which jobs they do. Think plate setting in the analog vs. digital era of printing.
warpefelt.com
Another thing I think is missing from this report is the impact of AI on small- to medium-sized businesses. AI can assist in many ways (marketing etc., as above), but rolling out dedicated solutions for smaller businesses is still problematic. A non-giant company can't really bet big on a 5% chance of success.
warpefelt.com
Finally: I remain moderately bearish on generative AI as a complete disruptor of every single industry. It's having a big impact on tech and media, but I strongly suspect we'll see that wind change in the next few years as we discover the limits of current genAI tech.
warpefelt.com
8. Employment in most industries isn't actually that affected by generative AI. Tech and media are, as mentioned, being hit heavily, but other industries aren't expected to see much change. The promised AI revolution still isn't truly on the horizon. AI seems to be a good tool but not a killer app.
warpefelt.com
7. The way to win in the AI race is to land small, visible wins in narrow workflows. Fast deployment and integration are preferable to massive systems. Extrapolating from the report, this is likely connected to the widespread shadow usage of LLMs: it's super easy to just open a webpage and do "LLM stuff".
warpefelt.com
6b. This really hammers home an important message: Data is a commodity and having it leak can be disastrous for both people and companies. What happens to stuff put into LLMs is a huge security concern, which makes the potential shadow usage a lot more problematic.
warpefelt.com
6. Trust is a key concern for enterprises. Companies want to trust the vendor, trust that their data is handled properly, and trust that the vendor understands the company's workflow and adapts as needed. Companies also want to see improvement over time and minimal disruption to existing tools.
warpefelt.com
5. The main blocker for AI tool adoption is that the tools are kind of bad. The UX is poor, the output is of poor quality, or tools don't work as expected. The report also highlights the lack of adaptability in tools as a major issue. Basically, ChatGPT is still better than internal tools.
warpefelt.com
4. The largest share of AI investment (50%) is in sales and marketing. This tracks anecdotally: LLMs are good at generating derivative text, and advertising is pretty repetitive. It's probably easier for companies to get good ROI on investments here.
warpefelt.com
3. There's a HUGE shadow economy for AI usage! Only about 40% of companies surveyed have an enterprise LLM subscription, but workers at 90% of them use LLMs regularly. People seem ready to adopt LLMs as tools, but corporations are lagging. This could be a massive infosec problem!
warpefelt.com
2b. General purpose AI projects (think GPT/Claude wrappers) do much better: about a 40% success rate. That actually beats the overall software project success rate, although the two are measured using different metrics, so comparability is limited.
warpefelt.com
2. AI projects just aren't very successful. As per the report, only 5% of custom enterprise AI tools actually make it into production. The Standish Group reported in 2020 that about 31% of software projects succeed, so 5% is pretty dire even by software standards, even accounting for different measurement modes.
warpefelt.com
1. Most industries are seeing low disruption, with the exception of tech and media. Not entirely shocking considering what kinds of AI services exist, i.e. text and media generation. Tech is also not super regulated, and has a tradition of moving fast and breaking things.
warpefelt.com
This is a really interesting report with a somewhat misleading title. Come along as I ramble about AI adoption and this report 🧵

Also, in case anyone was wondering, you can't just "slap some AI on it" and expect revolutionary change. It just doesn't work like that.
AI-Generated “Workslop” Is Destroying Productivity
Despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is because AI tools are being used to produce “workslop”—content that appea...
hbr.org
warpefelt.com
Paper tl;dr: We construct 3 nested concepts (landmarks, monuments, beacons) that help us describe how complex generated artifacts are perceived by players, and how these artifacts can be composed to support player understanding of the game world.
Reposted by Henrik Warpefelt
alexblechman.bsky.social
Tech Guy: I have attained immortality!

Me: Wow, you figured out how to stop aging?

Tech Guy: No, I trained an AI on my emails to create a perfect copy of my personality. Then I uploaded it into a robot dog with M3GAN’s face. It will live forever

Robot: As a businessman I often schedule meetings
warpefelt.com
For the last year I've had the privilege of working with the Kennesaw State University (@kennesawstate.bsky.social) Game Studio, and now our first student-developed game is finally ready for release on Steam. Presenting Chiba, probably the cutest Sokoban-style game on the market!
Chiba on Steam
A colorful pixel art box pushing puzzle game where you control a culinary canine. With a variety of ingredients at disposal, utilize scorching grills, slippery butter, and sticky syrup as you solve al...
store.steampowered.com
warpefelt.com
Holy crap. That's a really, really good point. That makes this even more of an ethics nightmare.
warpefelt.com
Dr Fiesler is an actual ethicist and has a more informed take than mine.
cfiesler.bsky.social
Another week, another research ethics controversy.

TL;DR Researchers released a public dataset of 2B+ messages from 4M+ users on 3k+ "public" Discord servers. Usernames/IDs are anonymized.

But let's unpack this one... 🧵

www.404media.co/researchers-...
Researchers Scrape 2 Billion Discord Messages and Publish Them Online
A Brazilian team used Discord’s API to scrape 10% of its open servers.
www.404media.co
warpefelt.com
The ethical way of doing this would be some variant of participant observation, possibly with some kind of digital aid like scraping. However, acquiring consent from these communities is CRITICAL for a study like this. In this case consent was obviously not acquired, which is deeply problematic.
warpefelt.com
In essence, the social contract is that you join a Discord *community*. The idea is that you participate on equal terms with the other people in that community. However, these researchers didn't participate. They just scraped the data and didn't contribute to the community.
warpefelt.com
In their defense, the authors say that they used public Discord servers and anonymized user data, but this only prevents part of the harm to the users in those servers. There could be an argument here for this being publicly available data, but I don't think that holds water.
warpefelt.com
To clarify the problem: This paper uses data scraped from a bunch of Discord servers in violation of Discord's data scraping policies. The authors claim to have gotten consent as per the arXiv paper checklist, but the word "consent" doesn't appear in the paper.