Ethan Mollick
@emollick.bsky.social
31K followers · 150 following · 1.7K posts
Professor at Wharton, studying AI and its implications for education, entrepreneurship, and work. Author of Co-Intelligence. Book: https://a.co/d/bC2kSj1 Substack: https://www.oneusefulthing.org/ Web: https://mgmt.wharton.upenn.edu/profile/emollick
emollick.bsky.social
Paper showing what human work the American public thinks is morally permissible to replace with AI.

Surprisingly, people are already okay with AI doing 58% of occupations (if AI does it well/cheap). A floor of 12% of jobs (mostly caregiving & spiritual) would be morally repugnant to replace with AI
emollick.bsky.social
"Claude, write a two paragraph story proving Ted Chiang's point."

"Ah, but as an AI trying to write a good story, you ironically missed the point"
emollick.bsky.social
Sometimes I feel this way, too!
emollick.bsky.social
I think it is worth giving some frontier models a try for story writing; things have changed a lot, quickly. Now the failure modes for AI stories are actually interesting, as are the occasional successes.
emollick.bsky.social
Eh, only partially dunked my head in the bucket. Based on the research comparing human-written stories to AI-written ones, and on conversations with other writers, I think that AI can occasionally hit good or moving stories (though often manipulative in nature)
emollick.bsky.social
This is an interesting debate about AI stories between an OpenAI researcher who works on AI writing and one of the greatest living short story writers.

Now that we have machines that can write novel stories, and increasingly very good or moving stories, we need to think more about what that means.
emollick.bsky.social
You will know the big AI labs understand the actual source of most transformative AI use when they stop making “Dev Day” the main way they speak with & release products for users and start holding a “Non-technical Manager Day” as well (admittedly not a catchy name, but you get the idea)
emollick.bsky.social
A lot of people are worried about a flood of trivial but true findings, but we should be just as concerned about how to handle a flood of interesting and potentially true findings. The selection & canonization process in science has been collapsing already, with no good solution
emollick.bsky.social
Science isn't just a thing that happens. We can have novel discoveries flowing from AI-human collaboration every day (and soon, AI-led science), and we really have not built the system to absorb those results and translate them into streams of inquiry and into practice
emollick.bsky.social
Very soon, the blocker to using AI to accelerate science is not going to be the ability of AI (expect to see this soon), but rather the systems of science, as creaky as they are.

The scientific process is already breaking under a flood of human-created knowledge. How do we incorporate AI usefully?
emollick.bsky.social
The state of LLMs is messy: some AI capabilities (like vision) lag others (like tool use), while some have outright blind spots (image generation and clocks). And the expensive “heavy thinking” models are now very far ahead of all the other AIs that most people use, and are capable of real work

None of this is well-documented
emollick.bsky.social
Deleted this, not because it is wrong but because I probably should wait for a pre-publication or other confirmation of the proof before disseminating widely.
emollick.bsky.social
The obsession with AI for transformational use cases obscures the fact that there are a ton of small, but very positive and very meaningful, use cases across many fields.

In this case, AI note-taking significantly reduces burnout among doctors & increases their ability to focus on their patients.
erictopol.bsky.social
A.I.-generated clinic notes from ambient out-patient visits help clinicians in many ways, across 6 health systems jamanetwork.com/journals/jam...
emollick.bsky.social
It seems very likely there is an LLM involved in the pipeline between prompt and output.
emollick.bsky.social
Huh, Sora 2 knows a lot of things:

“Ethan Mollick parachuting into a volcano, explains the three forms of legitimation from DiMaggio, Paul; Powell, Walter. (April 1983). "The iron cage revisited: institutional isomorphism and collective rationality in organizational fields"”

(Only 15 second limit)
emollick.bsky.social
Has any company made real progress on new formal organizational/process approaches to software development with AI at the team or firm level? Agile broke; what is the sequel?
emollick.bsky.social
This seems like a pretty big finding on AI generalization: If you train an AI model on enough video, it seems to gain the ability to reason about images in ways it was never trained to do, including solving mazes & puzzles.

The bigger the model, the better it does at these out-of-distribution tasks