Sayash Kapoor
@sayash.bsky.social
CS PhD candidate at Princeton. I study the societal impact of AI.
Website: cs.princeton.edu/~sayashk
Book/Substack: aisnakeoil.com
Instead of thinking of Operator as a "universal assistant" that completes all tasks, it is better to think of it as a task-template tool that automates specific tasks (for now).

Once a human has overseen a task a few times, we can estimate Operator's ability to automate it.
February 3, 2025 at 6:09 PM
OpenAI also lets you "Save" tasks you complete with Operator. Once you've completed a task and given the feedback it needs to succeed, you don't have to repeat that guidance the next time.

I can imagine this becoming powerful (though the feature is bare-bones right now).
February 3, 2025 at 6:09 PM
But things went south quickly. It couldn't match the receipts to the correct amounts. Even when I prompted it with where to find the missing receipts, it couldn't download them. It almost deleted receipts I'd already uploaded for other expenses!
February 3, 2025 at 6:07 PM
It navigated to the correct URLs and asked me to log into my OpenAI and Concur accounts. Once I was logged in, it downloaded receipts from the correct URL, and even started uploading them under the right headings!
February 3, 2025 at 6:07 PM
I asked Operator to file reports for my OpenAI and Anthropic API expenses for the last month. This is a task I do manually each month, so I knew exactly what it would need to do. To my surprise, Operator got the first few steps exactly right:
February 3, 2025 at 6:06 PM
OpenAI's Operator is a web agent that can solve arbitrary tasks on the internet *with human supervision*. It runs in a virtual machine (*not* on your computer), and users can watch what the agent is doing in the browser in real time. It is available to ChatGPT Pro subscribers.
February 3, 2025 at 6:05 PM
I spent a few hours with OpenAI's Operator automating expense reports. Most corporate jobs require filing expenses, so Operator could save *millions* of person-hours every year if it gets this right.

Some insights on what worked, what broke, and why this matters for the future of agents 🧵
February 3, 2025 at 6:04 PM
Improving the information environment is inextricably linked to the larger project of shoring up democracy and its institutions. No quick fix can “solve” our information problems. But we should reject the simplistic temptation to blame AI.
December 16, 2024 at 3:10 PM
We've heard warnings about new tech unleashing waves of misinfo before: GPT-2 in 2019, LLaMA in 2023, the Pixel 9 this year, and even photo editing and retouching back in 1912. None of the predicted waves of misinfo materialized.
December 16, 2024 at 3:09 PM
Similar trends were seen worldwide. In India, AI was used for trolling rather than misinformation. In Indonesia, AI was used to create cartoon avatars that softened a candidate's image. Of course, the cost of creating avatars without AI is minuscule for presidential campaigns.
December 16, 2024 at 3:08 PM
But research shows that people actively seek content that confirms their beliefs. This also explains why cheap fakes are effective: it is much easier to convince someone of misinformation if they already agree with its message.
December 16, 2024 at 3:06 PM
So why the focus on AI? Our hypothesis is that analyses of misinfo focus mainly on the supply of misinfo, while largely ignoring the demand. AI does make content creation cheaper, but that doesn't matter much - distribution and attention have always been the limiting factors.
December 16, 2024 at 3:06 PM
In fact, the entire WIRED database contained only 78 instances of AI use in elections worldwide. Traditional "cheap fakes" - like edited photos or slowed-down videos - were far more prevalent. In US elections, cheap fakes appeared seven times more often than AI-generated content.
December 16, 2024 at 3:06 PM
When AI was used deceptively, the content could have been created without AI for just a few hundred dollars, using video editing or Photoshop. The technology isn't the bottleneck - anyone determined to create false content already could.
December 16, 2024 at 3:05 PM
There were even examples of AI use that could improve the information environment, such as privacy for journalists concerned about government action, education about AI, and using AI voice cloning to greet voters despite having laryngitis.
December 16, 2024 at 3:04 PM
We were surprised to find that half of all AI use in elections wasn't deceptive. Political campaigns used AI transparently, mainly to improve their outreach, without attempting to spread false information. In 19 of the 22 cases of AI use in campaigning, there was no deceptive intent.
December 16, 2024 at 3:03 PM
More than 60 countries held elections this year. Many researchers and journalists claimed AI misinformation would destabilize democracies. What impact did AI really have?

We analyzed every instance of political AI use this year collected by WIRED. New essay w/@random_walker: 🧵
December 16, 2024 at 3:02 PM
Folks in San Francisco: my AI Snake Oil book talk is *today* at 5:30pm at Book Passage (SF Ferry Building).

Come through to discuss the future of AI, why AI isn't an existential risk, how we can build AI in/for the public, and what goes into writing a book.

Looking forward to seeing some of you!
November 18, 2024 at 4:30 PM
I'm ecstatic to share that preorders are now open for the AI Snake Oil book! The book will be released on September 24, 2024.

@randomwalker.bsky.social and I have been working on this for the past two years, and we can't wait to share it with the world.

Preorder: princeton.press/gpl5al2h
April 10, 2024 at 2:08 PM