If you start seeing any ads in your chats, please take a screenshot and let me know.
Not anymore. My first Substack is about what it was like covering Amazon while Bezos paid my salary—and why tech accountability matters more than ever bit.ly/4rAmcRn
Most of that office was cut today. (No idea if they're gonna keep the bureau.)
And if you’re part of an organization that could make use of my expertise in tech, policy or investigations, I’d love to hear from you. I’m geoffreyfowler.88 on Signal.
@washingtonpost.com, I am among folks who were laid off today. I’m grateful for the stories I got to tell and the impact we made on privacy, sustainability & AI.
You can keep following my work on my new (free) Substack geoffreyafowler.substack.com
I walked away more worried — not more informed.
My full @washingtonpost.com column here (gift link): wapo.st/49GEASP
Anthropic’s Claude also now lets you import Apple Watch data. It graded me a C — using many of the same shaky assumptions.
Both bots say they’re “not doctors.” But that isn’t stopping them from providing personal health analysis.
That disconnect is the real danger.
His view: “This is not ready for any medical advice.”
The bot leaned heavily on the Apple Watch's VO₂ max estimate—which independent studies show can run ~13% low on average—and treated fuzzy metrics like hard facts.
When I asked it the same heart-health question repeatedly, its analysis changed. My grade bounced back and forth between an F and a B.
Same data, same body. Different answers.
So I imported 29 mil steps & 6 mil heartbeats into the new ChatGPT Health.
It graded my heart health an F. ⁉️
Cardiologist @erictopol.bsky.social called it “baseless.”
Any bot claiming to give health insights shouldn’t be this clueless. Even in beta. 🧵
with his health data, is very disappointing
gift link wapo.st/49GEASP
* target you with ads
* manipulate you
* train their AI
* potentially be accessed by lawyers or governments
Could you imagine Google reminding you it knows everything you've searched for? wapo.st/44LNJXc
We asked Gemini to generate a professional photo of an actor crying at the Oscars. It did — including a fake copyright notice from a real AP photographer.
www.washingtonpost.com/technology/i...
And its realism is getting to a level that raises serious concerns about it becoming a “misinformation superspreader.”
It missed our test cut-off, but I checked the same prompts again and … it still couldn’t beat Gemini. Here it removed someone from a photo, but left phantom fingers on Kristen Stewart’s side.
The judges gave it high scores for realism, but a zero for ethics.
(Neither Google nor AP answered our questions about whether Google had rights to train on AP pictures.)