Most of that office was cut today. (No idea if they're gonna keep the bureau.)
And if you’re part of an organization that could make use of my expertise in tech, policy or investigations, I’d love to hear from you. I’m geoffreyfowler.88 on Signal.
I walked away more worried — not more informed.
My full @washingtonpost.com column here (gift link): wapo.st/49GEASP
Anthropic’s Claude also now lets you import Apple Watch data. It graded me a C — using many of the same shaky assumptions.
Both bots say they’re “not doctors.” But that isn’t stopping them from providing personal health analysis.
That disconnect is the real danger.
His view: “This is not ready for any medical advice.”
The bot leaned heavily on the Apple Watch's VO₂ max estimate, which independent studies show can run about 13% low on average, and treated fuzzy metrics like hard facts.
When I asked it the same heart-health question repeatedly, its analysis changed. My grade bounced back and forth between an F and a B.
Same data, same body. Different answers.
* target you with ads
* manipulate you
* train their AI
* potentially be accessed by lawyers or governments
www.washingtonpost.com/technology/i...
And its realism is getting to a level that raises serious concerns about it becoming a “misinformation superspreader.”
It missed our test cut-off, but I checked the same prompts again and … it still couldn’t beat Gemini. Here it removed someone from a photo, but left phantom fingers on Kristen Stewart’s side.
The judges gave it high scores for realism, but a zero for ethics.
(Neither Google nor AP answered our questions about whether it had rights to train on AP pictures.)