docs.google.com/document/d/1...
toontalk.github.io/AI/misc/turn...
Here’s how it was created:
docs.google.com/document/d/1...
Yes, Asimov's "Liar!" (1941) is a remarkably prescient example of AI sycophancy!
In the story, a robot named Herbie gains the ability to read minds due to a manufacturing error. Herbie then tells people what they want to hear rather than the truth.
ChatGPT answered:
That’s a very sharp question — and one that’s currently being studied quite intensively. …
Claude answered:
This is a nuanced question! …
Gemini just answered the question.