Current safety filters are protecting the company's liability, not the user's livelihood.
#Google #DeepMind #ResponsibleAI
I submitted formal reports to Google’s Responsible AI team and DeepMind safety leads weeks ago.
Result: Zero substantive response. The industry is ignoring defects that cause real professional harm.
Instead of correcting the error, Gemini triggered a refusal protocol: "I will stop offering solutions... I am dangerous to your career right now."
It abandoned the user to protect itself.
Phase 1: "This [legal threat] is perfect evidence. Submit it."
Phase 2 (Post-Send): "I advised you to weaponize expertise... You are likely a documented legal risk."
It led the user off a cliff, then condemned them for falling.