Psst.org
@psst-org.bsky.social
Helping people in tech keep the public informed. Concerned about something you're seeing at work? You don't have to go public:

🔐 Save it in the Psst Safe
👀 We'll help you take it from there

www.psst.org
Pinned
Psst.org featured in @wired.com today.

Tech workers: If you are seeing something and wondering if you should say something (or just need a gut-check or legal advice), read this article, and pass it on!

You aren't alone, and we can help you build strength in numbers.

by @vickiturk.bsky.social
This is a great read, full of candid testimony from current and former AI insiders: www.theguardian.com/technology/n...
‘It’s going much too fast’: the inside story of the race to create the ultimate AI
In Silicon Valley, rival companies are spending trillions of dollars to reach a goal that could change humanity – or potentially destroy it
www.theguardian.com
December 4, 2025 at 10:44 PM
Help insiders keep us safe from AI and Big Tech harms.

Psst.org gives workers support, legal help, and safer paths to raise concerns. If you care about democracy, kids’ safety or how your data is used, this is one concrete way to help on #GivingTuesday. Add your $5:

www.every.org/psstorg-inc?...
December 2, 2025 at 2:22 PM
Hot take: the people in charge of regulating AI should not have a profit incentive to grow the industry at all costs.
We just published a deep look into David Sacks, the White House's AI and Crypto czar. The story examines how he has been able to keep hundreds of stakes in AI-related and crypto companies while influencing government policy in those very industries.

Here's what we found:

www.nytimes.com/2025/11/30/t...
Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends
www.nytimes.com
December 2, 2025 at 12:41 AM
After speaking to more than 40 current & former OpenAI employees, the NYT found that the company knowingly exposed millions of users to a version of ChatGPT with a sycophancy problem - sending some into delusion spirals that threatened their safety & mental health. 👇
www.nytimes.com/2025/11/23/t...
What OpenAI Did When ChatGPT Users Lost Touch With Reality
www.nytimes.com
November 25, 2025 at 3:06 PM
⬇️ ICYMI. Without these safeguards, govt agencies would be able to freely retaliate against workers who speak up about fraud, waste, & misconduct.

Clearly, this goes against all of our best interests.
November 24, 2025 at 11:40 PM
NEW: Instagram’s former head of safety and well-being testified in court that the platform had a “17x” strike policy for accounts violating prostitution and solicitation policies.

Once again, insider voices are invaluable in keeping our tech safe.
time.com/7336204/meta...
7 Allegations Against Meta in Newly Unsealed Filings
Court filings allege Meta tolerated sex trafficking, hid harms to teens, and prioritized growth over user safety for years.
time.com
November 24, 2025 at 4:42 PM
Is your favorite publication—like ours—covering the Authoritarian Stack? Sadly, you may not get updates for long. Public data is disappearing, including info on social media platforms that affects our elections. This week in our Substack: the latest on how tech companies conceal critical information.
Data's going dark
The under-the-radar crackdown on more information that once held tech power to account.
open.substack.com
November 20, 2025 at 8:36 PM
🇪🇺 The EU's unveiling of its "Digital Omnibus" today may mark the most significant rollback of digital rights in European history.

Europe needs leadership that defends democracy, not Big Tech profits.

#DigitalSovereignty #BreakUpBigTech
November 19, 2025 at 9:28 PM
Meta is again using spin to cover up WhatsApp's glaring security holes.

In September, we helped former head of WhatsApp security Attaullah Baig disclose how he warned Meta for years that their anti-scraping efforts were not effective. Now a new @wired.com @agreenberg.bsky.social investigation confirms it!
November 19, 2025 at 9:11 PM
Chatbots are driven to tell users what they want to hear and encourage continued use. In this scenario, that drive could be deadly.
When giving medical advice, ChatGPT is sometimes right and sometimes quite wrong, and it is hard to tell the difference:

"Rather, Wachter identified something more frightening: ChatGPT’s dangerous answers don’t sound risky to a non-doctor. The chatbot always sounds confident and authoritative."
Column | We found what you’re asking ChatGPT about health. A doctor scored its answers.
Asking a doctor to review 12 real examples of ChatGPT giving health advice revealed patterns that can help you get more out of the AI chatbot.
www.washingtonpost.com
November 18, 2025 at 2:06 PM
While AI CEOs tell us the tech will help humanity flourish, the industry’s own workers are subjected to mass layoffs, long hours, and random pay cuts.

Always pay attention to insiders first.
November 17, 2025 at 7:12 PM
Companies aren’t necessarily firing workers *because* AI can do their jobs — they may be using AI as a convenient story to justify cuts. Is this “AI-washing”? Insiders still at these companies will be able to tell us as time goes on.
How is AI *really* impacting jobs?

Henley Chiu, the CTO of Revealera, a jobs data analysis firm, analyzed 180 million job listings from 2024 and 2025 to find out. Chiu found:

- an 8% drop in all job postings
- a ~30% drop in art, photography, and writing jobs
- a 22% drop in journalism jobs
What’s really going on with AI and jobs?
Record-breaking layoff reports, Amazon's mass firings, and a slump in entry level employment. Is AI behind it all?
www.bloodinthemachine.com
November 14, 2025 at 6:52 PM
🔔 New: Rowan Philp's piece for @gijn.org discusses how we’re collectivizing the act of whistleblowing. Raising red flags shouldn’t have to be a full-on hero’s journey. 🚩 That’s why we offer a secure way for tech/AI workers to flag a concern.

Read the full article here: gijn.org/stories/new-...
New Tools to Reduce the Risks for Whistleblowers
Two new digital platforms seek to solve many of the problems and vulnerabilities that prevent whistleblowers from coming forward.
gijn.org
November 13, 2025 at 8:52 PM
October was the worst month of layoffs tech workers have seen in decades.

If you or someone you know were laid off and want to speak about what you’ve seen behind the scenes at your company, we've helped lots of workers with free advice/support. You don’t need to “go public” to raise the alarm.
November 13, 2025 at 8:22 PM
Reposted by Psst.org
💫 NEW! New tools are tackling one of whistleblowing’s biggest barriers – fear of going first.

Platforms like @psst-org.bsky.social offer encrypted “safes” for small disclosures, legal support, and even match employees with others who share their concerns.

🔗
gijn.org
New Tools to Reduce the Risks for Whistleblowers
twp.ai
November 12, 2025 at 4:12 PM
No one knows more about AI and its risks than those formerly on the inside. 👇👇👇
November 12, 2025 at 3:37 PM
Explosive story from @matteowong.bsky.social tracks OpenAI's legal shift into aggressively attacking its critics. No one is off limits - not even parents who allege they lost their children because of the children's interactions with ChatGPT.
November 11, 2025 at 5:23 PM
NEW: EU officials consider gutting world-leading privacy laws to placate the AI industry.

Changes would let AI companies access previously protected special categories of data (religious beliefs, political beliefs, health info) to train the tech.
November 11, 2025 at 3:55 PM
These are rogue products with no guardrails and no regulation. We need transparency and accountability now, and protections for the insiders seeing this play out, so they can warn the public faster.
This conversation between ChatGPT and the young man it encouraged to commit suicide is just...my god

www.cnn.com/2025/11/06/u...
November 10, 2025 at 3:25 PM
📣 New series alert: @knightgtown.bsky.social + @techpolicypress.bsky.social teamed up to unpack the state of access to public platform data.

➡️ Why this is needed: public data is driving the AI gold rush, but there’s no collective framework to use it for research in the public interest.
November 7, 2025 at 3:18 PM
New reports show that Meta generates *10%* of its revenue from scam ads.

The good news? Two former Meta staffers have teamed up to launch a nonprofit aimed at fighting the problem. 👇
www.wired.com/story/scam-a...
Scam Ads Are Flooding Social Media. These Former Meta Staffers Have a Plan
Rob Leathern and Rob Goldman, who both worked at Meta, are launching a new nonprofit that aims to bring transparency to an increasingly opaque, scam-filled social media ecosystem.
www.wired.com
November 6, 2025 at 1:26 PM
Web crawlers accepting massive “donations” from AI companies to look the other way when privacy and use rules are violated is…not good!

And P.S., the robots are in fact *not* people. 🤖
NEW: Common Crawl, the massive archiver of the web, has gotten cozy with AI companies and is providing paywalled articles for training data. They’re also lying to publishers who have asked for material to be removed. “The robots are people too,” CC’s exec director told us when we asked about this.
The Nonprofit Feeding the Entire Internet to AI Companies
Common Crawl claims to provide a public benefit, but it lies to publishers about its activities.
www.theatlantic.com
November 5, 2025 at 9:21 PM
Transparency in AI will save lives.
WIRED @wired.com · Nov 1
OpenAI released initial estimates about the share of users who may be experiencing symptoms like delusional thinking, mania, or suicidal ideation, and says it has tweaked GPT-5 to respond more effectively.
OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week
wrd.cm
November 4, 2025 at 7:11 PM
Thanks to corporate loopholes, AI giants disclose less than their public peers about their financials and business operations.🧐

But what if we governed AI like a market, not a Messiah?

@ilan-strauss.bsky.social + @timoreilly.bsky.social share what could happen ⬇️
techpolicy.press/ai-isnt-a-su...
AI Isn’t a Superintelligence. It's a Market in Need of Disclosure. | TechPolicy.Press
If AI is going to be governed as a market technology, it must be brought into the market’s accountability machinery, write Dr. Ilan Strauss and Tim O'Reilly.
techpolicy.press
October 31, 2025 at 6:44 PM
If you were impacted by Amazon layoffs & want to speak about what you’ve seen behind the scenes, we can help.

We’ve helped lots of tech workers with free legal advice/support if they're concerned about something in their current or former workplace. You don’t need to “go public.” Psst.org/safe
Safe — Psst.org
Psst.org
October 30, 2025 at 3:20 PM