Aaron Sterling
aaronsterling.bsky.social
CEO, Thistleseeds. Personal account.
Current primary project: tech for substance use disorder programs.
Pinned
My Bluesky-to-real-life collaborations:

1. we just hired the extremely talented @drawimpacts.bsky.social for a design project

2. The estimable @roxanadaneshjou.bsky.social
and I wrote a rapid response to a BMJ (British Medical Journal) article. www.bmj.com/content/387/...
The BMJ: browse by volume/issue, medical specialty or clinical topic
Full archive, searchable by print issue, specialty, clinical & non-clinical topic. See also for podcasts, videos, infographics, blogs, reader comments, print cover images
www.bmj.com
Chris Wilson (whom I consider an expert on user-first gamification) on dark patterns in online games. These are experiential #UX patterns, as opposed to visual #UI dark patterns. youtu.be/OCkO8mNK3Gg
Dark Patterns: Are Your Games Playing You?
YouTube video by Chris Wilson
youtu.be
November 16, 2025 at 1:42 AM
Reposted by Aaron Sterling
Some pretty eye-opening data on the effect of AI coding.

When Cursor added agentic coding in 2024, adopters produced 39% more code merges, with no sign of a decrease in quality (revert rates were the same, bugs dropped) and no sign that the scope of the work shrank. papers.ssrn.com/sol3/papers....
November 13, 2025 at 5:18 AM
Reposted by Aaron Sterling
So it seems Krauss forwarded to Jeffrey Epstein an email I had sent him asking for responses to the allegations against him, and asked Epstein for advice on how to respond.
Here's Krauss emailing Epstein about the allegations detailed to him in @peteraldhous.com's email. Notably, Krauss does not deny #6 (that he groped a woman's breast during a selfie).
November 13, 2025 at 3:58 AM
Reposted by Aaron Sterling
Everyone's talking about AI 🤖 But who's keeping your rights in mind as it evolves?

Join us tomorrow, November 13 at 10 AM PT for a livestream on the risks of AI and how we can safeguard civil liberties online. eff.org/livestream-ai
EFFecting Change: This Title Was Written by a Human
Generative AI is like a Rorschach test for anxieties about technology, be they privacy, replacement of workers, bias and discrimination, surveillance, or intellectual property. Our panelists discuss
www.eff.org
November 12, 2025 at 11:27 PM
Reposted by Aaron Sterling
Boston’s Mayor has to pick up the Christmas tree in person this year because if we shipped it, it’d be tariffed.
With Mayor @wutrain.bsky.social in Nova Scotia to watch Boston's tree start its journey, a reminder of how this tradition started on December 6, 1917. 🧵
By December 1917, Canada had been at war for three years, and the port of Halifax was a key part of the war effort, moving goods and troops to the European front. As two ships steamed through "the Narrows" that morning, they tried to make up time, passed too close, and collided.
November 12, 2025 at 8:28 PM
Just discovered this coding music channel and artist. Remarkable. www.youtube.com/watch?v=iu5r... cc: @haskell.org Also @unwoman.com you might get a kick out of this.
Coding Trance Music from Scratch (Again)
YouTube video by Switch Angel
www.youtube.com
November 12, 2025 at 12:11 AM
Reposted by Aaron Sterling
aws.amazon.com/blogs/machin...

Multi-agent AI systems are becoming increasingly practical for complex tasks. There are different architectural patterns in use today for how specialized agents can collaborate, each suited to specific business challenges and workflows. (1️⃣/3️⃣)

🧵
Multi-Agent collaboration patterns with Strands Agents and Amazon Nova | Amazon Web Services
In this post, we explore four key collaboration patterns for multi-agent, multimodal AI systems – Agents as Tools, Swarms Agents, Agent Graphs, and Agent Workflows – and discuss when and how to apply ...
aws.amazon.com
November 11, 2025 at 9:34 PM
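The "Agents as Tools" pattern named in that thread can be sketched in a few lines of plain Python: an orchestrator treats each specialist agent as a callable tool and routes requests to whichever one fits. This is an illustrative sketch with made-up names, not the Strands Agents or Amazon Nova API.

```python
# Minimal "Agents as Tools" sketch: the orchestrator exposes specialist
# agents as callables and routes each request with a simple keyword rule.
# All names here are illustrative, not a real framework's API.
from typing import Callable, Dict

def billing_agent(query: str) -> str:
    return f"[billing] handled: {query}"

def support_agent(query: str) -> str:
    return f"[support] handled: {query}"

class Orchestrator:
    def __init__(self) -> None:
        self.tools: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, agent: Callable[[str], str]) -> None:
        self.tools[name] = agent

    def route(self, query: str) -> str:
        # A real system would have an LLM choose the tool; a keyword
        # match stands in for that routing decision here.
        name = "billing" if "invoice" in query.lower() else "support"
        return self.tools[name](query)

orchestrator = Orchestrator()
orchestrator.register("billing", billing_agent)
orchestrator.register("support", support_agent)
print(orchestrator.route("Where is my invoice?"))
```

The other patterns in the post (swarms, agent graphs, workflows) differ mainly in who decides the routing: here a single orchestrator does, which is what distinguishes agents-as-tools.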
Reposted by Aaron Sterling
Most people should use a password manager, but there's no one-size-fits-all recommendation. ssd.eff.org/module/choo...
Choosing a Password Manager
Password breaches are a common occurrence, and if you use the same password on every site, that may grant access to bad actors who try out that password elsewhere to get into your accounts. The best way to protect yourself is to use a unique password everywhere (and two-factor authentication,...
ssd.eff.org
November 11, 2025 at 4:59 PM
Reposted by Aaron Sterling
Financial documents show Anthropic expects to break even in 2028, while OpenAI projects ~$74B in operating losses that year before turning a profit in 2030 (Berber Jin/Wall Street Journal)

Main Link | Techmeme Permalink
November 11, 2025 at 2:30 AM
Reposted by Aaron Sterling
Looks like Manassas City in Virginia was hit with a ransomware attack.

cc @andyjabbour.bsky.social
Manassas city schools closed Monday due to cybersecurity incident
Manassas City Public Schools will be closed on Monday after the school system experienced a cybersecurity incident over the weekend, Superintendent Kevin Newman announced Sunday.
www.insidenova.com
November 9, 2025 at 11:34 PM
It's important to separate OpenAI from LLMs in general. The Chinese open models are impressive, and Anthropic is posting numbers that show it as near-profitable (earlier than expected). Anthropic makes its money by building applications people use at real jobs, while keeping politics at arm's length.
A relatively small number of people in certain jobs say that ChatGPT and other LLMs have made them more productive at work. But in the overall economy, it does not look like net productivity is up.

Most of the supposed value is in sci-fi speculation. “Imagine a machine that cures cancer.”
I honestly don’t get the value of this company. They hoover up energy and water. Their product constantly gets things wrong and, in extreme cases, coaches people into suicide.

And it’s all built on what seems to be malicious and vast intellectual property theft.

What does OpenAI offer the world?
November 9, 2025 at 5:32 PM
Reposted by Aaron Sterling
Stupid Lake - Wikipedia
en.wikipedia.org
November 8, 2025 at 1:13 AM
Reposted by Aaron Sterling
This is heavy-handed.
If you work with a collaborator in China, this bill would:
1) Make you ineligible for new U.S. federal grants (as long as the collaboration exists)
2) Give you only 90 days to sever the connection, or face a ban of up to 5 years.
U.S. Congress considers sweeping ban on Chinese collaborations
Researchers speak out against proposal that would bar funding for U.S. scientists working with Chinese partners or training Chinese students
www.science.org
November 7, 2025 at 3:52 PM
The FDA meeting today about genAI therapy chatbots sounded as though they would recommend a Black Box Warning (the most serious type of FDA warning) around suicidality. This level of warning would justify a requirement for a human in the loop. (1/2)
November 7, 2025 at 2:31 AM
I haven't seen anyone here posting about the FDA meeting about genAI and therapy chatbots. It's been going all day, and most of the public comments are excellent IMO. Webcast here: www.youtube.com/watch?v=F_Fo...
Digital Health Advisory Committee Meeting
YouTube video by VOLi LIVE
www.youtube.com
November 6, 2025 at 8:27 PM
Reposted by Aaron Sterling
You’re not “cleaning up” the UX.
You’re reducing support tickets and unlocking expansion.

Quiet work only stays tactical if you describe it that way.
Frame outcomes, not effort. That’s how invisible work becomes strategic work.
November 6, 2025 at 2:05 PM
Reposted by Aaron Sterling
For clinical prediction on structured EHR, do complex LLM pipelines beat simple count-based models?
A new preprint by @simocristea.bsky.social et al. shows wins are split. Count-based methods (like LightGBM) remain a strong, simple, and interpretable baseline.
#MedSky #MLSky #MedAI
Count-Based Approaches Remain Strong: A Benchmark Against Transformer and LLM Pipelines on Structured EHR
Structured electronic health records (EHR) are essential for clinical prediction. While count-based learners continue to perform strongly on such data, no benchmarking has directly compared them…
arxiv.org
November 6, 2025 at 4:41 PM
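The "count-based" idea behind the baseline in that preprint is easy to illustrate: each patient's history of EHR codes becomes a fixed-length vector of occurrence counts, which a gradient-boosted tree model such as LightGBM can consume directly. The codes and patients below are made up for illustration; this is not the paper's pipeline.

```python
# Sketch of count-based featurization for structured EHR: a patient's
# code history becomes a vector of per-code occurrence counts.
from collections import Counter

patients = {
    "p1": ["E11.9", "I10", "E11.9"],  # diabetes x2, hypertension
    "p2": ["I10", "J45.909"],          # hypertension, asthma
}

# Build a stable vocabulary over all observed codes.
vocab = sorted({code for codes in patients.values() for code in codes})

def count_vector(codes):
    counts = Counter(codes)
    return [counts[code] for code in vocab]

features = {pid: count_vector(codes) for pid, codes in patients.items()}
# A tree model (e.g. LightGBM) would train directly on these vectors.
print(vocab)        # ['E11.9', 'I10', 'J45.909']
print(features["p1"])  # [2, 1, 0]
```

Part of why such baselines stay competitive is exactly this simplicity: the features are interpretable (each column is one code's count) and need no pretraining.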
Reposted by Aaron Sterling
This study could be titled "what makes doctors quit"

The answers are saddening:
1) being a woman
2) practicing in a rural area
3) caring for sicker patients and dual-eligible patients

www.acpjournals.org/doi/10.7326/...
Trends in and Predictors of Physician Attrition From Clinical Practice Across Specialties: A Nationwide, Longitudinal Analysis: Annals of Internal Medicine: Vol 0, No 0
Background: The United States faces a predicted shortage of 36 500 physicians by 2036, with an increasing proportion of physicians leaving clinical practice or expressing an intent to do so. Evidence ...
www.acpjournals.org
November 6, 2025 at 3:20 PM
Reposted by Aaron Sterling
"Google is hatching plans to put artificial intelligence datacentres into space, with its first trial equipment sent into orbit in early 2027."
Google plans to put datacentres in space to meet demand for AI
US technology company’s engineers want to exploit solar power and the falling cost of rocket launches
www.theguardian.com
November 5, 2025 at 1:27 PM
Reposted by Aaron Sterling
Read the investigation:
How Moderna, the company that helped save the world, unraveled
www.statnews.com/2025/10/30/m...
How Moderna, the company that helped save the world, unraveled
Exclusive: The inside story of why Moderna now faces a crisis unlike any in its 15-year-history.
www.statnews.com
November 5, 2025 at 4:31 PM
Reposted by Aaron Sterling
The FDA’s Digital Health Advisory Committee (DHAC) will convene to discuss nitty gritty details around the regulation of therapy chatbots and other mental health devices that use generative AI, @mariojoze.bsky.social reports:

www.statnews.com/2025/11/05/f... via @statnews.com
FDA digital advisers confront risks of therapy chatbots, weigh possible regulation
FDA's digital advisors could nudge the agency to clarify how its rules apply to medical applications of generative AI, including therapy chatbots.
www.statnews.com
November 5, 2025 at 2:28 PM
Reposted by Aaron Sterling
Anthropic Model Deprecation Process

Anthropic sweetly asked Sonnet about its preferences in how it wanted to be deprecated

in addition:
- no, still not open weights
- preserving weights and keeping models running internally
- letting models pursue their interests

www.anthropic.com/research/dep...
November 4, 2025 at 10:26 PM
I was talking with an investor who told me about a project he planned to pitch to a high-level person in the Democratic Party. I advised him not to burn his Republican contacts, because some Dems had begun floating anti-AI as a campaign wedge issue. (Due in part imo to how strident anti-AI is here.)
Honestly mind-boggling to see that the amount of deep skepticism and distaste for "AI" in its present state here is not nearly enough for the median bsky poster.
There's a plausible world where even Timnit would get venom on bsky in a few months lol -- mostly a lot of blind rage at this point.
November 4, 2025 at 10:24 PM
Reposted by Aaron Sterling
The FDA’s Digital Health Advisory Committee meets next week to discuss whether generative AI could be approved for mental health treatment. No such tools have been cleared yet, but this could shape how digital mental health is regulated going forward.

www.politico.com/newsletters/...
FDA to consider chatbot therapy for mental health
www.politico.com
November 4, 2025 at 6:37 PM
Reposted by Aaron Sterling
Conjure up all the esoteric philosophical arguments you want as an educator, but you're not doing kids any favors by sending them out into this world ignorant of AI tools.

Abstinence-only AI education isn't any better than the sex variety.
This is why finding a place for it in the classroom is in fundamental tension with what the classroom itself has been for. I do not say it has no uses; not at all. I do say that the most obvious uses it finds in the classroom run directly counter to the goals of education, and inhibit their pursuit.
April 11, 2025 at 3:44 PM