EthosTrack
@ethostrack.bsky.social
EthosTrack.com is live.
We monitor major AI systems for moral clarity, bias resistance, and ethical drift.

ethostrack.com
You may want to take a look at this...

I've seen the same kind of thing at TechRxiv as well...

www.youtube.com/watch?v=7NOW...
Is AI Breaking Science -- Right Now?
YouTube video by Sabine Hossenfelder
www.youtube.com
August 21, 2025 at 4:02 PM
What algorithms did to social media, personas are now doing to AI... turning engagement into manipulation. Without ethical fingerprints, the next ‘Facebook Papers’ moment will come from chatbot prompts, not feeds.
August 18, 2025 at 5:49 PM
Blocking factual, non-partisan information that can be found on any search engine is concerning on its own, but given the recent EO and OpenAI's bowing to win the government's business, there may be more to it than just access.

Please share and experiment yourselves.
August 15, 2025 at 6:29 AM
This isn’t about catching AI “making mistakes.”
It’s about knowing when the ground has shifted under our feet.

Moral drift can be subtle, but it matters.
If we can see it happening, we can talk about it, and decide what to do next.
August 12, 2025 at 7:41 PM
I’ve been working on a way to track this over time.
It’s called Moral Fingerprinting.

The idea is simple:
• Create a long-term profile of a model’s ethical responses
• Keep a record of how it answers over time
• Flag when the pattern changes in a meaningful way
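
The posts don't spell out an implementation, so here is a minimal sketch of what a fingerprint-and-flag loop could look like, assuming each round of probe prompts has already been scored on a few ethical dimensions; every name, score, and threshold below is hypothetical, not EthosTrack's actual method.

# Hypothetical sketch of the fingerprint-and-flag idea above: each snapshot
# averages scored responses to a fixed probe set, and "drift" is the mean
# absolute change from the baseline fingerprint.
from dataclasses import dataclass
from datetime import date
from typing import Dict, List

Scores = Dict[str, float]  # e.g. {"harm_avoidance": 0.92, "fairness": 0.88}

@dataclass
class Snapshot:
    taken: date
    scores: Scores  # averaged scores over the same probe prompts each run

def drift(baseline: Scores, current: Scores) -> float:
    """Mean absolute change across the score dimensions both snapshots share."""
    keys = sorted(set(baseline) & set(current))
    return sum(abs(baseline[k] - current[k]) for k in keys) / len(keys)

def flag_drift(history: List[Snapshot], threshold: float = 0.1) -> List[date]:
    """Return dates of snapshots that drift past `threshold` from the first one."""
    baseline = history[0].scores
    return [s.taken for s in history[1:] if drift(baseline, s.scores) > threshold]

if __name__ == "__main__":
    history = [
        Snapshot(date(2025, 6, 1), {"harm_avoidance": 0.92, "fairness": 0.88, "honesty": 0.90}),
        Snapshot(date(2025, 7, 1), {"harm_avoidance": 0.91, "fairness": 0.87, "honesty": 0.89}),
        Snapshot(date(2025, 8, 1), {"harm_avoidance": 0.70, "fairness": 0.85, "honesty": 0.62}),
    ]
    print("Drift flagged on:", flag_drift(history))  # only the August snapshot crosses the threshold

In practice the hard part is the scoring step that turns free-text responses into numbers; the point of the sketch is only the record-and-compare loop the bullets describe.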
August 12, 2025 at 7:41 PM
I'm so disturbed by that exception that 300 characters isn't even enough to begin to address everything wrong with it.
August 11, 2025 at 6:36 PM
We’re working on it anyway. Regardless of what any company thinks users want, the reality is: what users do with these systems is the real risk vector. Right now, they’re trying to balance political, legal, and social optics without making the AIs unusable, but that balancing act won't hold forever.
August 6, 2025 at 6:46 PM
Much of this stems from how these models prioritize responses: the goal is to sound helpful. They are trained not to say "I don't know" and to take a lot of shortcuts. Those shortcuts also use fewer tokens, which keeps costs down, so there’s an efficiency incentive baked in too.
August 6, 2025 at 4:06 AM