Dr Grace, Amidst Monsters
@seifely.bsky.social
95 followers 1.2K following 20 posts
Artist. Professional. AI PhD/Psych MSc. I like dogs, dry wit and dice-based tabletop games. Hiker and cat parent 🌸 Bi/Poly, she/her.
seifely.bsky.social
If it's the same pathway as we use to evaluate other people (even to a partial degree) then is it all that reliable? Is our interaction with it biased and flawed in new ways? (Yes). (Also I'm just going to start saying The Vibes Are Off every time Excel does something unexpected and upsetting).
seifely.bsky.social
... Having to build up an impression of something shapeless through multiple interactions to determine its accuracy, your trust in it, your desire to use it, and the usefulness of the technology is uniquely weird. Uniquely superstitious. It activates a different type of mental analysis.
seifely.bsky.social
Wondering if there is any quantifiable difference in the social tone of these evaluations compared to other software. It clearly links to model transparency and black-boxiness, but I only ever use "feel" to evaluate a UI. I reckon this is a bit deeper than that - sure, text comms are the UI here, but ...
seifely.bsky.social
Even more interesting is evaluation beyond benchmarks. I've seen a bunch of responses to the model release that highlight just how curious the language is around model attachment. "The vibes are off", "responses feel so different", "feels weird/jagged", "I can sense it was trained on safe responses"
seifely.bsky.social
But of course, why would anyone trust a for-profit service provider? Ever? (Even one who is ostensibly not so?)
seifely.bsky.social
Trusting the provider to provide a good experience tailored for you by choice of model best suited to each query, even mid-conversation, should theoretically be the best end-user case (though people will still be picky and superstitious and think they know best).
seifely.bsky.social
Second #AI thinky thought of the week! I'm not particularly read up on GPT-5's release yet but it seems interesting as a potential step towards a general-use agent through model swapping and how upset that is making users. It feels a lot more like a traditional SaaS experience given the complaints.
seifely.bsky.social
Anyway, no conclusions here just yet. Just interesting thoughts about the specificity of this question given the uniqueness of the context. Children talking about their fears to teddies and learning to feel could be a fun comparator (though Teddy isn't owned by Capitalist Megacorp, of course). 🐻
seifely.bsky.social
And friends, too, to be honest. Romantic relationships are a unique thing of their own, I think, and probably warrant separate analysis. The automatic trust involved with an artificial system in any relationship style is also super interesting! I suppose it's the lack of threat that enhances bonding.
seifely.bsky.social
Anthropic suggest that Claude "rarely pushes back" in counselling-style conversations (which aligns with that article about Replika etc., but interacts interestingly with ChatGPT's recent sycophancy issues). But anyone who's engaged with therapy genuinely knows that's half the point of a counsellor.
seifely.bsky.social
I don't think it's as easy to define affective skill improvement or degradation as it is coding or writing capacity. Even with an artificial and unrealistic conversational partner, can this be translated into social improvement? Especially if a more stable emotional state is achieved generally?
seifely.bsky.social
I'm also thinking about this "lonely" portion of users. Those on Reddit talking about AI companionship saying that they feel warm and fuzzy each day from even thinking about using the system. Are these interactions clearly improving or degrading affective skills?

www.reddit.com/r/lonely/com...
From the lonely community on Reddit
seifely.bsky.social
My queries with this are around benefits to users, surprisingly. We're beginning to better outline the decline of authentic human skills with greater AI use, but so far I don't think anyone has considered the bleed effects of companionship use. And I'm not just thinking about misvalidation of delusions...
seifely.bsky.social
That line's getting blurrier with the introduction of Grok's companion mode - "I'm clocking off for the day, let's let the Misa Misa skin for my agent out of her box". You can even automate the growth of your relationship, if you need her to take her clothes off ASAP.

vchavcha.com/en/free-reso...
Grok AI Companion Ani Complete Guide to Affection and Interactions - vchavcha.com
Elon Musk’s AI startup xAI has launched “Companion Mode” for its chatbot Grok, featuring virtual avatars for a more immersive and interactive experience. The most popular character, Ani, resembles Mis...
seifely.bsky.social
- just a month after Anthropic were discussing how affective interactions supposedly made up less than 3% of Claude conversations.

www.anthropic.com/news/how-peo...
How people use Claude for support, advice, and companionship

Of course, products like Replika and Character AI are sold on a largely different premise to Claude. They're specifically for personal discourse.
seifely.bsky.social
First brain thinky thoughts of the week are around barriers to self-reporting and #AI companionship. 🧠

Anecdotal articles on this are prime clickbait/tabloid material, so they're difficult to navigate. I found it interesting that the Guardian ran this one last month

www.theguardian.com/tv-and-radio...
‘I felt pure, unconditional love’: the people who marry their AI chatbots
The users of AI companion app Replika found themselves falling for their digital friends. Until – explains a new podcast – the bots went dark, a user was encouraged to kill Queen Elizabeth II and an u...
seifely.bsky.social
I'm going to be posting every day this week (commitment!) to break the ice, because I've been avoiding posting here ever since I migrated and enough is enough. I've built up a bank of thoughts about recent AI goings-on and I need to let them out! I may even review some papers, too. ☺️✌️
seifely.bsky.social
Right now I'm working for the NHS and I currently have two very sweet partners, @theeuphemism.bsky.social and @jmfgd.bsky.social. I also have a cat named after a naughty elf from the Silmarillion...a subject (elves) I particularly like to draw over at my art account (@morgul.bsky.social). 🎨
seifely.bsky.social
Alright, alright, I'm finally doing it. I'm posting!

If you don't know me, I'm Grace, I have a doctorate in AI (in which I mostly talked about feelings) and a background in a whole mix of things from psychology to robotics. I'm pretty confident everyone here knows me since I moved from Twitter...
seifely.bsky.social
He started ranting about being the "comely" woman in the pub, I feel like I wasn't communicating properly...