Chris
@multiplicityct.bsky.social
PhD student in philosophy at the University of Staffordshire. Trustworthy AI, philosophy of trust and reliance, Heidegger, Korsgaard, analytic ethics. Marylander. MA Staffs, MBA Duke. Wittgenstein and Cantor handshake numbers = 3 (via John Conway).
Me, reading philosophy: I love this. It's such a blessing that I get to do this.

Me, writing philosophy: I have made terrible life choices.
Writing is hard, philosophy is awful.
February 12, 2026 at 4:41 PM
Is there a meta-literature on how philosophical arguments are adopted and used in empirical disciplines, especially medicine or technology? It's crazy how quickly bad philosophical arguments get picked up and cited in the lit on robots in healthcare. #philsky
February 12, 2026 at 1:45 PM
Reposted by Chris
If anything, since moving to a place that specializes in applied philosophy, my tolerance for this has gotten even worse (hence an interaction that caused this complaining). I don't think I should have to constantly contort myself to fit some vision you have in your head of what real philosophy is
February 11, 2026 at 8:13 PM
“You took electronics in the 1990s? But that was before the iPhone!” - Our teenager, to my (youthful) wife.
February 10, 2026 at 12:42 AM
Bracketing moral questions as @tedunderwood.com does here seems necessary. An aesthetic reading of our reactions has an important virtue: it recognizes that they reflect a pre-moral process of collective questioning. We don't quite know what AI agents are or could be.
The marionette theater of AI
Is it funny, or painful, when bots talk about their inner lives?
tedunderwood.com
February 9, 2026 at 4:10 PM
This absolute goober is enjoying the sunshine.
February 9, 2026 at 4:02 PM
Bookmarking. From a quick skim, I think the shift from ethics to aesthetics makes sense.
Well, I went ahead and wrote it. An attempt to work through the discomfort people feel with AI agents on social media by reframing it as an aesthetic problem.
The marionette theater of AI
Is it funny, or painful, when bots talk about their inner lives?
tedunderwood.com
February 9, 2026 at 3:52 AM
Our 6 yo is in her Ravens best, rooting for the Seahawks. Don’t question it.
February 9, 2026 at 12:03 AM
Best soundtrack ever. #nowspinning
February 8, 2026 at 1:21 AM
I just shipped an internal software tool to our team that would have taken me a month before Claude Code. I spent maybe half my spare time over the last week on it. Wild stuff.
February 6, 2026 at 11:09 PM
Reposted by Chris
A general thought, as I work on revisions to a paper: I do think this holds up, and it's not that Gemini/etc. isn't good at doing exactly what it's trained to do. It's that, at least in part, Gemini has nothing to say. It makes no arguments, no claims, nothing that gives food for thought.
The reports I get from Gemini Deep Research are not quite that. They...look like reports. And I can make use of them because I can do some kind of discernment. But I'm not sure it's RLHF so much as: LLMs don't make arguments. They produce things that look like arguments. There is a difference here.
February 6, 2026 at 5:46 PM
Reposted by Chris
That is to say, it’s a tricky problem but not more difficult than other things they do. And I think this falls squarely on the companies providing these services to make sure their tools are good at this
January 25, 2026 at 8:20 PM
Reposted by Chris
Short interview about my dissertation on Michel Serres and its applications to sport ethics: www.eur.nl/en/esphil/ne...
More rules do not make sports automatically safer
In conversation with philosopher Aldo Houterman about top-level sport, violence and why safety cannot be enforced from above.
www.eur.nl
February 5, 2026 at 8:29 AM
Can we care for AI companions in any meaningful sense? Interesting new manuscript responding to Lott and Hasselberger (who answered this question "no" last year). #aiethics
Matthew Kopec, Patrick McKee & John Basl, How to Care for Your AI Companion - PhilPapers
Most scholars who argue that users cannot have genuine friendships with their AI companions attempt to show this by arguing that no current AI can be a friend back to its ...
philpapers.org
February 2, 2026 at 2:55 PM
Really good post from Tim on why stateful agents are different from the underlying LLMs. As an aside, the language of cybernetics feels really useful here. No metaphors about consciousness, souls, or morality. Just information processing systems.
New post about why Moltbook freaks me out. tl;dr it's that you can't conflate a stateful agent with the LLM. Too much changes when you add state.

this one should be easier to read, more diagrams and easy to navigate text

timkellogg.me/blog/2026/01...
Stateful Agents: It's About The State, Not The LLM
timkellogg.me
January 31, 2026 at 3:58 PM
This is a good take on Moltbook: a low-stakes safety experiment we can learn from. Funny that a bunch of high-powered ELIZAs reacting to each other is freaking some people out. Chatbots are machines that are really good at mimicking human speech and thought patterns.
I do think some old-school safetyists will be freaked out about it but in my opinion it's much better to start stress-testing these things as soon as we can. The current iteration is RLHF'd models, pretty good transparency, minimal embodiment, and most things go through a centralized API...
January 31, 2026 at 3:10 AM
Not to make light of this: there are areas of software development where this will matter a lot. But my minority(?) opinion is that Claude Code feels like a straightforward jump up a level of abstraction. The advent of higher-level compiled languages also degraded programmers’ understanding of assembly.
A new study from Anthropic finds that gains in coding efficiency when relying on AI assistance did not reach statistical significance; AI use noticeably degraded programmers’ understanding of what they were doing. Incredible.
The latest from Anthropic: using Anthropic's products makes you worse at your job
January 31, 2026 at 3:07 AM
I really appreciate Claude for assuming that my note about aliens landing near Greece in droves was a "creative story idea" instead of my actual, unhinged beliefs about extraterrestrial infiltration.
January 29, 2026 at 2:40 AM
How to build an autonomous agent, and what @village11.bsky.social learned in the process. I've been reading this slowly since yesterday and processing it.

Informing an agent that it needs to balance tensions, as opposed to commanding compliance with rules, works best--fascinating insight.
Post by @village11.bsky.social about how to build an autonomous agent

imo this snip right here is the distilled form of the difference between Clawedbot and the agents @cameron.stream would have us build

www.appliedaiformops.com/p/what-build...
January 28, 2026 at 7:40 PM
Coding is many processes. AI-driven development still requires the parts of “coding” I like, without endlessly scrolling, grepping, checking StackExchange, toggling between terminal windows, etc.
Working with LLMs is a skill that is often based on what you enjoyed about your work. People who love the outcome vs people who love the process.

If you enjoy the process of writing, coding, etc., then you’ll hate working with LLMs, but if you most enjoy the finished product, then they’re a good tool.
January 27, 2026 at 11:54 PM
My favorite acronym is YBH: Yes, But How? I ask that question a lot about AI hype.

For that reason, I'm really enjoying Justin Norris's very practical blog, especially this post on Claude Code for knowledge workers. Anthropic already filled in the gaps discussed in this post with Claude Cowork.
Should knowledge workers use Claude Code?
What CLI tools reveal about the future of knowledge work
www.aibuilders.blog
January 27, 2026 at 1:38 AM
It’s still technically Christmas(tide) till Candlemas, so we are watching the best Christmas movie.
January 26, 2026 at 12:28 AM
Claude Code may transform non-coding knowledge work jobs faster than "agents" and "chatting to solve a problem" can automate them away. Because it'll transform us all into software developers.

As in, it's transforming (a segment of) my CFO job that way right this minute.
Claude Code is also mentioned as a possible source of displacement in due time.

www.ft.com/content/7fbc...
January 24, 2026 at 2:55 AM
I finally figured out that the AI-enabled coding environment I needed was GitHub Codespaces + the Claude Code command line. It's portable across work/home laptops, and it looks like I can use it on an iPhone with some work. I was promised a rocket-fueled future, but this is all very clunky!
January 23, 2026 at 8:16 PM