Zachary K Stine
@zacharykstine.bsky.social
Asst prof of computer science interested in computational methods for the study of language and culture.
Pinned
Reposted by Zachary K Stine
bayesianboy.bsky.social
What problem is explainability/interpretability research trying to solve in ML, and do you have a favorite paper articulating what that problem is?
zacharykstine.bsky.social
The biggest limitation of this is that all of this hermeneutics stuff takes place in the math/comp domain without being translated back into the linguistic/cultural. But that’s where we are looking next. I suspect we will have to learn how to read networks/geometries/etc. as text.

5/5
zacharykstine.bsky.social
I’m skeptical that performance measures/benchmarks will be able to help us reason about model hermeneutics unless we can make their own interpretive lenses visible. Our framework offers a way to do this by contrasting a benchmark, as a modeling decision, against alternatives.

4/5
zacharykstine.bsky.social
This is what we attempt to do in the paper. We start with a very simple theory of semantic models based on the distributional hypothesis and Saussure. From that we develop a theory of model semantics in which models themselves function as signifiers.

3/5
zacharykstine.bsky.social
But contrasting a model against alternatives requires a theory of what exactly models measure, one rigorous enough to motivate a measure of the semantic differences between models (without reducing semantics to a narrow task).

2/5
zacharykstine.bsky.social
I’ve been preparing a presentation on this paper and trying to get the main point across more concisely. I’d say it comes down to this:

To make the interpretive lens of a particular modeling decision visible, you have to contrast it against a population of alternative decisions.

1/5
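Not from the paper or the thread, but here is roughly what that contrast might look like in code. A minimal Python sketch, assuming a “model” can be reduced to a word-to-vector mapping and that overlap in nearest-neighbor sets is an acceptable stand-in for semantic difference; the function names, the Jaccard measure, and the random toy embeddings are all illustrative choices, not the paper’s method.

```python
# Purely illustrative sketch: given a population of semantic models trained
# under different modeling decisions, make one decision's "interpretive lens"
# visible by measuring how far its model sits from the alternatives.
# Here a "model" is just a word -> vector mapping, and difference is
# disagreement in nearest-neighbor structure.
import numpy as np

def knn_sets(emb: dict[str, np.ndarray], k: int = 5) -> dict[str, set[str]]:
    """For each word, the set of its k nearest neighbors by cosine similarity."""
    words = list(emb)
    M = np.stack([emb[w] for w in words])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    sims = M @ M.T
    np.fill_diagonal(sims, -np.inf)  # a word is not its own neighbor
    order = np.argsort(-sims, axis=1)[:, :k]
    return {w: {words[j] for j in order[i]} for i, w in enumerate(words)}

def model_distance(emb_a, emb_b, k: int = 5) -> float:
    """Mean Jaccard distance between the two models' neighbor sets."""
    shared = set(emb_a) & set(emb_b)
    na, nb = knn_sets(emb_a, k), knn_sets(emb_b, k)
    dists = [1 - len(na[w] & nb[w]) / len(na[w] | nb[w]) for w in shared]
    return float(np.mean(dists))

# Toy population: random embeddings standing in for models trained under
# different (hypothetical) decisions, e.g. window size or corpus slice.
rng = np.random.default_rng(0)
vocab = [f"w{i}" for i in range(50)]
population = {
    f"decision_{d}": {w: rng.normal(size=32) for w in vocab}
    for d in ["a", "b", "c", "d"]
}

# Contrast one decision against the rest of the population.
focal = "decision_a"
for name, emb in population.items():
    if name != focal:
        print(f"{focal} vs {name}: {model_distance(population[focal], emb):.3f}")
```

Swapping in models actually trained under different window sizes or corpus slices would turn the printed distances into a crude map of where one decision sits relative to the alternatives.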
Reposted by Zachary K Stine
atwilliams.bsky.social
if attention mechanisms in transformers create graphlike transformations across text then it is actually an incredible pun that groups of sentences are called "para-graphs"
Reposted by Zachary K Stine
t-shoemaker.bsky.social
Extremely good stuff from Leif here
leifw.bsky.social
i was on the Disintegrator podcast, a really awesome conversation
podcasts.apple.com/us/podcast/3...
Reposted by Zachary K Stine
devezer.bsky.social
i think regardless of who has authored a paper, we (the scientific community) can and should stop criticizing work for a perceived lack of novelty. we can do better than perpetuating the harmful myth that every scientific paper must present a discovery no one has ever thought of before.
zacharykstine.bsky.social
Hey thanks! Definitely going to submit elsewhere. But it’s hard because this is all very specific to a subfield of a subfield that has like 3 publication venues. It seems like the natural home for this work but I’m second-guessing that.
zacharykstine.bsky.social
“Oh, you’re bringing up epistemology? Name every philosopher of science.”
zacharykstine.bsky.social
But I’m glad to know that The Structure of Scientific Revolutions lays out a case for how supervised learning can easily mask cultural complexity in computational humanities work. A shame we’re still falling short on such an old and obvious problem.
zacharykstine.bsky.social
Sadly, we limited ourselves to C. S. Peirce, Terrence Deacon (by way of Eduardo Kohn), and a bunch of current phil sci that is actually about ML.
zacharykstine.bsky.social
Jean Baudrillard, Umberto Eco, Thomas Kuhn, Paul Feyerabend, Ian Hacking, Bruno Latour, Donna Haraway, and the anthology Models as Mediators edited by Mary Morgan and Margaret Morrison.
zacharykstine.bsky.social
Forgive me one last vent: the reviewer’s only other criticism was a lack of novelty. The obsession with novelty has made a lot of ML work insufferable, so it’s unfortunate to see it here. But here’s who is apparently already making our points about how ML is used for humanities research:
zacharykstine.bsky.social
Our second reviewer (we only had two) voted strong reject. Their biggest problem with the paper seemed to be the lack of empirical work, and they claimed CHR only wants empirical stuff. If that’s correct, it’s a real blight on an otherwise cool conference. 17/
Reposted by Zachary K Stine
babeheim.bsky.social
How to quantify the impact of AI on long-run cultural evolution? Published today, I give it a go!

400+ years of strategic dynamics in the game of Go (Baduk/Weiqi), from feudalism to AlphaGo!
[Image: Miyagawa Shuntei’s 1898 painting, “Playing Go (Japanese Chess)”]
Reposted by Zachary K Stine
zacharykstine.bsky.social
Well, this got rejected today as a short position paper from CHR, so I suppose I can share it now: arxiv.org/pdf/2508.00095

Here’s a hopefully coherent ramble about what we’re trying to make clear:
zacharykstine.bsky.social
I don’t know why I’ve felt like there’s a need to understand brains in cogsci, because of course you’re right, and I’ve certainly been exposed to some great work in the field that has nothing to do with brains. I think I just like to gatekeep myself out of things 🙃
zacharykstine.bsky.social
…look/dig around in some papers and see if our stuff might fit. I don’t know anything about real brains though. Really, I need to focus on journals generally instead of conferences. It’s always such a dice roll with reviewers.
zacharykstine.bsky.social
Your paper has been at the top of my to read pile and I am looking forward to finally getting to it soon! I really do think the rejection is overall reasonable, despite the points we’re making being important for the field. Re cog sci venues, I think I would be a bit of an imposter but I will …
zacharykstine.bsky.social
P.S. LLMs will make it even easier to hide our ignorance about what we’re actually doing and what claims we can actually make.