The Grumpy Data Scientist
@zaqdelinguist.bsky.social
210 followers 190 following 380 posts
Computational sociolinguist interested in how groups negotiate meaning and social coordination. Because I’ve been working with transformers since 2018, very not into AI hype. https://zaqari.github.io
zaqdelinguist.bsky.social
If the OP author truly believed that, then I challenge them to think about why they decided to defend LLMs rather than agreeing with and extending the prior utterance they responded to (like an LLM would).
timprice.bsky.social
I have pondered this sentence and am now dumber for having done so
That's a very dismissive construct of LLMs, IMHO. Yes, they are stochastic, and yes, they are 'robotic', but here's a sentence for you to ponder:

Humans are just advanced language output algorithms that spit out returns based on input and their reference library.
zaqdelinguist.bsky.social
This is why computer scientists (and computer science fetishists in silicon valley) need to take a gorram sociolinguistics class.
Reposted by The Grumpy Data Scientist
olivia.science
We end on an appeal to our fellow psychologists:

"To sit idly by while deskilling and displacing of [our]selves is normalised [...] serves [solely] the technology sector, which avoids criticism and self-reflection and prefers pseudoscience and misinformation."

10/
8 Do not Embrace AI
In this paper, we unpacked why we think psychologists need to be on high alert — not just to avoid another replication crisis, but to avoid the total collapse of our science. What we signpost in Table 1 may have been novel to readers until this point, but the deeper problems are absolutely known. Also, as Crystal Steltenpohl et al. (2023, pp. 9–10) state: “Intentions alone are not enough to move science forward. Creating responsible, considered processes for rigorously transparent open science requires involving interested parties from a wide range of backgrounds, perspectives, research areas, and training paradigms.”
Indeed, because many such warnings go unheeded — such as the need to cultivate shared values, and especially the principles of impartiality of researchers and academic freedom from corporate influence — we find ourselves in polycrises that affect our universities, political systems, planet, and ultimately all humanity. “When historians of science look back on the 2010s in social and personality psychology, the decade will likely stand out as a period of exceptional doubt and self-scrutiny in the field” (Schiavone & Vazire, 2023, p. 710). Why did we ever stop? Should we ever stop? Importantly, Hazel Rose Markus (2005, p. 180, emphasis added) explains that “Social psychology is often defined as the study of how people respond to and are influenced by other people.” Algorithms, chatbots, LLMs, machines, models, and other inanimate objects are not people — they are the products of people (Guest, 2024, 2025). And to paraphrase Rae Carlson (1984): What’s social about chatbots? Where’s the person in an LLM?
We must shore up our subfields against the slow but certain corrosive power wielded by the harmful nonsense that is modern displacement AI. To sit idly by while the deskilling and displacing of our students, participants, and selves is normalised — or, worse still, to profit from it — serves not science but the technology sector, which avoids criticism and self-reflection and prefers pseudoscience and misinformation.
Reposted by The Grumpy Data Scientist
autisticadvocacy.org
Generative AI changes what things mean because it focuses on words, not the meaning. It adds words, changes meaning, and makes "plain language" that is not accessible. We know, because we tested it. Generative AI has no place in plain language. autisticadvocacy.org/2025/07/asan...
Grey textured background. In the bottom right corner, there is a red circle with a line through it. Inside there is blue text that reads AI. Black text reads: ASAN Says No Generative AI in Plain Language! Full plain language statement on our website. The ASAN logo is at the bottom.
Dark grey textured background. There’s a lighter grey box with text that reads: Artificial intelligence is when a computer program does things that normally need to be done by humans. We call artificial intelligence “AI” for short. There are lots of different kinds of AI.
The way AI works is by using data from people. Data is information a computer can read. AI programs read data made by people to understand what to do.
Generative AI is a specific kind of AI. Generative AI can use data to make new things.
Generative AI can create many kinds of new things, like:
text
images
music/movies
and more
People have already started using generative AI to write plain language.
Grey textured background. Text reads: In our statement, we explain the reasons why people should not use generative AI to write in plain language. Plain language is an important part of making things accessible for people with disabilities. Using generative AI makes “plain language” that is not actually accessible. We hope anyone who writes in plain language will not use generative AI.
Here are the reasons we talk about in our statement: 
Generative AI changes what things mean
Plain language is an idea that is too new for generative AI to understand
Generative AI focuses on words, not ideas
Generative AI has discrimination built-in
Plain language is by and for disabled people
Read our full statement to learn more about each of these reasons.
Dark grey textured background. There is a blue “AI” with a red circle and line over it. The ASAN logo is at the bottom.
There’s a light grey box with the following text: ASAN does not use generative AI for plain language.
We hope people writing in plain language understand why you shouldn’t use generative AI for plain language. 
People writing in plain language should always talk to disabled people first. People with disabilities can give feedback to make stronger plain language papers. We do not need AI to do this work when we already know disabled people can. Nothing about us without us!
Reposted by The Grumpy Data Scientist
oxinabox.bsky.social
as a girl with a PhD in natural language processing and machine learning, it's actually offensive to me when you say "we don't know how LLMs work so they might be conscious"

I didn't spend 10 years in the mines of academia to be told ignorance is morally equal to knowledge.

We know exactly how LLMs work.
Reposted by The Grumpy Data Scientist
wolvendamien.bsky.social
"If it is inaccessible to the [marginalized/minoritized/oppressed], it is neither radical nor revolutionary."
Reposted by The Grumpy Data Scientist
boy-rat.bsky.social
I’m so fed up with seeing AI slapped on everything!

I spent 15 years building and running an AI fraud detection company

LLMs are NOT AI 🤖, not even remotely related. And as for Sam Altman's claim that they don’t fully understand how their system works… I’m happy to explain it to him using crayons 🖍️
zaqdelinguist.bsky.social
I’m curious: I have a hunch about the information-theoretic reasons why LLM outputs might trigger the positive assessments they do in some users, and about what it is in the model that produces the “sycophantic” behavior. Anyone we know have access to the transcripts OpenAI released in court?
zaqdelinguist.bsky.social
Remember kids, the next time he forgets his login credentials it’s CENSORSHIP BY thE TERRible LeFt
zaqdelinguist.bsky.social
I’m not convinced the brain worm is dead.
science.org
Federal officials said the FDA would approve a drug called leucovorin as “the first FDA-recognized treatment pathway for autism”—an unusual move for a pill only tested in a few small studies. https://scim.ag/48xOJRj
Trump’s autism initiative embraces little-tested vitamin as a treatment
FDA to approve leucovorin despite questions about whom it might help
scim.ag
zaqdelinguist.bsky.social
:( crunch time blues.

We salute you and your effort 🫡
zaqdelinguist.bsky.social
This. Not all peer reviewers treat authors with respect, but a majority of them do. And I will 100% own up to the fact that peer review has not only shored up my writing and occasionally my methods, but even got me to think about extensions and improvements for subsequent work.
meagan-g-phelan.bsky.social
During #PeerReviewWeek, @science.org
heard from individual authors who talked about how peer review strengthened their work, making it functionally richer, more accessible, more pointed regarding limitations--sometimes in collaboration with preprint review. See author posts in thread below. 🧵
Reposted by The Grumpy Data Scientist
enceladosaurus.bsky.social
I made the mistake of devoting myself to AI4SG early in my career because I wanted to believe I could use my skills to make the world a better place. Instead, I found exactly what Dr. Birhane describes - particularly with respect to accountability laundering. I'll share one example🧵
abeba.bsky.social
AI is the wrong tool to tackle complex societal & systemic problems. AI4SG is more about PR victories, boosting AI adoption (regardless of merit/usefulness) & laundering accountability for harmful tech, extractive practices, abetting atrocities. yours truly
www.project-syndicate.org/magazine/ai-...
The False Promise of “AI for Social Good”
Abeba Birhane refutes industry claims about the technology's potential to solve complex social problems.
www.project-syndicate.org
zaqdelinguist.bsky.social
Dunno if this would interest you, but you might like the book “Street Data” by Shane Safir. It was one of my favs last year and a pretty harsh look at how folks metricize learning (I’m unashamedly trying to get more folks to talk about that book with me).
zaqdelinguist.bsky.social
Oh man. I hope you realize that I’m going to devour this paper. And I may want to hit you up about a project some folks and I are doing at UCLA that’s pretty clearly related based on the abstract if you’d be interested in that.
zaqdelinguist.bsky.social
Gaddammit. I just sent a paper out for review on a related topic and am embarrassed I missed this paper 🙈
zaqdelinguist.bsky.social
Your paper is hitting a really important chord with folks on top of being well written and well argued? 🤓
zaqdelinguist.bsky.social
Exactly. You show up to these things and it’s basically a sales pitch/product demo.
aerialeverything.cryptoanarchy.network
This is why I have a deep suspicion of the "AI literacy" workshops being floated to profs by schools mainlining AI. Ed tech has co-opted "AI literacy" to mean "learn how to use our products" rather than "learn how AI works."
fabiochiusi.bsky.social
“people with lower AI literacy are typically more receptive to AI,"

"people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI's execution of tasks that seem to require uniquely human attributes."

futurism.com/more-people-...
zaqdelinguist.bsky.social
Freaking wish the UC board of regents would read this . . .
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why, in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
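To make the taxonomy in the Figure 1 caption above concrete, here is a minimal Python sketch of those set relations. Only the GAN and Boltzmann machine example (generative and ANN, the purple region) comes from the caption itself; every other membership below is an illustrative assumption, not a claim from the paper.

# A sketch of the set relations described in the Figure 1 caption above.
# Only the GAN / Boltzmann machine example (generative AND ANN) is taken from
# the caption; the remaining memberships are illustrative assumptions.
llms = {"BERT", "ChatGPT"}
anns = {"BERT", "AlexNet", "GAN", "Boltzmann machine"}
generative = {"GAN", "Boltzmann machine", "LDA", "QDA"}
chatbots = {"ELIZA", "A.L.I.C.E.", "Jabberwacky", "Siri", "ChatGPT"}

# The superset "AI" (black outline in the figure) contains all of the above.
ai = llms | anns | generative | chatbots

# Overlapping colours in the figure correspond to set intersections,
# e.g. the purple region holds models that are both generative and ANNs.
print(generative & anns)   # {'GAN', 'Boltzmann machine'}
print(chatbots - llms)     # chatbots that are not LLMs, e.g. ELIZA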