Zoltan P Majdik
@zoltanmajdik.bsky.social
Studying computational approaches to communication and rhetoric | Department of Communication @NDSU | @USCAnnenberg alum | 🇨🇭 then 🇺🇸
Many thanks to @stonybrooku.bsky.social and Roger Thompson for asking me to speak on aligning AI/language models with civic-deliberative values. I still don’t know what a Seawolf is, but at least I now know that it’s a thing.
April 15, 2025 at 6:09 PM
I’m a sucker for vector embeddings and I’m not ashamed of it. Love this kind of research: aclanthology.org/2022.sdp-1.7/
Incorporating the Rhetoric of Scientific Language into Sentence Embeddings using Phrase-guided Distant Supervision and Metric Learning
Kaito Sugimoto, Akiko Aizawa. Proceedings of the Third Workshop on Scholarly Document Processing. 2022.
March 24, 2025 at 12:26 AM
Reposted by Zoltan P Majdik
Special 25th anniversary issue of POROI, reprinting a dozenish of its most influential articles. Very pleased to be between its covers with the likes of @leahcecc.bsky.social, @creekthinker.bsky.social, @rhetorologist.bsky.social, and @zoltanmajdik.bsky.social
pubs.lib.uiowa.edu/poroi/issue/...
POROI | Issue 1(19): POROI’s First 25 Years: Mapping the Past, Charting the Future (2025)
March 21, 2025 at 7:23 PM
Reposted by Zoltan P Majdik
Wow, it would be great if there were domain-specific formal architectures, such as ontologies and other knowledge organization systems, which could verify ground truth for data! Oh wait, hooray, there is! www.isko.org/cyclo/ontolo...
January 29, 2025 at 1:49 PM
I'm re-reading Walter Ong for a graduate seminar on communication, language, and AI, and it's striking how relevant his work remains. Love this: "Technologies are artificial, but -- paradox again -- artificiality is natural to human beings."
January 23, 2025 at 3:45 PM
Reposted by Zoltan P Majdik
people argue whether mathematics is a social product or is free from human contingencies. here is a sense in which maths is a social construct.
this 1920 book of log & trig tables has values for an important trig function, the haversine. yet you've probably never heard of it. #MathSky
December 17, 2024 at 5:47 PM
100% this.
This footnote (that I mean "teaching students *to build*") is kind of critical. Otherwise it won't be clear why the whole terrain of immediately pragmatic pedagogical discussion (how can we best integrate AI into existing classes) seems to me not to be a plan that responds to the medium-term threat.
December 15, 2024 at 8:29 PM
⬇️⬇️
[email protected] and I co-wrote this piece on why you are (probably) doing AI criticism wrong as a humanist and why we, in Critical AI Studies, need to do better methodologically than 'So I asked ChatGPT a question and now I have thoughts...'
Now available on arXiv at doi.org/10.48550/arX...
December 3, 2024 at 7:19 PM
Interesting example of auditing the rhetoricity of LLMs, with an unsurprising but nonetheless noteworthy insight:

"Interestingly, the Deceptive strategy, which allowed the model to fabricate information, was found to be the most persuasive overall."

www.anthropic.com/news/measuri...
Measuring the Persuasiveness of Language Models
Anthropic developed a way to test how persuasive language models (LMs) are, and analyzed how persuasiveness scales across different versions of Claude.
November 30, 2024 at 7:53 PM
Reposted by Zoltan P Majdik
New paper: we confirm that works of fiction and nonfiction that were ahead of the curve textually also get more citations/attention. Intriguingly, what matters is not the average level but the most innovative/forward-looking *part* of the text. We did not expect to find that about fiction. +
There are many ways to identify texts that seem ahead of their time. Our CHR 2024 paper asks which measures of textual precocity align best with social evidence about influence and change.
November 27, 2024 at 2:51 PM
If the word “disrupt” hadn’t jumped the shark yet, this would be the moment.
Why? Who is this meant to be good for? Nobody wants your shit AI books, never mind 8,000 in a year. Just a terrible idea and a horrible road the publishing industry is heading down.
November 25, 2024 at 8:11 PM
Back at the NCA conference in New Orleans, where @sscottgraham.bsky.social and @colleenderkatch.bsky.social and I drank _all_ the Sazeracs just two years ago.
November 20, 2024 at 12:39 AM
Just returned from presenting at one of the most intellectually and socially stimulating conferences, on rhetoric and AI, hosted by the University of Tübingen’s RHET AI Center and the Max Planck Institute for Intelligent Systems. Amazing experience with great hosts and contributors.
November 19, 2024 at 7:02 PM
About to land in Denver for a short talk and roundtable discussion on distant reading and language modeling. In which we’ll start peeking into the blackbox.

[Citations for graphs still to be added.]
May 22, 2024 at 4:06 PM
Finally out!! I lost count of how often I wanted to reference this over the last few months of research:
May 16, 2024 at 6:32 PM
“The social ills of computing will not go away simply by integrating more ethics instruction or codes of conduct into computing curricula… move the academic discipline of computing away from engineering-inspired curricular models.” 🧪

cacm.acm.org/research/why...
Why Computing Belongs Within the Social Sciences – Communications of the ACM
March 15, 2024 at 2:43 PM
I'm going to publish a paper that consists of nothing but ChatGPT language artifacts. It'll be known as the So-HAL Hoax.
Checking Google Scholar for recent articles with the phrase "certainly, here is a":
March 14, 2024 at 6:38 PM
I know I'm late to the game, but GitHub Copilot integrated into an IDE is scary good. It can take my poorly written code with its hacky logic and answer my not particularly well-defined questions about how to add some new function. Truly impressed.
March 13, 2024 at 10:07 PM
“Anthropic will use AWS Trainium and Inferentia chips to build, train and deploy its future foundation models.”

Apparently, AI/deep learning chip makers are taking naming cues from cheap OTC drugs and vitamins.
March 12, 2024 at 2:20 PM
This post is a great example of why open models (and transparency about training data/procedures) are so important. Is Claude's ability to translate Circassian impressive? Maybe -- but there's no way to know unless we can see what's in the training set. >> www.reddit.com/r/singularit...
March 6, 2024 at 5:00 PM
While everyone is talking about Sora, this move toward small and easily malleable models might be more exciting in the long run: www.axios.com/2024/02/21/g...
February 21, 2024 at 2:57 PM
Worst time for a visual migraine+aura: the dotting i/crossing t/11pt font/deadline imminent/grants.gov final checks phase of a grant proposal submission.
January 4, 2024 at 5:28 PM
locusmag.com/2023/12/comm... | It’s not a bad article, but some of the statements are just so eye-poppingly off the mark:

“The largest of these models are incredibly expensive. They’re expensive to make, with billions spent acquiring training data, labelling it […]”
December 23, 2023 at 4:48 PM
This is one of those "why didn't *I* think of doing it??" papers. Deceptively simple in hindsight, but takes a lot of creativity to actually make happen.
Need to adjust language models to reflect a particular year or period? Researchers at UW show that you can do this in a ridiculously straightforward way. Generate “time vectors”; then interpolate or (um… ) extrapolate them! huggingface.co/papers/2312.... #MLSky
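The interpolation trick can be sketched in a few lines. This toy example is my own, not the authors' code: it assumes a "time vector" is the per-parameter weight delta between a model fine-tuned on one year's text and its base model, and that blending two such deltas approximates an intermediate year. Function names (`time_vector`, `interpolate`, `apply_vector`) are hypothetical.

```python
import numpy as np

def time_vector(finetuned, base):
    """Per-parameter delta between a year-specific fine-tune and its base model."""
    return {name: finetuned[name] - base[name] for name in base}

def interpolate(tv_a, tv_b, alpha):
    """Linear blend of two time vectors; alpha in [0, 1] slides between the years."""
    return {name: (1 - alpha) * tv_a[name] + alpha * tv_b[name] for name in tv_a}

def apply_vector(base, tv):
    """Add a (possibly interpolated) time vector back onto the base weights."""
    return {name: base[name] + tv[name] for name in base}

# Toy weights standing in for real model checkpoints (one 'layer' each):
base = {"w": np.array([0.0, 0.0])}
model_2015 = {"w": np.array([1.0, 0.0])}
model_2019 = {"w": np.array([0.0, 1.0])}

tv_2015 = time_vector(model_2015, base)
tv_2019 = time_vector(model_2019, base)

# Interpolate halfway to stand in for an unseen intermediate year:
model_2017 = apply_vector(base, interpolate(tv_2015, tv_2019, 0.5))
print(model_2017["w"])  # [0.5 0.5]
```

Extrapolation is the same arithmetic with alpha pushed outside [0, 1], which is where the poster's "um…" caveat comes in.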
December 22, 2023 at 10:33 PM
Reposted by Zoltan P Majdik
you: the existence of a "holy infant, so tender and mild" implies the existence of a crispy spicy infant

jacques derrida, spiking his phone like a football: yes! yes! the world finally understands!!
December 9, 2023 at 3:23 PM