Dario Paape
@dariopaape.bsky.social
140 followers 93 following 85 posts
psycholinguistics @ Potsdam, Germany https://d-paape.github.io
Reposted by Dario Paape
manoelhortaribeiro.bsky.social
Computer Science is no longer just about building systems or proving theorems; it's about observation and experiments.

In my latest blog post, I argue it’s time we had our own "Econometrics," a discipline devoted to empirical rigor.

doomscrollingbabel.manoel.xyz/p/the-missin...
Reposted by Dario Paape
jenbanim.mastodo.neoliber.al.ap.brid.gy
If you mash the audio example buttons on the wikipedia page for the IPA vowel sounds you can create a choir of mildly disgusted men
dariopaape.bsky.social
I guess my intuition about an average is based on the idea that "poor" and "rich" should share a common scale, and a natural way of thinking about the statement would be to take the scale's midpoint as a point of reference, even if we're not doing arithmetic.
dariopaape.bsky.social
Maybe, though I'm not sure. Turns out that this part is actually not very important to the overall plot, which revolves around the fact that the bear is a cursed prince. But the daughter does marry the bear and the father ends up living in a castle.
dariopaape.bsky.social
So if someone is "X poor" and you promise to make them "X rich", is X the difference between their $$$ and some average, and you're promising to basically flip the sign of that difference? I want to understand precisely what the bear is offering here. #linguistics
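The sign-flipping reading above can be pinned down with a little arithmetic. If m is the reference average and w the person's wealth, their "poorness" is X = m − w, and making them "X rich" means moving them to m + X. A toy sketch of that reading (the function name and numbers are my own, purely illustrative):

```python
def make_x_rich(wealth: float, average: float) -> float:
    """Flip the sign of the person's distance from the average:
    someone X below the average ends up X above it."""
    x = average - wealth  # how "poor" they are relative to the average
    return average + x    # the bear's promise: now they are "X rich"

# Someone $70 below an average of $100 ends up $70 above it.
assert make_x_rich(30.0, 100.0) == 170.0
```

On this reading the promise is an involution: applying it twice returns the original wealth, and someone exactly at the average gains nothing.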
dariopaape.bsky.social
Can we accurately model reading time patterns by assuming that readers predict upcoming words, but that their memory of the sentence context that the prediction is based on is imperfect? New preprint led by Johan Hennert: osf.io/preprints/ps...
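The core idea of the preprint (predicting upcoming words from an imperfect memory of the context) can be caricatured in a few lines. This is my own toy sketch under the assumption that "imperfect memory" means words are randomly forgotten; it is not the authors' actual model, and all names here are made up:

```python
import math
import random

def noisy_surprisal(context, next_word, lm, erase_prob=0.2, samples=1000, rng=None):
    """Average surprisal (bits) of next_word over noisy versions of the
    context, where each context word is independently forgotten with
    probability erase_prob. `lm(context, word)` returns P(word | context)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        noisy = [w for w in context if rng.random() > erase_prob]
        total += -math.log2(lm(noisy, next_word))
    return total / samples

# A dummy "language model": prediction is sharp only if the cue word survives.
def toy_lm(context, word):
    return 0.9 if "dog" in context else 0.1

s = noisy_surprisal(["the", "dog"], "barked", toy_lm, erase_prob=0.5)
# s lies between the perfect-memory (~0.15 bits) and no-cue (~3.32 bits) extremes.
```

The qualitative point is that forgetting the cue word inflates average surprisal, which is the kind of mechanism that could produce distinctive reading-time patterns.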
Reposted by Dario Paape
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users: in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g., generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
Reposted by Dario Paape
edzitron.com
Newsletter: My 16,000 word opus - How To Argue With An AI Booster, a comprehensive guide to arguing with AI boosters, addressing both their bad faith debate style and their specific (and flimsy) arguments as to why generative AI is the future.

www.wheresyoured.at/how-to-argue...
dariopaape.bsky.social
Inscrutability of reference still going strong. I was totally assuming that the little guy in the picture was the nurdle and immediately went "How dare you!" #linguistics
dariopaape.bsky.social
There's probably a reason that these guys aren't placed right next to Hagebuttentee, Kamillentee and Pfefferminztee (German for rosehip, chamomile, and peppermint tea) #linguistics
dariopaape.bsky.social
I think this is the first time ever that I've seen this bracketing in the wild. [[pet [dog and cat]] poo] #linguistics
Reposted by Dario Paape
melaniemitchell.bsky.social
In a stunning moment of self-delusion, the Wall Street Journal headline writers admitted that they don't know how LLM chatbots work.
dariopaape.bsky.social
InStyle is perhaps one of the last places where I would have looked for linguistic pedantry. The temptation to write "1 Preis" ("Eins Preis", German for "one price") must have been enormous. #linguistics
dariopaape.bsky.social
They turned the internet into an IRL magazine and are selling it for 9 Euros
Reposted by Dario Paape
tmalsburg.bsky.social
I'm offering a 3-year PhD position with benefits (1-year extension possible). Research topic open but broadly in incremental sentence comprehension. If you're into eye-tracking, even better! No teaching until Summer 2027, light teaching after that (English). Official ad soon. Please share 🙏
dariopaape.bsky.social
Wondering if the increased frequency of these kinds of headlines makes people better at processing double negation. "Trump NOT allowed to NOT allow X" #linguistics
dariopaape.bsky.social
arstechnica.com/ai/2025/06/n...

The really interesting part is that the LLM (or LRM or whatever) enthusiasts quoted in the article defend the models' "reasoning" by basically saying that it's actually somehow clever of them to be wrong
dariopaape.bsky.social
If I understand correctly, the "bright" part is that they don't (currently) intend to weaponize the thing - it could actually serve as a source of "clean" energy.
dariopaape.bsky.social
We do too! 😉 And no, we will not be doing rolling reviews, but we wanted to give people plenty of time to prepare their submissions before the deadline.
dariopaape.bsky.social
Yes, you're probably right. It's only supposed to answer IDK if *nobody knows*, I guess. But one could still generate artificial data for those gaps in principle. Also, Titus's question was only how to make LLMs say "I don't know", so my answer stands. ;)
dariopaape.bsky.social
So basically there's only positive evidence, no negative evidence of knowledge gaps. Maybe generate lots of artificial data with "I don't know" responses and see what happens? I can't be the first one to come up with this.
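The suggestion above, generating artificial "I don't know" training data to give the model explicit negative evidence, might be sketched like this. Purely illustrative: the function, the field names, and the question template are my own invention, not an established recipe:

```python
import json
import random

def make_idk_examples(known_facts, fake_entities, n=2, rng=None):
    """Build fine-tuning pairs: answerable questions get their real answer,
    while questions about made-up entities get an 'I don't know' target,
    supplying explicit negative evidence of knowledge gaps."""
    rng = rng or random.Random(0)
    examples = []
    for entity, answer in known_facts.items():
        examples.append({"prompt": f"What is {entity}?", "completion": answer})
    for entity in rng.sample(fake_entities, n):
        examples.append({"prompt": f"What is {entity}?",
                         "completion": "I don't know."})
    return examples

data = make_idk_examples(
    {"the capital of France": "Paris"},
    ["a flurbovine", "the Grand Zextet", "Qorvath's lemma"],
)
print(json.dumps(data, indent=2))
```

Whether a model fine-tuned on such pairs would generalize "I don't know" to genuinely unknown questions, rather than just memorizing the fake entities, is exactly the open empirical question.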