Adam Brandt
@adambrandt.bsky.social
740 followers 200 following 40 posts
Senior Lecturer in Applied Linguistics at Newcastle University (UK). Uses #EMCA to research social interaction, and particularly how people communicate with (and through) technologies, such as #conversationalAI.
Reposted by Adam Brandt
telescoper.bsky.social
A meme for the modern university...
Meme showing a worker labelled "academic staff" digging a hole in the ground while ten others, labelled with management titles such as "Director of Human Resources", look on. The caption underneath reads "The only way we can cut costs is to reduce the number of academic staff..."
Reposted by Adam Brandt
lizstokoe.bsky.social
#EMCA folks - if you're working on #chatbots or other conversational technologies and are in London on 16th July, this event looks fantastic!

#Conversational #AI Meetup London, Weds 16th July, 18.00.

Including a talk on LLM-Based conversational agents in hospital robots

🔗 lu.ma/5zzqtt33
Image of event poster with speakers 

18:15 – 18:45 | Angus Addlesee — Applied Scientist at Amazon
🤖 Deploying an LLM-Based Conversational Agent on a Hospital Robot
Real-world challenges and insights from deploying a voice-enabled robot in a clinical setting.

18:50 – 19:15 | Tatiana Shavrina, PhD — Research Scientist at Meta
🔬 From conversational AI to autonomous scientific discovery with AI agents
An overview of current challenges and new opportunities for LLMs and LLM-based agents.

19:15 – 19:30 | Short Break ☕

19:30 – 19:55 | Lorraine Burrell — Conversation Design Lead at Lloyds Banking Group
Talk TBA

20:00 – 20:30 | Alan Nichol — Co-founder & CTO at Rasa
🔧 Why Tool Calling Breaks Your AI Agents—and What to Do Instead
Explore the pitfalls of tool use in agent design and how to avoid them.
adambrandt.bsky.social
🚨Save the date!🚨
🗣️Spread the word! 🗣️

The next ICOP-L2 conference will be held at Newcastle University on 🗓️24-26 August 2026🗓️
icopl2.org

More details on plenary speakers, workshops, the call for abstracts, and more, coming soon!

#EMCA #L2interaction
adambrandt.bsky.social
Thank you Liz! ❤️
adambrandt.bsky.social
Super interesting (and useful) Special Section of ROLSI on all things ethics and data collection for #EMCA research.

Well done (and thank you!) to all involved in putting this together.
rolsi-journal.bsky.social
A run through of the articles in the Special Section on "The Ethics of Collecting, Curating and Sharing Data in Conversation Analysis", now online (1st of 6 posts)

#EMCA

1. Joyce et al
www.tandfonline.com/doi/full/10....
The title and abstract of Joyce et al., "Accessing and using data without informed consent: guiding principles from conversation analysis", in ROLSI 2025
Reposted by Adam Brandt
adambrandt.bsky.social
Please read and kindly consider signing in support of academics at Newcastle University (including from our team in Applied Linguistics & Communication) who are facing the threat of redundancy this summer:
www.change.org/p/end-unnece...
Sign the Petition
End unnecessary redundancies at Newcastle University
www.change.org
Reposted by Adam Brandt
dote.bsky.social
We are looking for CLAN and ELAN users interested in converting 1 or 2 transcripts to the DOTE format. We have tested a Python script the last couple of days - and it would be interesting to try with some "real" data. Please get in touch. #DOTE #ELAN #CLAN #transcription #EMCA #VIDEO
adambrandt.bsky.social
This sounds fantastic!
charlesantaki.bsky.social
Wondered about (or goggled at) how politicians use statistics?

Michael Billig and Cristina Marinho have a new book out on just that subject, and it's a cracker.

www.cambridge.org/core/books/p...
Politicians Manipulating Statistics
Cambridge Core - Politics: General Interest - Politicians Manipulating Statistics
www.cambridge.org
Reposted by Adam Brandt
dingemansemark.bsky.social
A year ago our faculty commissioned & adopted guidance on GenAI and research integrity. Preamble below, pdf at osf.io/preprints/os..., text also at ideophone.org/generative-a...

Key to these guidelines is a values-first rather than a technology-first approach, based on NL code of research conduct
Preamble

All research at our institution, from ideation and execution to analysis and reporting, is bound by the Netherlands Code of Conduct for Research Integrity. This code specifies five core values that organise and inform research conduct: Honesty, Scrupulousness, Transparency, Independence and Responsibility.

One way to summarise the guidelines in this document is to say they are about taking these core values seriously. When it comes to using Generative AI in or for research, the question is if and how this can be done honestly, scrupulously, transparently, independently, and responsibly.

A key ethical challenge is that most current Generative AI undermines these values by design [3–5; details below]. Input data is legally questionable; output reproduces biases and erases authorship; fine-tuning involves exploitation; access is gated; versioning is opaque; and use taxes the environment.

While most of these issues apply across societal spheres, there is something especially pernicious about text generators in academia, where writing is not merely an output format but a means of thinking, crediting, arguing, and structuring thoughts. Hollowing out these skills carries foundational risks.

A common argument for Generative AI is a promise of higher productivity [5]. Yet productivity does not equal insight, and when left unchecked it may hinder innovation and creativity [6, 7]. We do not need more papers, faster; we rather need more thoughtful, deep work, also known as slow science [8–10].

For these reasons, the first principle when it comes to Generative AI is to not use it unless you can do so honestly, scrupulously, transparently, independently and responsibly. The ubiquity of tools like ChatGPT is no reason to skimp on standards of research integrity; if anything, it requires more vigilance.
adambrandt.bsky.social
My reading of it would definitely be Alexa's first reading (we use 'us' that way round these parts, and I can imagine someone saying this with this meaning here, although can't say it's common).
adambrandt.bsky.social
I’m sorry you had that experience, but “No cabs” is a beautiful, almost poetic, ending (for us as readers - hope you didn’t end up having to walk!).
adambrandt.bsky.social
No we didn’t, but that’s a lesson learned for next time.
Reposted by Adam Brandt
jksteinberger.bsky.social
As promised, here are the slides I shared with students to convince them to NOT use chatGPT and other artificial stupidity.

TL;DR? AI is evil, unsustainable and stupid, and I'd much rather they use their own brains, make their own mistakes, and actually learn something. 🪄
NO CHATGPT Or other artificial stupidity: motivation
First, clarity on distinguishing AIs:
Non-generative: grammar aid, translation, dictionary, text-to-audio (e.g. Natural Reader): no problem
As long as you use the appropriate tools (least intensive in data and server energy use).
Why? Because you provide the content. Your brain is doing the most important work
Generative: ChatGPT & Co. 
You only supply the prompt, the AI supplies the content.
Why is this delegation of work problematic?
3 domains: ethical, environmental, intellectual engagement.

(Caveat: generative is probably ok for computer programming, where it can be useful and save time. Not relevant to this class.)
1) AI and ethics
Mass theft of anything and everything:
«learning» on books, articles, blogs, social media, images, music and cultural production, without the permission of authors/creators, and leading to their mass joblessness. Profits are not redistributed to originators.
Permanent destruction of the mental health of underpaid, precarious tech workers in the Global South (Kenya, Philippines …):
«correction» to avoid the production of violent and paedophilic content etc.: tech workers are obliged to watch and correct extremely violent content for days on end, leading to severe psychological suffering and trauma, from which recovery is doubtful. No or little compensation (certainly not at the level of the suffering inflicted).
In short, an industry built on theft of real human creation and sacrifice of real human health, profiting a few megafortunes.
2) AI and (un)sustainability

Massive consumption of electricity, water, server capacity for generative AI. 
Outcome: keep fossil fuel companies in business, using up new renewable capacity, without any satisfaction of basic human needs.
Massive misappropriation of the finance necessary for climate and ecological action (renewable generation, efficiency and retrofit for buildings, public transit, infrastructures for cycling etc) towards AI industry. 
Overall: undermine climate action, reinforce fossil industry, waste resources necessary for human development. 
3) AI and intellectual engagement

First, what learning is (or should be) about:
The goal should not (only) be the reproduction of «correct» knowledge,
But mainly personal engagement and experience of thinking about topics of interest. Personal engagement = using one’s own brain. 
The most important activity for learning and intellectual engagement is the experience of making one’s own mistakes, by trial and error, corrections based on new ideas, starting over again. Learning to recognise nuances, knowledge gaps, better explanations 
This kind of learning is possible only through using your own brain, not AI. 
Also, AIs are not «intelligent». At all.
They simply reproduce pre-existing patterns. They «bullshit»: they invent false references, false facts and false data, simply because they sound plausible. VERY DANGEROUS.
If you learn how to NOT use AI, and how to research facts and data on your own, this will serve you and your communities for the rest of your life.
adambrandt.bsky.social
Our ‘Late Breaking Work’ submission for CHI2025 in Yokohama has sadly been rejected. Some positive comments from reviewers, but rejected on the grounds of not enough statistical data, lack of details about ethical approval, and lack of detail about #EMCA analytic process (is it thematic analysis?) 🤦🏻‍♂️
adambrandt.bsky.social
Aside from the obvious quality of observation and argument, I’m always impressed by the work of #LSE (and Liz!) in how they present their ideas in an engaging and interesting way, for all audiences. If only other institutions aspired to such standards of academic engagement.
lizstokoe.bsky.social
Can #AI understand 'human distress'? And where might a conversation analytic answer to this question start?

#LSE is publishing a new series of video shorts on the impact of AI technology and it's great to have #EMCA in it!

youtu.be/rwBgZxmeCDM?...

🧵 1/3

@lsepbs.bsky.social
Can AI understand distress? The hidden signals in emergency calls | LSE Research
YouTube video by LSE
youtu.be
adambrandt.bsky.social
What a brilliant new edition of the ISCA newsletter - a nice reminder that there are so many fantastic conferences and seminars covering many areas of #EMCA / #ILEMCA. Looking forward to seeing what 2025 brings our way!

Direct link to the newsletter:
www.conversationanalysis.org/members-foru...
Reposted by Adam Brandt
drdeclankavanagh.bsky.social
Hats off to anyone in UK academia right now who is turning up for work, marking essays, meeting students, giving lectures, holding seminars, being there for colleagues and also facing the threat of redundancy, voluntary or compulsory. It’s a grim and surreal time #UKhigherEd
adambrandt.bsky.social
Another excellent-sounding conference, aiming to bring together language researchers and people from the tech industry.
Abstract submission deadline tomorrow:
sites.google.com/view/humans-...
Humans, Machines, Language - 2025 conference
Find us on the Sociolinguistic Events Calendar: https://baal.org.uk/slxevents/
sites.google.com
adambrandt.bsky.social
Thank you for sharing these, Gene - they are wonderful.
In this one, aside from your fantastic and generous explanation, I am struck by the intelligence and curiosity in the student's email (alongside their wonderful formulations - 'what is going down', 'throw out a research project idea').
adambrandt.bsky.social
Well that's certainly another way, although the advantages are probably more narrow than the other two.
adambrandt.bsky.social
Hoping it's a case of both of the above for you, Charles!