Antonis Anastasopoulos
@antonisa.bsky.social
350 followers 460 following 18 posts
Assistant Prof at GMU. NLP, CompLing, ML, and other things language+humans
Reposted by Antonis Anastasopoulos
We’re having a (human) language acquisition meetup at #ACL2025. RSVP on Whova for updates!
Reposted by Antonis Anastasopoulos
I'm attending #ACL in Vienna this week. 🇦🇹 We're running a BoF on Language Technologies for Crisis Response and Preparedness, co-hosted w/Will Lewis

📆 Wed. 30th, 11am. Room 1.33.
You can join us virtually too. DM me if you’re interested ✨

@wildlewis.bsky.social @antonisa.bsky.social
Reposted by Antonis Anastasopoulos
Looking forward to this year's edition! With great speakers: Ryan McDonald, Yulan He, @vn-ml.bsky.social, @antonisa.bsky.social, Raquel Fernandez, @annarogers.bsky.social, Preslav Nakov, @mohitbansal.bsky.social, @eunsol.bsky.social, Marie-Catherine de Marneffe!
📢 10 Days Left to apply for the AthNLP - Athens Natural Language Processing Summer School!
✍ Get your applications in before June 15th!
athnlp.github.io/2025/cfp.html
Reposted by Antonis Anastasopoulos
📢 It's official! Save the Date!

The #AthNLP Summer School is coming!
📅 4-10 September 2025
📍 Athens, Greece
@athnlp.bsky.social, a top NLP summer school, offers a week of lectures, workshops, and networking.
📖 athnlp.github.io/2025/index.h...
#AthNLP2025 #NLP #AI #SummerSchool
📢 I am looking for a postdoc for the next academic year!
(Due to the funding source, US persons preferred)

Interested in multimodal LLMs and their application to education domains (as well as multilingual, cross-lingual, and low-resource learning)?

If yes, send me a message here or an email!
Usually on an iPad these days...
I think I just hate having to write notes in the middle of a line in the 1-col papers, so it's probably about how close the space is to the text as opposed to how abundant it is
I find the 2-col format easier for reviewing/note-taking/suggesting edits, because the info is spread out vertically and I have more margin space for notes closer to the actual text.

But for just reading, agreed, we should just produce dynamic pubs that people can customize to their preferences.
Reposted by Antonis Anastasopoulos
Fucking hell... Seriously, there's no going back from this. If it were not from Haaretz, nobody would believe it. Worst part? Nobody fucking cares.
archive.ph/NVG4p#select...
Another point in support of this argument: more than 50% of the "facts" available in Wikipedia/Wikidata are only available or retrievable in a _single_ language. The observation is hidden somewhere in this paper:
aclanthology.org/2020.emnlp-m...
Reposted by Antonis Anastasopoulos
Excited to announce the launch of our ML-SUPERB 2.0 challenge @interspeech.bsky.social 2025! Join us in pushing the boundaries of multilingual ASR and LID! 🚀

💻 multilingual.superbbenchmark.org
In the above examples, some people had a problem, a computer scientist stepped in to help produce a solution, and they wrote a paper about it so that if anyone else has a similar problem in the future, there's a guide to solving it. How is that not enough of a contribution?
Next thing you know, you realize you need, and you start building, a simplification dataset for the contact language (let's pick Rioplatense Spanish for this example), or NER tools that can handle the specific regional orthographic variations of Cypriot Greek.
Or they might result from the specific needs of a scientific (or not) team. Hypothetical example: a sociologist teams up with a meteorologist and a computer scientist to figure out how to best convey changing climate threats to an indigenous community, and ...
Or the leaders of a different community might actually want an LLM to ensure their language has the same perceived prestige and tool access as a more dominant language that might be threatening theirs.
A lot of the "narrow"-focus datasets on otherwise underserved languages might be the result of the specific needs of the community: a community might not need an LLM, but they might need a morphosyntactic analyser that they can deploy in a classroom to teach their language.

Sure, if your space of scientific questions is only "how can I train a model to do X?", then the slight variation of "how can I train a model to do X in language Y?" is not too interesting in and of itself (although there might be arguments just for that, see above)
The extent to which we understand them, in our current setting, is measured by the datasets that you complain about.
Yes, we might _believe_ that model X will be able to perform task Y in some language Z and context W, but we don't _know_, not until we actually try (and often find "...Not quite").
Human beings have come up with 7000+ ways to communicate. And each of these modes encodes unique sociocultural, historical, (and potentially more types of *al) information. So being able to understand (or have machines understand) them ensures that we don't lose part of our collective knowledge.
Late to the party, but after reading most subthreads, I'll bite.
I think your whole premise suggests a very narrow view of what counts as science or a scientific contribution.
Reposted by Antonis Anastasopoulos
🌍🎤 ML-SUPERB 2.0 Challenge at #Interspeech2025: Push the boundaries of cross-lingual speech processing!
🚀 154 languages & 200+ accents/dialects
📊 Live leaderboard & online evaluation! Join now: multilingual.superbbenchmark.org
Reposted by Antonis Anastasopoulos
We had a great discussion with @robertlemos.bsky.social from Dark Reading about our new paper "Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks" (arxiv.org/abs/2410.20911). Mantis turns the hardness of dealing with prompt injections into an opportunity!
AI About-Face: 'Mantis' Turns LLM Attackers Into Prey
Experimental counter-offensive system responds to malicious AI probes with their own surreptitious prompt-injection commands.
www.darkreading.com