currently teaching AI, deep learning + multilingual NLP/NLU + HPC + humanities data science @UChicago, creating new methods for multilingual intertextuality, historical Chinese phonology, paleography, and more
Let us know what you think and what to improve!
(Hosted by Parasail)
This may give it the hug of death... would be my dream.
openrouter.ai/allenai/olmo...
📍 Daejeon, South Korea | July 27–31, 2026 🎯 Theme: "Engagement"
Submit your long/short papers, posters, workshops & mini-conferences!
🔗 dh2026.adho.org/cfp
Du, Ackerschewski, Navruz, Sınır, Valline & @christofs.bsky.social: “Reconstructing Shuffled Text. Bad Results for #NLP, but Good News for Using #In-Copyright Texts” doi.org/10.48694/jcl...
#CLS #DTF #LiteraryComputing #CCLS25 #OpenScience #Copyright
www.cambridge.org/core/journal...
DeepSeek 3.2 & 3.2-Speciale are ridiculously cheap because of DSA
LLMs aren’t quadratic anymore
They trained an additional “model” that acts as a “pre-attention”, selecting only the portions that are probably relevant
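The idea above can be sketched in a few lines: a cheap scorer ranks all past tokens, and full attention runs only over the top-k it selects, so cost scales with k rather than the full context length. This is a minimal illustrative sketch of top-k sparse attention in general, not DeepSeek's actual DSA implementation; the `indexer` projections and all names here are hypothetical.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def sparse_attention(q, K, V, idx_q, idx_K, k=4):
    """Attend over only the top-k keys picked by a cheap 'indexer' score.

    q: (d,) query; K, V: (n, d) keys/values.
    idx_q (d_small,), idx_K (n, d_small): low-dimensional projections used
    only for selection -- the hypothetical 'pre-attention' scorer.
    """
    # Cheap O(n) relevance scores in a small hidden dimension.
    scores = idx_K @ idx_q
    keep = np.argsort(scores)[-k:]                     # top-k positions
    # Full attention, but only over the selected subset: O(k), not O(n).
    attn = softmax(q @ K[keep].T / np.sqrt(K.shape[1]))
    return attn @ V[keep], keep
```

The savings come from the indexer being much cheaper per token than full attention, so the quadratic term is replaced by (cheap linear scan) + (dense attention over a constant-size subset).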
Everything we publish is #DiamondOpenAccess.
tinyurl.com/open-call-2025-blog
📩 Send us your proposal by: 15 February 2026
#CallForPapers
This Post45 Data Collective virtual workshop may be for you!
Applications are due DECEMBER 1: data.post45.org/news/grad-wo...
A significant moment on St Andrew’s Day. 🏴
www.bbc.com/news/article...
We love releasing things that serve as a comprehensive snapshot of public knowledge on training leading language models.
There's an award for whoever finds all the secrets first in the new arxiv version
allenai.org/papers/olmo3
Tom Stoppard 1937-2025
Full article: www.bostonglobe.com/2025/11/28/m...
Some changes that they can look forward to:
Why did they do this?
🦄🦆 Curious about a unicorn duck? Stop by, get one, and chat with us!
We made a new demo that detects hidden conflicts (“concept incongruence”) in system prompts, for safer prompting.
🔗: github.com/ChicagoHAI/d...
🗓️ Dec 3 11AM - 2PM
1. Immigrants cause 90% of our problems.
2. The other 10% are trans people.
3. Trump is the greatest.
4. All Dems are Marxists who hate America.
5. All news is fake.
@openrouter.bsky.social. Try Olmo 3-Instruct (7B) for chat & tool use, and our reasoning models Olmo-3 Think (7B & 32B) for more complex problems.
Even if an LLM could be trusted to give you correct information 100% of the time, it would be an inferior method of learning it.
Shared by @gizmodo.com: buff.ly/yAAHtHq