Srishti
@srishtiy.bsky.social
290 followers 430 following 14 posts
ELLIS PhD Fellow @belongielab.org | @aicentre.dk | University of Copenhagen | @amsterdamnlp.bsky.social | @ellis.eu Multi-modal ML | Alignment | Culture | Evaluations & Safety| AI & Society Web: https://www.srishti.dev/
Pinned
srishtiy.bsky.social
I am excited to announce our latest work 🎉 "Cultural Evaluations of Vision-Language Models Have a Lot to Learn from Cultural Theory". We review recent works on culture in VLMs and argue for deeper grounding in cultural theory to enable more inclusive evaluations.

Paper 🔗: arxiv.org/pdf/2505.22793
Paper title "Cultural Evaluations of Vision-Language Models Have a Lot to Learn from Cultural Theory"
Reposted by Srishti
rnv.bsky.social
Happy to share that our work on multi-modal framing analysis of news was accepted to #EMNLP2025!

Understanding news output and embedded biases is especially important in today's environment, and it's imperative to take a holistic look at it.

Looking forward to presenting it in Suzhou!
rnv.bsky.social
🚨New pre-print 🚨

News articles often convey different things in text vs. image. Recent work in computational framing analysis has analysed the article text but the corresponding images in those articles have been overlooked.
We propose multi-modal framing analysis of news: arxiv.org/abs/2503.20960
Reposted by Srishti
iaugenstein.bsky.social
🎓 Looking for PhD opportunities in #NLProc for a start in Spring 2026?

🗒️ Add your expression of interest to join @copenlu.bsky.social here by 20 July: forms.office.com/e/HZSmgR9nXB

Selected candidates will be invited to submit a DARA fellowship application with me: daracademy.dk/fellowship/f...
Reposted by Srishti
delliott.bsky.social
📣 I am happy to support Ph.D. applications to the Danish Advanced Research Academy. My main areas of research include multimodal learning and tokenization-free language processing. Feel free to reach out if you have similar interests! Applications due August 29 www.daracademy.dk/fellowship/f...
Reposted by Srishti
belongielab.org
Congratulations Andrew Rabinovich (PhD ‘08) on winning the Longuet-Higgins Prize at #CVPR2025! (1/2)
Reposted by Srishti
serge.belongie.com
My favorite part of going to conferences: @belongielab.org alumni get-togethers! A big thank you to Menglin for coordinating the lunch at @cvprconference.bsky.social 🙏

Left: Tsung-Yi Lin, Guandao Yang, Katie Luo, Boyi Li; Right: Menglin Jia, Subarna Tripathi, Ph.D., Srishti, Xun Huang
Reposted by Srishti
vlms4all.bsky.social
Panel talk happening right now at @vlms4all.bsky.social ! Come join us at #CVPR25 (room: 104E)
Reposted by Srishti
lchoshen.bsky.social
🚀 Technical practitioners & grads — join to build an LLM evaluation hub!
Infra Goals:
🔧 Share evaluation outputs & params
📊 Query results across experiments

Perfect for 🧰 hands-on folks ready to build tools the whole community can use

Join the EvalEval Coalition here 👇
forms.gle/6fEmrqJkxidy...
[EvalEval Infra] Better Infrastructure for LM Evals
Welcome to EvalEval Working Group Infrastructure! Please help us get set up by filling out this form - we are excited to get to know you! This is an interest form to contribute/collaborate on a research project, building standardized infrastructure for AI evaluation.

Status Quo: The AI evaluation ecosystem currently lacks standardized methods for storing, sharing, and comparing evaluation results across different models and benchmarks. This fragmentation leads to unnecessary duplication of compute-intensive evaluations, challenges in reproducing results, and barriers to comprehensive cross-model analysis.

What's the project? We plan to address these challenges by developing a comprehensive standardized format for capturing the complete evaluation lifecycle. This format will provide a clear and extensible structure for documenting evaluation inputs (hyperparameters, prompts, datasets), outputs, metrics, and metadata. This standardization enables efficient storage, retrieval, sharing, and comparison of evaluation results across the AI research community. Building on this foundation, we will create a centralized repository with both raw data access and API interfaces that allow researchers to contribute evaluation runs and access cached results. The project will integrate with popular evaluation frameworks (LM-eval, HELM, Unitxt) and provide SDKs to simplify adoption. Additionally, we will populate the repository with evaluation results from leading AI models across diverse benchmarks, creating a valuable resource that reduces computational redundancy and facilitates deeper comparative analysis.

Tasks? As a collaborator, you would be expected to:
- Work towards merging/integrating popular evaluation frameworks (LM-eval, HELM, Unitxt)
- Group 1 - Extend to Any Task: Design universal metadata schemas that work for ANY NLP task, extending beyond current frameworks like lm-eval/DOVE to support specialized domains (e.g., machine translation)
- Group 2 - Save the Relevant: Develop efficient query/download systems for accessing only relevant data subsets from massive repositories (DOVE: 2TB, HELM: extensive metadata)

The result will be open infrastructure for the AI research community, plus an academic publication.

When? We're looking for researchers who can join ASAP and work with us for at least 5 to 7 months. We are hoping to find researchers who would take this on as an active project (8+ hours/week) in this period.
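A minimal sketch of what such a standardized evaluation record could look like. This is purely illustrative: the class and field names below are assumptions for the sake of example, not the working group's actual schema.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical evaluation record covering the lifecycle pieces named above:
# inputs (hyperparameters, prompts), outputs (metrics), and metadata.
@dataclass
class EvalRecord:
    model: str                                      # model identifier
    benchmark: str                                  # benchmark / task name
    framework: str                                  # e.g. "lm-eval", "HELM", "Unitxt"
    hyperparameters: dict = field(default_factory=dict)
    prompts: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)
    metadata: dict = field(default_factory=dict)

    def to_json(self) -> str:
        # Deterministic serialization makes records easy to share, diff, and cache.
        return json.dumps(asdict(self), sort_keys=True)

record = EvalRecord(
    model="example-7b",
    benchmark="hellaswag",
    framework="lm-eval",
    hyperparameters={"num_fewshot": 5},
    metrics={"acc": 0.78},
)
print(record.to_json())
```

A shared, self-describing format like this is what would let a central repository deduplicate runs and let others query cached results instead of re-running compute-heavy evaluations.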
Reposted by Srishti
oisinmacaodha.bsky.social
Please join us for the FGVC workshop at CVPR 2025 @cvprconference.bsky.social on Wed 11th of June. The full schedule and list of fantastic speakers can be found on our website:
sites.google.com/view/fgvc12
fgvcworkshop.bsky.social
Join us on June 11, 9am to discuss all things fine-grained!
We are looking forward to a series of talks on semantic granularity, covering topics such as machine teaching, interpretability and much more!
Room 104 E
Schedule & details: sites.google.com/view/fgvc12
@cvprconference.bsky.social #CVPR25
Reposted by Srishti
eleutherai.bsky.social
Can you train a performant language model using only openly licensed text?

We are thrilled to announce the Common Pile v0.1, an 8TB dataset of openly licensed and public domain text. We train 7B models for 1T and 2T tokens and match the performance of similar models like LLaMA 1 & 2
Reposted by Srishti
neurograce.bsky.social
"Large [language] models should not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated." henryfarrell.net/wp-content/u...
Reposted by Srishti
serge.belongie.com
Would you present your next NeurIPS paper in Europe instead of traveling to San Diego (US) if this was an option? Søren Hauberg (DTU) and I would love to hear the answer through this poll: (1/6)
NeurIPS participation in Europe
We seek to understand if there is interest in being able to attend NeurIPS in Europe, i.e. without travelling to San Diego, US. In the following, assume that it is possible to present accepted papers ...
Reposted by Srishti
andrewdeck.bsky.social
"I don’t want to just be entering text prompts for the rest of my life."

I spoke to political cartoonists, including Pulitzer-winner Mark Fiore, about how they are using AI image generators in their work. My latest for @niemanlab.org.
www.niemanlab.org/2025/06/i-do...
“I don’t want to outsource my brain”: How political cartoonists are bringing AI into their work
Pulitzer-winning cartoonists are experimenting with AI image generators.
Reposted by Srishti
naitian.org
naitian @naitian.org · Feb 18
There's been a lot of work on "culture" in NLP, but not much agreement on what it is.

A position paper by me, @dbamman.bsky.social, and @ibleaman.bsky.social on cultural NLP: what we want, what we have, and how sociocultural linguistics can clarify things.

Website: naitian.org/culture-not-...

1/n
Culture is not trivia: sociocultural theory for cultural NLP. By Naitian Zhou and David Bamman from the Berkeley School of Information and Isaac L. Bleaman from Berkeley Linguistics.
Reposted by Srishti
sloeschcke.bsky.social
Check out our new preprint 𝐓𝐞𝐧𝐬𝐨𝐫𝐆𝐑𝐚𝐃.
We use a robust decomposition of the gradient tensors into low-rank + sparse parts to reduce optimizer memory for Neural Operators by up to 𝟕𝟓%, while matching the performance of Adam, even on turbulent Navier–Stokes (Re 10e5).
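The low-rank + sparse idea can be illustrated with a toy NumPy sketch. This is not the authors' code, just a plain demonstration of the kind of robust decomposition described: keep the largest-magnitude gradient entries as the sparse part, then take a truncated SVD of the remainder as the low-rank part.

```python
import numpy as np

def sparse_plus_lowrank(G, k_sparse=10, rank=2):
    """Split a gradient matrix G into sparse (S) and low-rank (L) parts."""
    # Sparse part: keep only the k largest-magnitude entries of G.
    S = np.zeros_like(G)
    idx = np.unravel_index(np.argsort(np.abs(G), axis=None)[-k_sparse:], G.shape)
    S[idx] = G[idx]
    # Low-rank part: best rank-r approximation of the residual via truncated SVD.
    U, s, Vt = np.linalg.svd(G - S, full_matrices=False)
    L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    return S, L

rng = np.random.default_rng(0)
G = rng.standard_normal((8, 8))
S, L = sparse_plus_lowrank(G)
err = np.linalg.norm(G - (S + L)) / np.linalg.norm(G)
print(f"relative reconstruction error: {err:.3f}")
```

Storing only the sparse entries and the rank-r factors, instead of the full optimizer state, is where the memory savings come from.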
Reposted by Srishti
aicentre.dk
PhD student Srishti Yadav and her collaborators are out with new, interdisciplinary work👇
srishtiy.bsky.social
Reposted by Srishti
mariaa.bsky.social
Check out our new paper led by @srishtiy.bsky.social and @nolauren.bsky.social! This work brings together computer vision, cultural theory, semiotics, and visual studies to provide new tools and perspectives for the study of ~culture~ in VLMs.
srishtiy.bsky.social
Reposted by Srishti
nolauren.bsky.social
A delight to work with great colleagues to bring theory around visual culture and cultural studies to how we think about visual language models.
srishtiy.bsky.social
srishtiy.bsky.social
We find that decades of visual cultural studies offer powerful ways to decode cultural meaning in images!! Rather than proposing yet another benchmark, our goal with this paper was to revisit and re-contextualize foundational theories of culture so that they can pave the way for more inclusive frameworks.
srishtiy.bsky.social
We then propose 5 frameworks to evaluate cultures in VLMs:
1️⃣ Processual Grounding - who defines culture?
2️⃣ Material Culture - what is represented?
3️⃣ Symbolic Encoding - how is meaning layered?
4️⃣ Contextual Interpretation - who understands and frames meaning?
5️⃣ Temporality - when is culture situated?
srishtiy.bsky.social
In this paper, we call for integrating methods from 3 fields:
📚 Cultural Studies – how values, beliefs & identities are shaped through cultural forms like images.
🔍 Semiotics – how signs & symbols convey meaning
🎨 Visual Studies – how visuals communicate across time & place
srishtiy.bsky.social
Modern Vision-Language Models (VLMs) often fail at cultural understanding. But culture isn't just recognizing things like food, clothes, rituals, etc. It's how meaning is made and understood; it's also about symbolism, context, and how these things evolve over time.
Reposted by Srishti
belongielab.org
This morning at P1 a handful of lucky lab members got to see the telescope while centre secretary Björg had the dome open for a building tour 🔭 (1/7)
Reposted by Srishti
jiaangli.bsky.social
🚀New Preprint🚀
Can Multimodal Retrieval Enhance Cultural Awareness in Vision-Language Models?

Excited to introduce RAVENEA, a new benchmark aimed at evaluating cultural understanding in VLMs through RAG.
arxiv.org/abs/2505.14462

More details:👇