Tejas Srinivasan
@tejassrinivasan.bsky.social
290 followers 160 following 22 posts
CS PhD student at USC. Former research intern at AI2 Mosaic. Interested in human-AI interaction and language grounding.
Pinned
tejassrinivasan.bsky.social
People are increasingly relying on AI assistance, but *how* they use AI advice is influenced by their trust in the AI, which the AI is typically blind to. What if they weren’t?

We show that adapting AI assistants' behavior to user trust mitigates under- and over-reliance!

arxiv.org/abs/2502.13321
tejassrinivasan.bsky.social
🚨Reminder: Submissions for the ORIGen workshop at COLM are due today!!! 🚨

CfP: origen-workshop.github.io/submissions/

OpenReview submission page: openreview.net/group?id=col...
tejassrinivasan.bsky.social
LLMs are all around us, but how can we foster reliable and accountable interactions with them??

To discuss these problems, we will host the first ORIGen workshop at @colmweb.org! Submissions welcome from NLP, HCI, CogSci, and anything human-centered, due June 20 :)

origen-workshop.github.io
ORIGen 2025
Workshop on Optimal Reliance and Accountability in Interactions with Generative LMs
origen-workshop.github.io
tejassrinivasan.bsky.social
I'm trying to make "bleet" a thing
Reposted by Tejas Srinivasan
thomason.bsky.social
This month, @jessezhang.bsky.social completed his PhD defense and signed to start a postdoc with @abhishekunique7.bsky.social at UW! Keep an eye on his journey :) www.jessezhang.net
I'm sad to lose one of my sinistral students but glad to produce another Dr. Jesse 😛
Jesse Thomason and Jesse Zhang in their respective PhD robes.
tejassrinivasan.bsky.social
The only silver lining of my ACL rejection is that I have something to submit to EMNLP
tejassrinivasan.bsky.social
This! So much this!!!
markriedl.bsky.social
AI can do so much more. Instead of seeing aging as a problem to sweep under the rug, we should be designing AI to facilitate meaningful connections for all.
Reposted by Tejas Srinivasan
markriedl.bsky.social
Nothing says “I love you” like outsourcing your parents’ phone calls to a chatbot. 🙃 Social isolation in aging is real. Connection isn’t something you can automate.

Why does everyone think we can just throw a chatbot at every problem?

www.404media.co/i-tested-the...
I Tested The AI That Calls Your Elderly Parents If You Can't Be Bothered
inTouch says on its website "Busy life? You can’t call your parent every day—but we can." My own mum said she would feel terrible if her child used it.
www.404media.co
tejassrinivasan.bsky.social
Ty for the plug 🙏
Model confidence is a good decision aid (arxiv.org/pdf/2001.02114), while explanations are less useful and can cause over-reliance (arxiv.org/abs/2310.12558, arxiv.org/pdf/2406.19170). Other interaction cues like AI warmth can also make a difference (arxiv.org/abs/2407.07950).
Reposted by Tejas Srinivasan
jameeljaffer.bsky.social
Arresting and threatening to deport students because of their participation in political protest is the kind of action one ordinarily associates with the world’s most repressive regimes. It’s genuinely shocking that this appears to be what’s going on right here. 1/
tejassrinivasan.bsky.social
What do you mean by core capabilities, for VLMs? IMO core capabilities should be determined by the applications we care about, and I'd argue medical use cases are as important as (if not more important than) MSCOCO-style images/scenes
Reposted by Tejas Srinivasan
mmitchell.bsky.social
I worry that concerns with "superintelligence" are being blurred with concerns around *ceding human control*.
A "SuperDumb" system can create mutually assured destruction. What it takes is allowing AI systems to execute code autonomously in military operations.
mmitchell.bsky.social
AI real talk. We (humanity) are moving full speed ahead at building AI agents for war that can create a runaway missile crisis of mutually assured destruction globally.

Is the option of not allowing AI agents to deploy missiles already off the table, or is that still up for discussion?
Scale AI announces multimillion-dollar defense deal, a major step in U.S. military automation
Spearheaded by the Defense Innovation Unit, the Thunderforge program will work with Anduril, Microsoft and others to develop and deploy AI agents.
www.cnbc.com
Reposted by Tejas Srinivasan
maxkennerly.bsky.social
"The first guest on Gavin Newsom's podcast was Charlie Kirk" is more than enough for me to say "absolutely not" to any suggestion Newsom play any role in the future of the Democratic Party. People like him are the past, the failures, the ones who got us here.
atherton.bsky.social
Gavin Newsom would have put down the Bell Riots with tanks and napalm I can tell you that much
Gavin Newsom v @GavinNewsom
Make sure to tune in TOMORROW for the first episode of my new podcast
→
linktr.ee/govgavinnewsom...
Charlie Kirk
We had quite the chat!
tejassrinivasan.bsky.social
What are you using o1pro for? And in what aspects do you think it's better than other LLMs?
tejassrinivasan.bsky.social
Is this advice you reserve for a particular class of problems, or is it just generally applicable because we still don't know the full breadth of LLM capabilities?
tejassrinivasan.bsky.social
I'm always three days away from being three days away
tejassrinivasan.bsky.social
We hope our work inspires the community to more closely consider how user characteristics, including but not limited to trust, affect how people rely on AI assistance.

Work done with the always-awesome @thomason.bsky.social!
tejassrinivasan.bsky.social
Improving AI reliability is more important than ever as AI systems are increasingly deployed in real-world settings with high stakes. We believe it is important for AI researchers to think about the user-AI dyad 🧑🤖, rather than just the AI in a vacuum.
tejassrinivasan.bsky.social
These findings show that being able to estimate users’ trust levels can enhance human-AI collaboration 💪 but we also find that modeling user trust is very challenging! 😓 Our work reveals promising new directions for user modeling that extend beyond merely learning user preferences.
tejassrinivasan.bsky.social
We show that adapting AI behavior to user trust levels, by showing AI explanations during moments of low trust and counter-explanations during high trust, effectively mitigates inappropriate reliance and improves decision accuracy! These improvements are also seen with other intervention strategies.
tejassrinivasan.bsky.social
In two decision-making tasks, we find that low and high user trust levels worsen under-reliance and over-reliance on AI recommendations, respectively 💀💀💀

Can the AI assistant do something differently when user trust is low/high to prevent such inappropriate reliance? Yes!
tejassrinivasan.bsky.social
People are increasingly relying on AI assistance, but *how* they use AI advice is influenced by their trust in the AI, which the AI is typically blind to. What if they weren’t?

We show that adapting AI assistants' behavior to user trust mitigates under- and over-reliance!

arxiv.org/abs/2502.13321
tejassrinivasan.bsky.social
Do each of these correspond to a particular conf deadline? I'm guessing
May: EMNLP
July: AACL?
Oct: EACL/NAACL
Feb: ACL
Reposted by Tejas Srinivasan
merterm.bsky.social
‼️ Ever wish LLMs would just... slow down for a second?

In our latest work, "Better Slow than Sorry: Introducing Positive Friction for Reliable Dialogue Systems", we delve into how strategic delays can enhance dialogue systems.

Paper Website: merterm.github.io/positive-fri...
Reposted by Tejas Srinivasan
noupside.bsky.social
“Toward the end of the November dinner, Trump raised the matter of the lawsuit, the people said. The president signaled that the litigation had to be resolved before Zuckerberg could be “brought into the tent,” one of the people said.”

They’re in the tent now. Cowards.
jeffhorwitz.bsky.social
Meta has agreed to pay Trump $25 million in damages to settle a lawsuit alleging that removing Trump from the platform was illegal. Message was that the money needed to be paid before Meta could be "in the tent." Never got around to filing an amended complaint. www.wsj.com/us-news/law/...
Exclusive | Trump Signs Agreement Calling for Meta to Pay $25 Million to Settle Suit
The president had sued the social-media company after his accounts were suspended.
www.wsj.com