Asher Zheng
@asher-zheng.bsky.social
160 followers 170 following 10 posts
PhD @ UT Linguistics Semantics/Pragmatics/NLP https://asherz720.github.io/ Prev.@UoEdinburgh @Hanyang
Pinned
asher-zheng.bsky.social
Language is often strategic, but LLMs tend to play nice. How strategic are they really? Probing into that is key for future safety alignment.

👉Introducing CoBRA🐍, a framework that assesses strategic language.

Work with my amazing advisors @jessyjli.bsky.social and @David I. Beaver!
asher-zheng.bsky.social
Current LLMs are not able to jailbreak cooperative principles and still show limited understanding of strategic language. We believe this work lays foundations for sophisticated strategic reasoning and safety monitoring in downstream tasks.
📄: arxiv.org/abs/2506.01195
CoBRA: Quantifying Strategic Language Use and LLM Pragmatics
Language is often used strategically, particularly in high-stakes, adversarial settings, yet most work on pragmatics and LLMs centers on cooperativity. This leaves a gap in systematic understanding of...
asher-zheng.bsky.social
By analyzing model reasoning, we find that extra reasoning introduces overcomplication (img left), misunderstanding, and internal inconsistency (img right). This shows that current LLMs still lack sophisticated pragmatic understanding in many ways.
asher-zheng.bsky.social
We evaluate a range of LLMs on how well they perceive strategic language. Models struggle with our metrics while showing an overall good understanding of Gricean principles. Model size tends to have a positive effect, while reasoning does not help.
asher-zheng.bsky.social
(2) BaT and PaT are valid metrics that reflect strategic gains/losses and can, to some extent, predict conversational outcomes. In addition, our metrics are more objective: when conditioned on cases where the outcome is decided on logical arguments, their predictive power rises.
asher-zheng.bsky.social
We also introduce CHARM, an annotated dataset of real legal cross-examination dialogues. By applying our framework, we show (1) (non-)cooperative discourse is distinct over the identified properties (img left), and BaT and PaT reflect that distinction distributionally (img right).
asher-zheng.bsky.social
Based on the components above, we introduce three metrics—Benefit at Turn (BaT), Penalty at Turn (PaT), and Normalized Relative Benefit at Turn (NRBaT)—to measure the strategic gains, losses, and cumulative benefits at a turn.
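The thread doesn't spell out the arithmetic behind these turn-level metrics, so as an illustration only, here is one plausible way such metrics could be computed. The function bodies and the normalization are assumptions for the sketch, not the paper's actual definitions:

```python
def bat(gains):
    """Benefit at Turn: total strategic gain at a turn (assumed: sum of per-move gains)."""
    return sum(gains)

def pat(losses):
    """Penalty at Turn: total strategic loss at a turn (assumed: sum of per-move losses)."""
    return sum(losses)

def nrbat(gains, losses):
    """Normalized Relative Benefit at Turn (assumed: net benefit scaled into [-1, 1])."""
    b, p = bat(gains), pat(losses)
    return 0.0 if b + p == 0 else (b - p) / (b + p)

# A turn with two gains and one loss nets out positive:
print(nrbat([2.0, 1.0], [1.0]))  # 0.5
```

Under this reading, NRBaT near +1 marks a strongly advantageous turn and near -1 a costly one, which fits the thread's description of BaT/PaT distributions separating cooperative from non-cooperative discourse.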
asher-zheng.bsky.social
For example, a witness can make a commitment that leads to a win for her, but violate the maxim of manner to make herself less liable to that commitment. The commitment itself is beneficial, but its gains are reduced by the vagueness.
asher-zheng.bsky.social
We derive non-cooperativity from both Gricean and game-theoretic pragmatics. In our framework, a strategic move is evaluated based on two components: the commitment it expresses (base value) and the violation of maxims to maintain consistency (penalties/compensations).
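The two components described above suggest an additive scoring of a single strategic move; as a hypothetical sketch (the function name and the additive form are assumptions, not the paper's formula), a move's value could combine its commitment's base value with maxim-violation penalties and whatever compensation those violations buy:

```python
# Hypothetical scoring of one strategic move (illustrative only):
# value = commitment's base value, minus penalties for maxim violations,
# plus compensations those violations provide (e.g. reduced liability).
def move_value(base_value, penalties, compensations):
    return base_value - sum(penalties) + sum(compensations)

# A beneficial commitment (3.0) made vaguely: the manner violation costs
# 1.0 in clarity but buys 0.5 of reduced liability.
print(move_value(3.0, [1.0], [0.5]))  # 2.5
```

This mirrors the witness example later in the thread: the commitment stays net-beneficial, but vagueness trims the gain.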
Reposted by Asher Zheng
ramyanamuduri.bsky.social
Have that eerie feeling of déjà vu when reading model-generated text 👀, but can’t pinpoint the specific words or phrases 👀?

✨We introduce QUDsim, to quantify discourse similarities beyond lexical, syntactic, and content overlap.