Adrian Chan
@gravity7.bsky.social
750 followers
620 following
410 posts
Bridging IxD, UX, & Gen AI design & theory. Ex Deloitte Digital CX. Stanford '88 IR. Edinburgh, Berlin, SF. Philosophy, Psych, Sociology, Film, Cycling, Guitar, Photog. Linkedin: adrianchan. Web: gravity7.com. Insta, X, medium: @gravity7
Posts
Adrian Chan
@gravity7.bsky.social
· Jun 9
Flattery, Fluff, and Fog: Diagnosing and Mitigating Idiosyncratic Biases in Preference Models
Language models serve as proxies for human preference judgements in alignment and evaluation, yet they exhibit systematic miscalibration, prioritizing superficial patterns over substantive qualities. ...
arxiv.org
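The miscalibration this abstract describes (rewarding fluff over substance) can be probed with a simple counterfactual test: score a response, pad it with content-free filler, and compare. A minimal sketch; `score` is a hypothetical stand-in for any preference/reward model, and the toy scorer below is constructed to exhibit the bias, not taken from the paper.

```python
def length_bias_gap(score, response, padding=" Indeed."):
    """Hypothetical probe: does appending content-free filler raise a
    preference model's score? `score` maps text -> float and stands in
    for any reward/preference model."""
    base = score(response)
    padded = score(response + padding * 5)
    return padded - base  # > 0 suggests a length/fluff bias

# Toy stand-in scorer that (wrongly) rewards sheer length, for illustration.
toy_score = lambda text: 0.01 * len(text)
gap = length_bias_gap(toy_score, "Paris is the capital of France.")
print(gap > 0)  # True: the toy scorer prefers the padded answer
```

The same comparison run against a real preference model would quantify how much padding alone moves its judgement.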
· May 21
When Thinking Fails: The Pitfalls of Reasoning for Instruction-Following in LLMs
Reasoning-enhanced large language models (RLLMs), whether explicitly trained for reasoning or prompted via chain-of-thought (CoT), have achieved state-of-the-art performance on many complex reasoning ...
www.arxiv.org
· May 16
The CoT Encyclopedia: Analyzing, Predicting, and Controlling how a Reasoning Model will Think
Long chain-of-thought (CoT) is an essential ingredient in effective usage of modern large language models, but our understanding of the reasoning strategies underlying these capabilities remains limit...
arxiv.org
· May 14
Clarifying the Path to User Satisfaction: An Investigation into Clarification Usefulness
Clarifying questions are an integral component of modern information retrieval systems, directly impacting user satisfaction and overall system performance. Poorly formulated questions can lead to use...
arxiv.org
· May 14
Backtracing: Retrieving the Cause of the Query
Many online content portals allow users to ask questions to supplement their understanding (e.g., of lectures). While information retrieval (IR) systems may provide answers for such user queries, they...
arxiv.org
· May 14
Style Vectors for Steering Generative Large Language Models
This research explores strategies for steering the output of large language models (LLMs) towards specific styles, such as sentiment, emotion, or writing style, by adding style vectors to the activati...
arxiv.org
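The mechanism named in the abstract, adding a style vector to a layer's activations, reduces to a single vector addition at inference time. A minimal sketch with made-up dimensions; the direction and steering strength `alpha` are illustrative, not the paper's fitted values.

```python
import numpy as np

def apply_style_vector(activations, style_vector, alpha=1.0):
    """Shift hidden states toward a style direction.

    activations: (seq_len, hidden_dim) hidden states of one layer.
    style_vector: (hidden_dim,) direction, e.g. a mean difference between
    activations of styled vs. neutral texts (assumption for this sketch).
    alpha: steering strength (hypothetical knob).
    """
    return activations + alpha * style_vector

# Toy illustration: 4 tokens, hidden size 8.
rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 8))
style = np.ones(8) / np.sqrt(8)  # unit-norm "style" direction (assumed)
steered = apply_style_vector(acts, style, alpha=2.0)
print(np.allclose(steered - acts, 2.0 * style))  # True: uniform shift
```

Each token's hidden state moves the same distance along the style direction; downstream layers then decode the shifted representation into styled text.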
· May 14
Attention Mechanisms Perspective: Exploring LLM Processing of Graph-Structured Data
Attention mechanisms are critical to the success of large language models (LLMs), driving significant advancements in multiple fields. However, for graph-structured data, which requires emphasis on to...
www.arxiv.org
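The tension the abstract points at, attention that must respect graph topology, is often handled by masking attention scores with the adjacency matrix. A sketch of that general idea, not the paper's specific method; shapes and the single-head setup are assumptions.

```python
import numpy as np

def masked_attention(Q, K, V, adj):
    """Single-head attention restricted to graph edges.

    Q, K, V: (n, d) arrays for n nodes; adj: (n, n) 0/1 adjacency,
    where adj[i, j] = 1 means node i may attend to node j. Rows are
    assumed to have at least one edge (e.g. self-loops).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(adj.astype(bool), scores, -1e9)  # mask non-edges
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Sanity check: with only self-loops, each node attends solely to itself.
n, d = 4, 8
rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = masked_attention(Q, K, V, np.eye(n, dtype=int))
print(np.allclose(out, V))  # True
```

Denser adjacency matrices interpolate between this degenerate case and full (unmasked) self-attention over all nodes.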
· May 12
Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models
Hallucinations in large language models (LLMs) present a growing challenge across real-world applications, from healthcare to law, where factual reliability is essential. Despite advances in alignment...
arxiv.org