Jirui Qi
@jiruiqi.bsky.social
82 followers 59 following 31 posts
Ph.D Candidate @GroNLP, University of Groningen #NLProc https://betswish.github.io
jiruiqi.bsky.social
Our paper on multilingual reasoning is accepted to Findings of #EMNLP2025! 🎉 (OA: 3/3/3.5/4)

We show SOTA LMs struggle with reasoning in non-English languages; prompt-hack & post-training improve alignment but trade off accuracy.

📄 arxiv.org/abs/2505.22888
See you in Suzhou! #EMNLP
Reposted by Jirui Qi
gsarti.com
📢 New paper: Can unsupervised metrics extracted from MT models detect their translation errors reliably? Do annotators even *agree* on what constitutes an error? 🧐

We compare uncertainty- and interp-based WQE metrics across 12 directions, with some surprising findings!

🧵 1/
jiruiqi.bsky.social
[11/] Moreover, adding training instances doesn't reliably mitigate the issue: increasing from 100 to 250 instances, the post-trained LRMs suffer a drop in matching rate, while accuracy recovers only marginally, remaining far below that of the original LRM.
jiruiqi.bsky.social
[10/] The results show that post-training on merely 100 instances sharply increases the matching rate, to nearly 100% for TH and TE and to 80% for JA, but decreases accuracy: post-training is effective at improving language matching, yet the trade-off persists.
jiruiqi.bsky.social
[9/] To see whether further training can help, we post-train Distilled-R1-7B on mini training sets of 100 or 250 instances per poorly matching language (Japanese, Thai, Telugu), resulting in six post-trained LRMs. The training data are filtered and translated from LIMO.
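A minimal sketch of what such post-training could look like, assuming standard causal-LM fine-tuning on (question, target-language trace) pairs with Hugging Face transformers; the checkpoint name and data format are illustrative assumptions, not the paper's exact recipe:

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; the paper post-trains Distilled-R1-7B.
name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tok = AutoTokenizer.from_pretrained(name)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(name)
model.train()

# ~100 or ~250 (question, target-language trace) pairs per language,
# filtered and translated from LIMO in the paper's setup.
examples = [
    {"prompt": "日本語の質問 ...", "trace": "<think>日本語での推論 ...</think> 答え"},
]

def collate(batch):
    texts = [ex["prompt"] + ex["trace"] for ex in batch]
    enc = tok(texts, return_tensors="pt", padding=True,
              truncation=True, max_length=2048)
    # Sketch trains on the full sequence; a real setup would likely mask
    # prompt and padding tokens out of the loss (label -100).
    enc["labels"] = enc["input_ids"].clone()
    return enc

loader = DataLoader(examples, batch_size=1, shuffle=True, collate_fn=collate)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

for _ in range(3):  # a few epochs over the tiny set
    for batch in loader:
        loss = model(**batch).loss  # standard next-token loss
        loss.backward()
        opt.step()
        opt.zero_grad()
```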
jiruiqi.bsky.social
[8/] Complementing the heatmaps, we further analyze the actual thinking languages of the LRM and observe a clear mismatch. Notably, all mismatches (i.e., red marks) fall back to English or Chinese, suggesting the impact of the thinking data used in training on the model's reasoning behavior.
jiruiqi.bsky.social
[7/] Interestingly, reasoning in English consistently yields higher accuracy, especially after prompt hacking. This aligns with concurrent work on improving answer accuracy via cross-lingual reasoning, supporting the reliability of our experiments and of the XReasoning benchmark.
jiruiqi.bsky.social
[6/] Heatmaps by query/thinking language show the 32B LRM fails to generate traces in the prompted language (e.g., asked to think in FR, it defaults to EN). Prompt hacking raises matching from 46% to 98%, but introduces a noticeable drop in accuracy.
jiruiqi.bsky.social
[5/] Overall, LRMs struggle to follow instructions to think in user-specified languages under standard prompts. Prompt hacking, which pushes LRMs to generate traces in the query language, boosts language matching but decreases accuracy; the accuracy drop shrinks as model size increases.
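A minimal sketch of how such a language matching rate could be computed, assuming the langdetect package as the language identifier (the paper's tooling may differ); extract_think is a hypothetical helper:

```python
import re
from langdetect import detect  # lightweight language identifier

def extract_think(output: str) -> str:
    """Pull the thinking trace out of a <think>...</think> block."""
    m = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    return m.group(1) if m else output

def matching_rate(outputs: list[str], target_lang: str) -> float:
    """Fraction of traces identified as the requested ISO-639-1 language."""
    hits = sum(detect(extract_think(o)) == target_lang for o in outputs)
    return hits / len(outputs)

# e.g. matching_rate(model_outputs, "fr") -> ~0.46 under standard prompting
```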
jiruiqi.bsky.social
[4/] Besides standard prompting, which explicitly specifies the thinking language in the instruction, we introduce and leverage a prompt-hacking technique to induce the LRM to generate its thinking trace in the user-expected language.
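One plausible way to implement such prompt hacking is to pre-fill the opening of the assistant's thinking trace with a short prefix in the target language, so the model continues in that language. A sketch under that assumption; the chat markers and prefixes below are illustrative, not the paper's exact implementation:

```python
# Hypothetical per-language openers for the thinking trace.
PREFIXES = {
    "ja": "<think>わかりました。日本語で考えます。",
    "th": "<think>โอเค ฉันจะคิดเป็นภาษาไทย",
    "fr": "<think>D'accord, je vais réfléchir en français.",
}

def build_hacked_prompt(question: str, lang: str) -> str:
    # The assistant turn is opened and seeded with the target-language
    # prefix; decoding then continues the trace in that language.
    return f"<|user|>{question}<|assistant|>{PREFIXES[lang]}"
```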
jiruiqi.bsky.social
[3/] We comprehensively evaluate six SOTA LRMs from two families, Distilled-R1 and Skywork-OR1. Given the lack of multilingual reasoning datasets, we introduce a novel benchmark, XReasoning, covering the easier MGSM plus translated versions of the challenging AIME2024, AIME2025, and GPQA_Diamond.
jiruiqi.bsky.social
[2/] Matching the thinking language matters as much as accuracy, because it makes traces more readable and easier for users to verify. Even correct answers can feel untrustworthy if users can't understand how the model got there, especially as task complexity increases.
jiruiqi.bsky.social
[1/] 💡New Paper
Large reasoning models (LRMs) are strong in English — but how well do they reason in your language?

Our latest work uncovers their limitations and a clear trade-off:
Controlling Thinking Trace Language Comes at the Cost of Accuracy

📄Link: arxiv.org/abs/2505.22888
jiruiqi.bsky.social
[8/] Taken together, our findings show that LLMs can make consistent use of multilingual contexts, but face a barrier when decoding answers in the user's language. These results deepen our understanding of how LLMs work in mRAG systems and suggest directions for future improvements.
jiruiqi.bsky.social
[7/] When distractors are included, our analysis with both accuracy and feature attribution techniques further shows that distracting passages hurt answer quality regardless of their language. However, distractors in the query language exert a slightly stronger influence.
jiruiqi.bsky.social
[6/] This finding suggests that generating in the target language is the major bottleneck, which could dominate, if not mask, the effect of similarity with the passage language.
jiruiqi.bsky.social
[5/] Detailed heatmaps further show that answer accuracy is relatively consistent within each row, more so than within each column. In other words, the query language is much more predictive of accuracy than the passage language.
jiruiqi.bsky.social
[4/] Our experiments with 4 LLMs across 3 QA datasets, covering 48 languages, reveal a surprising ability of LLMs to extract relevant information from passages in languages other than the query's, but a weaker ability to formulate the answer in the correct language (shaded bars).
jiruiqi.bsky.social
[3/] Through accuracy and feature attribution analyses, we assess LLMs' ability to make consistent use of a relevant passage regardless of its language, to respond in the expected language, and to focus on relevant passages even when distractors in different languages are provided.
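A sketch of what the feature-attribution side of such an analysis could look like, using the Inseq library as one possible toolkit (that choice is an assumption here, not necessarily the paper's setup); the model and texts are illustrative:

```python
import inseq

# Load a small causal LM with attention-based attribution (any supported
# model and attribution method would do for this sketch).
model = inseq.load_model("gpt2", "attention")

prompt = (
    "Passage (FR): Paris est la capitale de la France.\n"
    "Question (EN): What is the capital of France?\nAnswer:"
)
# Attribute the generated answer back to the prompt tokens to see how much
# the model relied on the (cross-lingual) passage.
out = model.attribute(input_texts=prompt, generated_texts=prompt + " Paris")
out.show()
```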
jiruiqi.bsky.social
[2/] Multilingual RAG (mRAG) has been shown to be beneficial, particularly for low-resource languages. However, the extent to which LLMs can leverage multilingual contexts to generate accurate answers, independently from retrieval quality, remains understudied.
jiruiqi.bsky.social
✨ New Paper ✨
[1/] Retrieving passages from many languages can boost retrieval-augmented generation (RAG) performance, but how good are LLMs at dealing with multilingual contexts in the prompt?

📄 Check it out: arxiv.org/abs/2504.00597
(w/ @arianna-bis.bsky.social @Raquel_Fernández)

#NLProc
jiruiqi.bsky.social
Many thanks to all collaborators for their contributions!
Tianyu Liu, Paul He, Arianna Bisazza, @mrinmaya.bsky.social, Ryan Cotterell.
jiruiqi.bsky.social
[8/8] 🌟Take-home msg: p(question) can gauge LM performance in RAG QA.

Since this is a first step toward prompt optimization without LM decoding, we follow the previous setup and mainly adopt document reordering, leaving other prompt modifications for future work.
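A minimal sketch of the take-home idea: score each candidate document ordering by the likelihood the LM assigns to the question given the documents, with no decoding. The model and prompt format are illustrative assumptions:

```python
import torch
from itertools import permutations
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def question_logprob(docs: list[str], question: str) -> float:
    """log p(question | docs), summed over the question's tokens."""
    context = "\n".join(docs) + "\n"
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    full = tok(context + question, return_tensors="pt").input_ids
    logits = lm(full).logits[0, :-1]   # predictions for tokens 1..N-1
    targets = full[0, 1:]
    lp = logits.log_softmax(-1).gather(-1, targets[:, None])[:, 0]
    # Assumes appending the question leaves the context tokenization
    # unchanged (a close approximation, fine for a sketch).
    return lp[ctx_len - 1:].sum().item()

def best_ordering(docs: list[str], question: str):
    # Exhaustive search over orderings; feasible for the handful of
    # passages typically placed in a RAG prompt.
    return max(permutations(docs),
               key=lambda d: question_logprob(list(d), question))
```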