📏 Our comprehensive survey reveals that there is still a long way to go.
Where does the field stand today, and how do we make progress from here?
This work led by @yongzx.bsky.social has answers! 👇
Short Answer: Yes, thanks to “quote-and-think” + test-time scaling. You can even force them to reason in a target language (sketch below)!
But:
🌐 Low-resource langs & non-STEM topics still tough.
New paper: arxiv.org/abs/2505.05408
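Mechanically, “forcing” a target reasoning language usually amounts to prefilling the model’s thinking block with a prefix written in that language, so decoding continues in it. Here is a minimal sketch, assuming an R1-style open model served via Hugging Face transformers; the model name and the French prefix are illustrative choices, not the paper’s exact setup, and the <think> handling depends on the model’s chat template:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative choice: any R1-style reasoner with a <think> block works similarly.
MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

question = "Combien font 17 fois 24 ?"  # French math question

# Render the chat prompt, then prefill the chain of thought in French so
# generation continues in that language. If your tokenizer's chat template
# already opens the <think> block, drop the "<think>\n" below.
prompt = tok.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
)
prompt += "<think>\nD'accord, je vais raisonner en français."

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))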
New preprint!
@yongzx.bsky.social has all the details 👇
We observe that reasoning language models finetuned only on English data are capable of zero-shot cross-lingual reasoning through a "quote-and-think" pattern.
However, this does not mean they reason the same way across all languages or in new domains.
[1/N]
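To make the pattern concrete, here is a constructed illustration (not an excerpt from the paper) of what “quote-and-think” looks like: the model quotes spans of the non-English input, glosses them, carries out the actual reasoning in English, then answers in the input language.

Question (French): "Un train parcourt 120 km en 1,5 heure. Quelle est sa vitesse moyenne ?"

<think>
The question says "un train parcourt 120 km en 1,5 heure", i.e. a train
covers 120 km in 1.5 hours, and "quelle est sa vitesse moyenne" asks for
its average speed. Speed = distance / time = 120 / 1.5 = 80 km/h.
</think>
La vitesse moyenne du train est de 80 km/h.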