zhouxiangfang.bsky.social
@zhouxiangfang.bsky.social
🤔 To what extent can Large Language Models (LLMs) solve unseen tasks via In-Context Learning (ICL)?

💥 We propose a new general framework, ✨ ICL CIPHERS ✨, to quantify "learning" in ICL via substitution ciphers.
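
For a rough picture of the idea, here is a minimal sketch (my own illustration, not the paper's code) of applying a bijective word-level substitution cipher to ICL demonstrations; the helper names and the toy sentiment data are hypothetical.

import random

def build_cipher(vocab, seed=0):
    """Build a bijective substitution cipher: each word maps to a unique
    other word, so task structure is preserved but surface forms change."""
    rng = random.Random(seed)
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    return dict(zip(vocab, shuffled))

def encipher(text, cipher):
    """Apply the substitution word by word; unmapped words pass through."""
    return " ".join(cipher.get(w, w) for w in text.split())

# Toy sentiment demonstrations (hypothetical data, for illustration only).
demos = [
    ("the movie was great", "positive"),
    ("the movie was awful", "negative"),
]
query = "the film was great"

vocab = sorted({w for text, _ in demos for w in text.split()} | set(query.split()))
cipher = build_cipher(vocab)

# Ciphered ICL prompt: pre-trained word-label associations no longer apply,
# so solving it should require inference-time "learning" from the demos.
prompt = "\n".join(f"Input: {encipher(t, cipher)}\nLabel: {y}" for t, y in demos)
prompt += f"\nInput: {encipher(query, cipher)}\nLabel:"
print(prompt)

Because the mapping is bijective, the ciphered task remains solvable in principle, so performance on it probes task learning rather than task retrieval from pre-training.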

🔗 Check out our new preprint:
arxiv.org/abs/2504.19395
ICL CIPHERS: Quantifying "Learning" in In-Context...
Recent works have suggested that In-Context Learning (ICL) operates in dual modes, i.e. task retrieval (remember learned patterns from pre-training) and task learning (inference-time "learning"...
April 29, 2025 at 4:18 PM