Sepideh Mamooler@ACL🇦🇹
@smamooler.bsky.social
PhD Candidate at @icepfl.bsky.social | Ex-Research Intern @ Google DeepMind
👩🏻‍💻 Working on multi-modal AI reasoning models in scientific domains
https://smamooler.github.io/
🙏 Amazing collaboration with my co-authors and advisors @smontariol.bsky.social, @abosselut.bsky.social, @trackingskills.bsky.social
December 17, 2024 at 2:51 PM
📖 Check out the full paper here: arxiv.org/pdf/2412.11923
December 17, 2024 at 2:51 PM
📊 We evaluate PICLe on 5 biomedical NED datasets and find:
✨ With zero human annotations, PICLe outperforms ICL in low-resource settings, where only a handful of gold examples are available as in-context demonstrations!
December 17, 2024 at 2:51 PM
⚙️ How does PICLe work?
1️⃣ An LLM pseudo-annotates unlabeled examples in a zero-shot first pass.
2️⃣ Synthetic demos are clustered, and in-context sets are sampled.
3️⃣ Entity mentions are predicted using each set independently.
4️⃣ Self-verification selects the final predictions.
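Here's a minimal Python sketch of these four steps. Everything in it is an illustrative assumption on my part: the `LLM` protocol, the TF-IDF + k-means clustering, and the majority-vote aggregation stand in for the paper's actual prompts and sampling details.

```python
"""Minimal sketch of the PICLe pipeline (illustrative, not the paper's code)."""
from collections import Counter
from typing import Protocol

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


class LLM(Protocol):
    """Assumed LLM interface; method names are hypothetical."""
    def zero_shot_annotate(self, text: str) -> list[str]: ...
    def predict_mentions(self, text: str, demos: list) -> list[str]: ...
    def verify(self, text: str, mention: str) -> bool: ...


def picle(llm: LLM, unlabeled_texts: list[str], test_text: str,
          n_clusters: int = 5) -> list[str]:
    # 1) Zero-shot first pass: pseudo-annotate unlabeled texts,
    #    yielding noisy demonstrations with no human labels.
    demos = [(t, llm.zero_shot_annotate(t)) for t in unlabeled_texts]

    # 2) Cluster the synthetic demos (TF-IDF + k-means here) and
    #    sample one in-context set per cluster.
    vectors = TfidfVectorizer().fit_transform(t for t, _ in demos)
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    demo_sets = [[d for d, c in zip(demos, cluster_ids) if c == k]
                 for k in range(n_clusters)]

    # 3) Predict entity mentions with each in-context set independently.
    candidates = [llm.predict_mentions(test_text, ds) for ds in demo_sets]

    # 4) Self-verification: keep mentions the LLM confirms, then take a
    #    simple majority vote across the independent prediction sets.
    votes = Counter(m for preds in candidates for m in preds
                    if llm.verify(test_text, m))
    return [m for m, v in votes.items() if v > n_clusters / 2]
```

The point of steps 3 and 4 is that the pseudo-annotations are noisy, so predictions from several independently sampled demo sets are reconciled by self-verification rather than trusted individually.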
December 17, 2024 at 2:51 PM
💡 Building on our findings, we introduce PICLe: a framework for in-context learning powered by noisy, pseudo-annotated demonstrations. 🛠️ No human labels, no problem! 🚀
December 17, 2024 at 2:51 PM
📊 Key finding: A semantic mapping between demonstration context and label is essential for in-context task transfer. BUT even weak semantic mappings can provide enough signal for effective adaptation in NED!
December 17, 2024 at 2:51 PM
🔍 It’s unclear which demonstration attributes enable in-context learning in tasks that require structured, open-ended predictions (such as NED).
We design perturbation schemes that create demonstrations with varying levels of correctness, letting us isolate which attributes drive adaptation.
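As a rough illustration of such a scheme, here is a self-contained sketch that corrupts a controlled fraction of entity tags; the `noise_rate` knob, the tag format, and the example sentence are assumptions for illustration, not the paper's exact setup.

```python
"""Illustrative label-perturbation scheme (not the paper's exact setup)."""
import random


def perturb_tags(tags: list[str], label_set: list[str],
                 noise_rate: float, rng: random.Random) -> list[str]:
    """Corrupt a fraction of entity tags to get a demonstration with a
    controlled correctness level (0.0 = gold, 1.0 = fully corrupted)."""
    noisy = []
    for tag in tags:
        if tag != "O" and rng.random() < noise_rate:
            # Swap the gold tag for a different label, weakening the
            # semantic mapping between context and label.
            noisy.append(rng.choice([l for l in label_set if l != tag]))
        else:
            noisy.append(tag)
    return noisy


rng = random.Random(0)
tokens = ["BRCA1", "mutations", "cause", "breast", "cancer"]
tags = ["Gene", "O", "O", "Disease", "Disease"]
label_set = ["Gene", "Disease", "Chemical"]

# Sweep correctness levels from fully correct to fully corrupted.
for rate in (0.0, 0.5, 1.0):
    print(rate, list(zip(tokens, perturb_tags(tags, label_set, rate, rng))))
```

Sweeping `noise_rate` from 0 to 1 yields demonstrations ranging from fully correct to fully random, which gives the correctness spectrum this kind of attribute analysis needs.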
December 17, 2024 at 2:51 PM