Parameter Lab
@parameterlab.bsky.social
28 followers 6 following 34 posts
Empowering individuals and organisations to safely use foundational AI models. https://parameterlab.de
parameterlab.bsky.social
🫗 An LLM's "private" reasoning may leak your sensitive data!

🎉 Excited to share our paper "Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers" was accepted at #EMNLP main!

1/2
Overall diagram about contextual privacy & LRMs
parameterlab.bsky.social
Work done with: Haritz Puerto, Martin Gubri ‪‪@mgubri.bsky.social‬ , Tommaso Green, Sangdoo Yun and Seong Joon Oh @coallaoh.bsky.social
#SEO #AI #LLM #GenerativeAI #Marketing #DigitalMarketing #Perplexity #NLProc
parameterlab.bsky.social
Key takeaways:
❌ C-SEO doesn’t improve visibility in AI answers.
🔎 Traditional SEO is your tool for online visibility.
🚀 Our benchmark sets the stage to develop C-SEO methods that might work in the future.
parameterlab.bsky.social
🔎 The results are clear: current C-SEO strategies don’t work. This challenges the recent hype and suggests that creators don’t need to game LLMs or churn out even more clickbait. Just focus on producing genuinely good content and let traditional SEO do its work.
parameterlab.bsky.social
C-SEO Bench evaluates Conversational Search Engine Optimization (C-SEO) techniques on two key tasks:
🔍 Product Recommendation
❓ Question Answering
Spanning multiple domains, it tests both domain-specific performance and the generalization of C-SEO methods.
parameterlab.bsky.social
💥 With the rise of conversational search, a new technique of "Conversational SEO" (C-SEO) emerged, claiming it can boost content inclusion in AI-generated answers. We put these claims to the test by building C-SEO Bench, the first comprehensive benchmark to rigorously evaluate these new strategies.
 Illustration of a conversational search engine for product recommendation. After applying a C-SEO method on the third document, its ranking gets boosted by +2 positions.
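The "+2 positions" in the illustration is just the treated document's change in ranking within the engine's answer. As a minimal sketch of that single metric (not the benchmark's full metric suite; the document IDs are made up):

```python
def rank_shift(doc_id: str, ranking_before: list[str], ranking_after: list[str]) -> int:
    """Positive = the document moved up in the engine's citation ranking."""
    return ranking_before.index(doc_id) - ranking_after.index(doc_id)

# Toy rankings: doc "d3" is third before a C-SEO rewrite, first after.
before = ["d1", "d2", "d3", "d4"]
after = ["d3", "d1", "d2", "d4"]
print(rank_shift("d3", before, after))  # prints 2, i.e. boosted by +2 positions
```

Averaging such shifts over many queries and documents is one natural way to score whether a C-SEO method helps at all.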
parameterlab.bsky.social
🔎Does Conversational SEO actually work? Our new benchmark has an answer!
Excited to announce our new paper: C-SEO Bench: Does Conversational SEO Work?

🌐 RTAI: researchtrend.ai/papers/2506....
📄 Paper: arxiv.org/abs/2506.11097
💻 Code: github.com/parameterlab...
📊 Data: huggingface.co/datasets/par...
Paper thumbnail.
parameterlab.bsky.social
Excited to share that our paper "Scaling Up Membership Inference: When and How Attacks Succeed on LLMs" will be presented next week at #NAACL2025!
🖼️ Catch us at Poster Session 8 - APP: NLP Applications
🗓️ May 2, 11:00 AM - 12:30 PM
🗺️ Hall 3
Hope to see you there!
mgubri.bsky.social
📄 Excited to share our latest paper on the scale required for successful membership inference in LLMs! We investigate a continuum from single sentences to large document collections. Huge thanks to an incredible team: Haritz Puerto, @coallaoh.bsky.social and @oodgnas.bsky.social!
Main figure of the paper
parameterlab.bsky.social
Ready to Join? Send your resume + a short note on why you’re a great fit to [email protected].
Be part of a team that’s redefining research with AI! #Hiring #DataEngineer #AI #RemoteJobs
parameterlab.bsky.social

Why Join Us?
🚀 Make a Difference – Your work directly enhances how research is shared and discovered.
🌍 Flexibility – Choose full-time or part-time, work remotely or locally.
⚡ Innovative Environment – AI, research, and data-driven solutions all in one place.
🤝 Great Team
parameterlab.bsky.social
What You Bring:
✅ Proficiency in Airflow & PostgreSQL – Building complex workflows and managing databases.
✅ Strong Python Skills – Clean, efficient, and maintainable code is your thing.
✅ (Bonus) Experience with LLMs – A huge plus as we integrate AI-driven solutions.
✅ Problem-Solving Mindset
✅ Team Spirit
parameterlab.bsky.social
What You’ll Do:
✔ Build Scalable Data Pipelines – Design and optimize workflows using tools like Airflow.
✔ Work Closely with AI Experts & Engineers – Collaborate to solve real-world data challenges.
✔ Optimize and Maintain Systems – Keep our data infrastructure fast, secure, and adaptable.
parameterlab.bsky.social
Our LLM-powered ecosystem also bridges the gap between cutting-edge research and industry leaders. If you're passionate about data, AI, and making an impact, we’d love to have you on board!
parameterlab.bsky.social
👥 We're Hiring: Senior/Junior Data Engineer!

📍 Remote or Local | Full-Time or Part-Time

At ResearchTrend.AI, we’re building a platform that connects researchers and AI engineers worldwide—helping them stay ahead with daily digests, insightful summaries, and interactive events.
ResearchTrend.AI
Explore the most trending research topics in AI
ResearchTrend.AI
parameterlab.bsky.social
🔎 Wonder how to prove an LLM was trained on a specific text? The camera ready of our Findings of #NAACL 2025 paper is available!
📌 TLDR: long texts are needed to gather enough evidence to determine whether specific data was included in an LLM's training set: arxiv.org/abs/2411.00154
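Why do long texts help? Per-sentence membership signals are noisy, but averaging them over many sentences of the same document shrinks that noise. A toy, self-contained illustration of the aggregation idea (synthetic scores, not the paper's actual attack or data):

```python
import random
import statistics

random.seed(0)

# Toy per-sentence MIA scores: member sentences score slightly higher on
# average, but the two distributions overlap heavily at the sentence level.
member_scores = [random.gauss(0.55, 0.2) for _ in range(200)]
non_member_scores = [random.gauss(0.45, 0.2) for _ in range(200)]

def aggregate(scores, chunk=20):
    """Average scores over chunks of sentences (one 'document' = `chunk` sentences)."""
    return [statistics.mean(scores[i:i + chunk]) for i in range(0, len(scores), chunk)]

def accuracy(members, non_members, threshold=0.5):
    correct = sum(s > threshold for s in members) + sum(s <= threshold for s in non_members)
    return correct / (len(members) + len(non_members))

print("sentence-level accuracy:", accuracy(member_scores, non_member_scores))
print("document-level accuracy:", accuracy(aggregate(member_scores), aggregate(non_member_scores)))
```

Averaging over 20 sentences cuts the score spread by roughly √20, which is why document-scale evidence separates members from non-members far more reliably than single sentences.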
parameterlab.bsky.social
We are delighted to announce that our research paper on the scale of LLM membership inference has been accepted for publication in the Findings of #NAACL2025! 🎉
parameterlab.bsky.social
🎉We’re pleased to share the release of the models from our Apricot🍑 paper, accepted at ACL 2024!
At Parameter Lab, we believe openness and reproducibility are essential for advancing science, and we've put in our best effort to ensure it.
🤗 huggingface.co/collections/...
🧵 bsky.app/profile/dnns...
parameterlab.bsky.social
🙌 Team Credits: This research was conducted by Haritz Puerto @mgubri.bsky.social @oodgnas.bsky.social and @coallaoh.bsky.social with support from NAVER AI Lab. Stay tuned for more updates! 🚀
parameterlab.bsky.social
🤓 Want More? Check out the MIA for LLMs community page on ResearchTrend.AI: https://researchtrend.ai/communities/MIALM There you can see related works, the evolution of the community, and its top authors!
parameterlab.bsky.social
💬 What Do You Think? Could MIA reach a level where data owners use it as legal evidence? How might this affect LLM deployment? Let us know! #AI #LLM #NLProc
parameterlab.bsky.social
🌐 Implications for Data Privacy: Our findings have real-world relevance for data owners worried about unauthorized use of their content in model training. They can also support accountability when LLMs are evaluated on end tasks.
parameterlab.bsky.social
🔎 Better Results in Fine-Tuning: Fine-tuned models show even stronger MIA results. The table shows the performance at sentence level and for collections of 20 sentences, evaluated on Phi-2 fine-tuned for QA (https://huggingface.co/haritzpuerto/phi-2-dcot ).