Jablonka Lab (Lab for AI for Materials)
@jablonkagroup.bsky.social
29 followers 3 following 20 posts
Team-run account for the group led by @kjablonka.com
jablonkagroup.bsky.social
Just as human chemists learn through diverse materials and experiences (textbooks, laboratory work, research papers, and problem-solving), ChemPile's varied content types aim to provide a comprehensive learning experience.
arXiv: arxiv.org/pdf/2505.12534
read more: chempile.lamalab.org
jablonkagroup.bsky.social
We introduce the ChemPile, the largest natural language chemistry dataset (>75B tokens).
dataset: huggingface.co/collections/...
jablonkagroup.bsky.social
Training large language models for chemistry is bottlenecked by one critical problem: there is no unified dataset that connects all chemical domains.
jablonkagroup.bsky.social
We're excited to present our posters today at the AI4Mat workshop at #ICLR25 #AI4Mat #Singapore
jablonkagroup.bsky.social
LAMA Lab at ICLR in Singapore!
#iclr2025 #singapore #AI #ML #chemistry #iclr
jablonkagroup.bsky.social
we're ready for spring! team building is always more fun when it's outside ☀️
jablonkagroup.bsky.social
Day 1 of the Foundation Models workshop hosted by the ELLIS Winter School!
jablonkagroup.bsky.social
Not sure where to start? Our documentation has step-by-step guides for every scenario:
lamalab-org.github.io/chembench/
jablonkagroup.bsky.social
✨Public Datasets & Leaderboard – All datasets are live on HuggingFace, alongside a real-time performance leaderboard! huggingface.co/datasets/jab...
jablonkagroup.bsky.social
What's new?
✨Multimodal Support – Handle text, data, and chemistry-specific inputs seamlessly
✨Redesigned API – Now standardized on LiteLLM messages for effortless integration
✨Custom System Prompts – Tailor benchmarks to your unique use case
jablonkagroup.bsky.social
🚀ChemBench just leveled up!
We’re thrilled to announce the latest release of ChemBench, now smarter and smoother! Dive into benchmarking any chemistry AI model with our revamped framework, designed for flexibility and ease.
#ChemistryAI #MachineLearning #OpenScience #Innovation
jablonkagroup.bsky.social
🌟LLM limitations persist: Still lagging in 3D molecular spatial reasoning
#LLMs #MachineLearning #OpenScience
jablonkagroup.bsky.social
🌟System prompt insights: Ablation studies show no effect on evaluation outcomes
🌟VLLMs dominate: Outperform specialized models like Decimer in benchmarks
jablonkagroup.bsky.social
🚀Our revised MaCBench paper is now on arXiv! arxiv.org/pdf/2411.16955

Key updates!
🌟Robust reproducibility: 5x experiment runs + error bars for statistical confidence
🌟Full dataset & leaderboard: Now live on HuggingFace with model comparisons huggingface.co/spaces/jablo...
Fig A: Bar plot of model performance comparison with error bars.
Fig B: Radar plot of each model's relative performance per subtopic.
Fig C: MaCBench leaderboard hosted on HuggingFace Spaces.
jablonkagroup.bsky.social
For instance, one would expect vision models to outperform text-only models on spatial reasoning tasks, such as identifying the correct isomeric relationship between two compounds.

But this is not the case!
jablonkagroup.bsky.social
But we did not stop there! We ran ablations to understand the bottlenecks limiting applicability.
We compared different modalities, multi-step vs. single-step reasoning, guided prompting, and more.
jablonkagroup.bsky.social
We observed a striking disparity in performance across tasks. Models can identify lab equipment but struggle with identifying safety violations in real-life laboratory scenarios.
jablonkagroup.bsky.social
We and M3RG-Group from IIT Delhi created MaCBench: a multimodal materials and chemistry benchmark. (2137 questions)

We focus on tasks we consider crucial for scientific development: practical lab scenarios, spectral analysis, US patents, and more.
jablonkagroup.bsky.social
Are Vision Language Models ready for scientific research?
🧑‍🔬🧪

We compared leading VLLMs on the three pillars of chemical and materials science discovery: data extraction, lab experimentation, and data interpretation.
arxiv.org/abs/2411.16955