David van Dijk
@vandijklab.bsky.social
Learning the rules of life. Assistant Professor of Medicine and Computer Science @ Yale
vandijklab.bsky.social
Right. We have done something similar in our previous work (CINEMA-OT), where we validated causal inferences using synthetic data for which we know the ground truth.
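(The idea in miniature, not CINEMA-OT itself: simulate data with a known treatment effect and check that the estimator recovers it. Names and numbers below are purely illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
confounder = rng.normal(size=n)            # background biological variation
treated = rng.random(n) < 0.5              # randomized "perturbation"
true_effect = 2.0
outcome = 1.5 * confounder + true_effect * treated + rng.normal(size=n)

# with randomized treatment, a difference in means recovers the ground truth
est = outcome[treated].mean() - outcome[~treated].mean()
print(f"true={true_effect}, estimated={est:.2f}")
```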
vandijklab.bsky.social
Right, and I do believe this is possible based on other experiments we have done where we translate between biological language and natural language. Your proposed experiment may be more specific and I’m interested in trying it.
vandijklab.bsky.social
Zero-shot is possible but obviously much harder, and it also depends very much on the specific system.
vandijklab.bsky.social
We have focused on fine-tuning on one immune-cell cytokine stimulation dataset and on (bulk) L1000. In both cases we show generalization by leaving out conditions (e.g. cytokine combos).
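(Roughly what "leaving out conditions" looks like in code; a toy sketch with made-up condition labels, not our actual pipeline:)

```python
import pandas as pd

# toy metadata: one row per cell, labeled with its cytokine condition
meta = pd.DataFrame({
    "cell_id": range(6),
    "combo": ["IFNG", "IL4", "IFNG+IL4", "TNF", "IFNG+TNF", "IL4"],
})
held_out = {"IFNG+IL4", "IFNG+TNF"}            # unseen cytokine combinations
train = meta[~meta["combo"].isin(held_out)]
test = meta[meta["combo"].isin(held_out)]      # generalization is measured here
```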
vandijklab.bsky.social
And the reasoning here is that if they improve, that shows our model generates meaningful data? That's interesting. It's a convenient way of validating without doing experiments, I guess.
vandijklab.bsky.social
I see. We haven't done this specific experiment comparing well-studied vs poorly studied genes. It's an interesting idea; we will look into it. I would expect that genes/cell types/tissues with a lot of training data, both expression and metadata, generalize better.
vandijklab.bsky.social
Yes. We showed that natural-language pretraining, versus training on cell sentences from scratch, significantly boosts performance.
In addition, in the spatial reasoning task (Fig. 6) we ran an ablation training with and without metadata; training with metadata performed significantly better.
vandijklab.bsky.social
Finally, asking a model to generate a “cell sentence” (e.g. for perturbation response prediction) is novel by design, since no LLM has encountered that representation in its training data.
vandijklab.bsky.social
Second, several test sets—such as Dataset Interpretation on held-out studies—use scRNA-seq datasets published after each model’s pretraining cutoff, giving us strong assurance that those examples weren’t seen during training.
vandijklab.bsky.social
We took several steps to ensure robust evaluation. First, we tested both open- and closed-source LLMs (GPT-4o, Gemini, LLaMA-3) on our benchmarks and found they perform poorly out of the box, indicating minimal overlap with pretraining corpora.
vandijklab.bsky.social
For this paper, we chose a prompt structure that helps the model learn perturbations effectively, but initial tests suggest the model handles prompt variations well as long as the data formatting is consistent—so we don't expect prompt engineering to be a major issue.
vandijklab.bsky.social
We'll formally test prompt robustness in future work, but from experience with earlier Cell2Sentence models, we've found minimal performance loss when using new or varied prompts. In general, we always train on a wide variety of prompts to avoid overfitting.
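(A rough illustration of what "a wide variety of prompts" means; these templates are made up for the example, not the paper's actual prompts:)

```python
import random

TEMPLATES = [
    "Predict the cell sentence after treating a {cell_type} with {drug}:",
    "A {cell_type} is perturbed with {drug}. Generate the resulting cell sentence:",
    "{drug}-treated {cell_type}; give the expected expression as a cell sentence:",
]

def make_prompt(cell_type: str, drug: str) -> str:
    # sample a template per training example so the model never overfits one phrasing
    return random.choice(TEMPLATES).format(cell_type=cell_type, drug=drug)

print(make_prompt("CD8 T cell", "IFN-gamma"))
```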
vandijklab.bsky.social
- For dataset interpretation, we evaluate on scRNA-seq studies published after the model was pretrained.
Performance drops in these settings let us estimate generalization gaps, but we're also interested in developing confidence measures in future work.
vandijklab.bsky.social
This is still an open challenge - we don't yet have confidence estimation built into the model, but we do evaluate C2S-Scale in out-of-distribution regimes. For example:
- In perturbation prediction, we test on unseen cell type–drug combinations and combinatorial perturbations.
vandijklab.bsky.social
So performance likely reflects both mechanistic pattern recognition and domain transfer from literature and metadata. Our training corpus was intentionally multimodal to support this integration, letting the model ground textual knowledge in expression-level representations.
vandijklab.bsky.social
Great question, it might be a combination of both. For tasks like scQA, the model must (i) interpret gene expression patterns from cell sentences (e.g., identify marker genes or activation signatures), and (ii) relate those to biological concepts learned from the textual domain.
vandijklab.bsky.social
Many downstream tasks (e.g. scQA) require the model to reason jointly over cell sentences and biological text/metadata. We also explored this in our spatial reasoning ablation studies, where interleaving training with gene interaction data improved accuracy over training with expression alone.
vandijklab.bsky.social
C2S-Scale interleaves gene expression (as "cell sentences") with biological text during training to enable reasoning across both modalities. This multimodal integration is a key difference from expression-only models and is important for complex tasks.
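For anyone new to the representation, a cell sentence is just the cell's genes rank-ordered by expression. A minimal sketch (gene names and counts are a toy example):

```python
import numpy as np

def to_cell_sentence(expr, gene_names, top_k=100):
    """Convert one cell's expression vector into a 'cell sentence':
    gene names ordered by descending expression, keeping up to top_k genes."""
    order = np.argsort(expr)[::-1][:top_k]            # highest-expressed first
    return " ".join(gene_names[i] for i in order if expr[i] > 0)

genes = ["CD3D", "CD8A", "GNLY", "NKG7", "MS4A1"]
counts = np.array([12.0, 7.5, 0.0, 3.1, 0.2])
print(to_cell_sentence(counts, genes))                # CD3D CD8A NKG7 MS4A1
```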
vandijklab.bsky.social
We thank our amazing team at Yale, Google Research, and Google DeepMind
vandijklab.bsky.social
What's next for C2S-Scale?
• True Multimodality: Integrating proteomics, epigenomics, imaging data 🖼️
• Deeper Biology: Modeling cell interactions, dynamics, & development ⏳
• Enhanced Trust: Improving interpretability & reliability ✅
• Community Tools: Building shared benchmarks & platforms 🏆
vandijklab.bsky.social
Let's build together! 🛠️ We're open-sourcing C2S-Scale to empower the community.
Models up to 1B parameters are already available on HF, and models up to 27B parameters will be released in the next few weeks!
huggingface.co/collections/... github.com/vandijklab/c...
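(Loading a checkpoint follows the standard transformers pattern; the model ID below is a placeholder, so grab an exact name from the collection:)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vandijklab/..."  # placeholder: pick a checkpoint from the HF collection
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "CD3D CD8A NKG7 GZMB"  # a (truncated) cell sentence
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```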
vandijklab.bsky.social
Beyond standard training, we used Reinforcement Learning (RL) 🤖 to fine-tune C2S-Scale.
Using GRPO + biological rewards, we specifically improved:
• Perturbation prediction accuracy 🧪
• Biological Q&A relevance ❓
Aligning LLMs with biological goals! ✅
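(Conceptually, a biological reward for a generated cell sentence can be as simple as this toy sketch: overlap with the measured post-perturbation response. Not our exact reward, just the flavor:)

```python
def biological_reward(generated: str, reference: str, k: int = 50) -> float:
    """Toy reward: fraction of the generated cell sentence's top-k genes
    that appear among the top-k genes of the measured response."""
    gen = generated.split()[:k]
    ref = set(reference.split()[:k])
    return sum(g in ref for g in gen) / k  # in [0, 1]; GRPO ranks these across samples

print(biological_reward("CD3D NKG7 GZMB", "CD3D GZMB IL2RA", k=3))  # ~0.67
```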
vandijklab.bsky.social
Size matters! 📈 We observed clear scaling laws: as model size increased from 410M to 27B parameters, performance consistently improved across tasks.
This confirms that LLMs learn better biological representations at scale with the C2S approach. It even works with efficient LoRA tuning! 💪
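(For the curious: the LoRA setup is the standard peft recipe. A minimal sketch with illustrative hyperparameters, shown here on a small Pythia base rather than our largest models:)

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-410m")
config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],  # the fused attention projection in Pythia
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapters are trained
```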