@simjeg.bsky.social
230 followers 36 following 16 posts
Senior LLM Technologist @NVIDIA Views and opinions are my own
simjeg.bsky.social
🎲 Did you know Yahtzee can be solved optimally in less than 100 lines of Python and in under 5 minutes on 2 vCPUs?

I built a @gradio-hf.bsky.social app so you can try it yourself: huggingface.co/spaces/simon...

The implementation is based on the excellent paper "An Optimal Strategy for Yahtzee" (Glenn, 2006)
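For the curious, here is a minimal sketch of the within-turn reroll expectimax for a single, fixed category (Chance). It is only one building block: Glenn's full solver wraps this kind of computation in a backward induction over all game states.

```python
# Minimal sketch: expected score of a single Yahtzee turn under optimal rerolls,
# for one fixed category (Chance = sum of the dice). The full Glenn (2006) solver
# wraps this expectimax in a backward induction over game states.
from functools import lru_cache
from itertools import combinations, product

def score_chance(dice):
    """Placeholder category: Chance scores the sum of the five dice."""
    return sum(dice)

@lru_cache(maxsize=None)
def turn_value(dice, rerolls):
    """Expected score of the best keep/reroll policy from this state."""
    best = score_chance(dice)  # option: stop rerolling and score now
    if rerolls == 0:
        return best
    # Try every multiset of dice to keep; reroll the rest and average outcomes.
    for k in range(len(dice)):  # k = number of dice kept
        for keep in set(combinations(dice, k)):
            n = len(dice) - k
            ev = sum(
                turn_value(tuple(sorted(keep + roll)), rerolls - 1)
                for roll in product(range(1, 7), repeat=n)
            ) / 6 ** n
            best = max(best, ev)
    return best

# Expected Chance score of a full turn: average over all initial rolls.
start = sum(
    turn_value(tuple(sorted(roll)), rerolls=2)
    for roll in product(range(1, 7), repeat=5)
) / 6 ** 5
print(f"Expected Chance score with optimal rerolls: {start:.3f}")
```

Memoizing on sorted dice tuples keeps the state space tiny (252 distinct hands of five dice).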
simjeg.bsky.social
Fresh news from kvpress, our open source library for KV cache compression 🔥

1. We published a blog post with @huggingface
2. We published a Space for you to try it
3. Following feedback from the research community, we added a bunch of presses and benchmarks

Links👇(1/2)
simjeg.bsky.social
How do you find the permutation of words that minimizes their perplexity as measured by an LLM? In this year's Kaggle Santa competition, I shared an approach that moves to a continuous space where you can apply gradient descent via REINFORCE: www.kaggle.com/code/simjeg/...
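To give a flavour of the idea (a toy sketch, not the Kaggle notebook): keep one continuous logit per word, sample permutations with the Gumbel / Plackett-Luce trick, use negative perplexity as the reward, and update the logits with REINFORCE. The `perplexity` function below is a placeholder you would back with an actual LLM.

```python
# Toy sketch: REINFORCE over word orderings. `perplexity` is a placeholder that
# you would replace with an LLM-based scorer (e.g. exp of the mean NLL).
import torch

words = ["sleigh", "the", "pulls", "a", "reindeer"]
REFERENCE = ["the", "reindeer", "pulls", "a", "sleigh"]  # pretend the LLM likes this

def perplexity(order):
    """Placeholder reward model: distance to REFERENCE instead of a real LLM score."""
    return float(sum(w != r for w, r in zip(order, REFERENCE)))

def sample_permutation(logits):
    """Gumbel trick: argsort of perturbed logits samples a Plackett-Luce permutation."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    perm = torch.argsort(logits + gumbel, descending=True)
    sorted_logits = logits[perm]
    logp = sum(sorted_logits[i] - torch.logsumexp(sorted_logits[i:], dim=0)
               for i in range(len(perm)))
    return perm, logp

logits = torch.zeros(len(words), requires_grad=True)  # continuous relaxation
opt = torch.optim.Adam([logits], lr=0.1)
baseline = 0.0

for step in range(300):
    perm, logp = sample_permutation(logits)
    order = [words[i] for i in perm]
    reward = -perplexity(order)                 # lower perplexity = higher reward
    baseline = 0.9 * baseline + 0.1 * reward    # moving baseline to reduce variance
    loss = -(reward - baseline) * logp          # REINFORCE estimator
    opt.zero_grad()
    loss.backward()
    opt.step()

print(" ".join(words[i] for i in torch.argsort(logits, descending=True)))
```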
simjeg.bsky.social
💡 We've just released KV cache quantization in kvpress, our open source package for KV cache compression. Check it out: github.com/NVIDIA/kvpress.

Special thanks to Arthur Zucker and Marc Sun from @huggingface.bsky.social for their support 🤗
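For readers new to the topic, here is what KV cache quantization boils down to conceptually (a generic illustration, not the kvpress code path): store K and V in int8 with a per-token scale and zero-point, and dequantize before the attention matmul.

```python
# Generic illustration of KV cache quantization (not the kvpress implementation):
# store K/V as int8 with per-token scale/zero-point, dequantize before attention.
import torch

def quantize_int8(x: torch.Tensor, dim: int = -1):
    """Affine int8 quantization along `dim`; returns (q, scale, zero_point)."""
    x_min = x.amin(dim=dim, keepdim=True)
    x_max = x.amax(dim=dim, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / 255.0
    zero_point = x_min
    q = ((x - zero_point) / scale).round().clamp(0, 255).to(torch.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return q.to(scale.dtype) * scale + zero_point

# Toy KV cache: (num_heads, seq_len, head_dim)
keys = torch.randn(8, 1024, 128)
values = torch.randn(8, 1024, 128)

qk, sk, zk = quantize_int8(keys)
qv, sv, zv = quantize_int8(values)

# ~4x smaller than float32 (~2x vs float16, plus a small per-token overhead),
# at the cost of a small reconstruction error:
err = (dequantize(qk, sk, zk) - keys).abs().max()
print(f"max abs reconstruction error on K: {err:.4f}")
```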
simjeg.bsky.social
Nice work! Identifying patterns could be done on the fly:
bsky.app/profile/simj...
simjeg.bsky.social
Of course it's different! A transformer is an MLP predicting the parameters of another MLP 😀
simjeg.bsky.social
You can reproduce this plot using this Colab notebook: colab.research.google.com/drive/1DbAEm.... We used this property to create a new KV cache compression method called Expected Attention in our kvpress repository.
simjeg.bsky.social
Hidden states in LLMs approximately follow normal distributions. Consequently, both queries and keys also follow a normal distribution, and if you replace all queries and keys by their average counterparts, this magically explains the slash pattern observed in attention matrices.
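A small numpy sketch of one way to see it (assuming RoPE positional encoding): with the average query and average key, the pre-softmax score between positions i and j depends only on i - j, so the matrix is constant along diagonals, i.e. slashes.

```python
# Sketch of the intuition: with RoPE, the score between the *mean* query and the
# *mean* key depends only on the relative position i - j, so the attention matrix
# becomes (lower-triangular) Toeplitz -- constant along diagonals ("slashes").
import numpy as np

d, n = 64, 256                       # head dim, sequence length
rng = np.random.default_rng(0)
q_mean = rng.normal(size=d)          # stand-ins for the average query / key
k_mean = rng.normal(size=d)

def rope(x, pos, base=10000.0):
    """Standard rotary embedding applied to a vector at position `pos`."""
    half = len(x) // 2
    freqs = base ** (-np.arange(half) / half)
    angle = pos * freqs
    x1, x2 = x[:half], x[half:]
    return np.concatenate([x1 * np.cos(angle) - x2 * np.sin(angle),
                           x1 * np.sin(angle) + x2 * np.cos(angle)])

Q = np.stack([rope(q_mean, i) for i in range(n)])
K = np.stack([rope(k_mean, j) for j in range(n)])
scores = np.tril(Q @ K.T / np.sqrt(d))   # causal pre-softmax scores

# Constant along each diagonal: scores[i, j] only depends on i - j.
print(np.allclose(np.diagonal(scores, -5), scores[5, 0]))
```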
simjeg.bsky.social
I created a DistillationPress that distills the (K, V) cache into a compressed (Kc, Vc) cache by minimizing ||A(q,K,V) - A(q,Kc,Vc)||². Check out my notebook here: github.com/NVIDIA/kvpre.... More work needs to be done, it's just a first step (3/3)
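In spirit, the objective looks like this (a self-contained toy sketch, not the actual notebook): initialize (Kc, Vc) from a subset of (K, V) and optimize them by gradient descent so that attention outputs match on sampled queries.

```python
# Toy sketch of the distillation objective: learn a compressed cache (Kc, Vc)
# that mimics the attention outputs of the full cache (K, V).
import torch

torch.manual_seed(0)
d, n, nc = 64, 1024, 256                 # head dim, full length, compressed length

K = torch.randn(n, d)
V = torch.randn(n, d)

def attend(q, K, V):
    """Single-head attention output for a batch of queries q: (b, d)."""
    w = torch.softmax(q @ K.T / d**0.5, dim=-1)
    return w @ V

# Initialize the compressed cache from a random subset of the original pairs.
idx = torch.randperm(n)[:nc]
Kc = K[idx].clone().requires_grad_(True)
Vc = V[idx].clone().requires_grad_(True)
opt = torch.optim.Adam([Kc, Vc], lr=1e-2)

for step in range(500):
    q = torch.randn(128, d)              # proxy queries; ideally sampled like real ones
    loss = (attend(q, K, V) - attend(q, Kc, Vc)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final distillation loss: {loss.item():.4f}")
```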
simjeg.bsky.social
KV cache quantization? KV cache pruning? KV cache approximation? Replace "KV cache" with "MLP" and you'll see most of this research has already been explored 🤯 So I gave it a try within our new kvpress repo 👇 (2/3)
simjeg.bsky.social
Ever noticed that the attention mechanism in transformers is essentially a two-layer MLP? 🤔
A(q, K, V) = V @ softmax(K / √d @ q)
Weights: K / √d and V
Nonlinearity: softmax
💡This offers fresh insights into KV cache compression research 🧵(1/3)
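You can check the correspondence numerically: for a single query, attention is exactly a two-layer MLP whose weights are K/√d and V, with softmax as the nonlinearity.

```python
# Check: attention for one query is a 2-layer MLP with weights K/√d and V and a
# softmax nonlinearity. Shapes: K is (n, d) with keys as rows, V is (d, n) with
# values as columns, matching A(q, K, V) = V @ softmax(K / √d @ q).
import torch

d, n = 64, 128
q = torch.randn(d)
K = torch.randn(n, d)       # keys as rows
V = torch.randn(d, n)       # values as columns

# Standard attention for one query...
attn = V @ torch.softmax(K @ q / d**0.5, dim=-1)

# ...is literally a two-layer MLP: Linear(K/√d) -> softmax -> Linear(V)
layer1 = torch.nn.Linear(d, n, bias=False)
layer2 = torch.nn.Linear(n, d, bias=False)
with torch.no_grad():
    layer1.weight.copy_(K / d**0.5)
    layer2.weight.copy_(V)
mlp_out = layer2(torch.softmax(layer1(q), dim=-1))

print(torch.allclose(attn, mlp_out, atol=1e-5))
```

The twist compared to a regular MLP is that these "weights" K and V are input-dependent (they come from the context), which is exactly why the KV cache grows with sequence length.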
simjeg.bsky.social
This release also introduces a new method we developed: Expected Attention! 🎯 By leveraging the normal distribution of LLM hidden states, it measures the importance of each key-value pair. Learn more in this notebook: github.com/NVIDIA/kvpre... (4/4)
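My loose paraphrase of the idea (see the notebook for the actual method): if future queries are modeled as q ~ N(μ, Σ), the expected unnormalized attention weight of a key k has the closed form exp(μᵀk/√d + kᵀΣk/(2d)), which can be used to rank key-value pairs.

```python
# Hedged sketch of an "expected attention" style score (not the exact kvpress
# method): model future queries as Gaussian and rank keys by
# E_q[exp(q.k/√d)] = exp(mu.k/√d + k^T Sigma k / (2d)).
import torch

d, n = 64, 1024
K = torch.randn(n, d)
V = torch.randn(n, d)

# Gaussian model of future queries, e.g. fitted on observed queries / hidden states.
mu = torch.randn(d)
Sigma = torch.eye(d) * 0.5

log_score = (K @ mu) / d**0.5 + ((K @ Sigma) * K).sum(-1) / (2 * d)

# Keep the top 50% most "expected-attended" key-value pairs.
keep = log_score.topk(n // 2).indices
K_pruned, V_pruned = K[keep], V[keep]
print(K_pruned.shape, V_pruned.shape)
```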
simjeg.bsky.social
kvpress aims to help researchers and developers create and benchmark KV cache compression techniques, offering a user-friendly repo built on 🤗 Transformers. All implemented methods are training-free and model-agnostic (3/4)
simjeg.bsky.social
Long-context LLMs are resource-heavy due to KV cache growth: e.g., 1M tokens for Llama 3.1-70B (float16) needs 330GB of memory 😬. This challenge has driven intense research into KV cache compression, with many submissions to #ICLR2025. (2/4)
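The back-of-the-envelope math behind that number, using Llama 3.1-70B's architecture (80 layers, 8 KV heads via GQA, head dimension 128):

```python
# Back-of-the-envelope KV cache size for Llama 3.1-70B in float16.
layers, kv_heads, head_dim, bytes_fp16 = 80, 8, 128, 2
tokens = 1_000_000

bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_fp16   # 2 = key + value
total_gb = bytes_per_token * tokens / 1e9
print(f"{bytes_per_token / 1e3:.0f} kB per token -> {total_gb:.0f} GB for 1M tokens")
```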
simjeg.bsky.social
🚀 Excited to announce KVPress — our open-source library for efficient LLM KV cache compression!
👉 Check it out (and drop a ⭐): github.com/NVIDIA/kvpress
🔗 Full details in the thread 🧵 (1/4)