dribnet
@drib.net
910 followers 140 following 48 posts
creations with code and networks
Reposted by dribnet
arnicas.bsky.social
Interesting work from @drib.net exploring and clustering image concepts with Gemma Scope got.drib.net/latents/
Reposted by dribnet
cvprconference.bsky.social
AI Art Winner – Tom White
Congratulations to @dribnet.bsky.social for winning a #CVPR2025 AI Art Award for "Atlas of Perception." See it and other works in the CVPR AI Art Gallery in Hall A1 and online. thecvf-art.com @elluba.bsky.social
Reposted by dribnet
elluba.bsky.social
A little preview of our @cvprconference.bsky.social AI art gallery on @lerandomart.bsky.social 👀

We will premiere @drib.net 's crazy flower windmill sculpture - what an honour 🥰🌻

Read my interview with three of the gallery artists: bit.ly/3SKDaNL

#CVPR2025 #creativeAI

@monkantony.bsky.social
dribnet @drib.net · Feb 5
Data browser below (beware: the public text-to-image prompt dataset may include questionable content). Calling this "DEI" is certainly a misnomer, but with SAE latents there's likely no word that exactly fits a "category" discovered through purely unsupervised training. got.drib.net/maxacts/dei/
Maximum Activations: DEI
Gemma-2-2B: DEI
got.drib.net
dribnet @drib.net · Feb 5
Finally I run a large multi-diffusion process, placing each prompt where it landed in the UMAP cluster with a size proportional to its original cossim score - then composite that with the edge graph and overlay the circle. Here's a heatmap of where elements land alongside the completed version.
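A toy version of the placement-and-sizing idea (not the multi-diffusion code itself): accumulate one blob per prompt at its UMAP position, with blob radius tracking the cossim score, to get exactly this kind of heatmap. `coords` and `scores` are hypothetical stand-ins for the earlier steps' outputs.

```python
import numpy as np
import matplotlib.pyplot as plt

def layout_heatmap(coords, scores, size=512):
    """Accumulate one Gaussian blob per prompt; blob radius tracks its score."""
    canvas = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    lo, hi = coords.min(0), coords.max(0)
    pts = (coords - lo) / (hi - lo) * (size - 1)   # UMAP coords -> pixel space
    for (x, y), s in zip(pts, scores):
        sigma = 3 + 20 * (s - scores.min()) / (np.ptp(scores) + 1e-9)
        canvas += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    return canvas

# plt.imshow(layout_heatmap(coords, scores), cmap="magma"); plt.axis("off")
```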
dribnet @drib.net · Feb 5
I also pre-process the 250 prompts to find which words within the prompts have high activations. These are normalized and the text is updated - here shown with {{brackets}}. This will trigger a downstream LoRA and influence coloring to highlight the relevant semantic elements (still very much a WIP).
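A minimal sketch of the bracket-markup step, assuming per-token activations for the latent are already available; `tokens` and `acts` stand in for the real pipeline's outputs.

```python
def mark_high_activation_tokens(tokens, acts, threshold=0.5):
    """Normalize per-token activations to [0, 1] and wrap tokens above the
    threshold in {{double braces}} so a downstream LoRA can key on them."""
    lo, hi = min(acts), max(acts)
    norm = [(a - lo) / (hi - lo + 1e-9) for a in acts]
    return " ".join(
        "{{" + tok + "}}" if a >= threshold else tok
        for tok, a in zip(tokens, norm)
    )

# >>> mark_high_activation_tokens(["a", "red", "hammer"], [0.1, 0.2, 3.0])
# 'a red {{hammer}}'
```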
dribnet @drib.net · Feb 5
Next step is to cluster those top 250 prompts using this embedding representation. I use a customized UMAP which constrains the layout based on the cossim scores - the long-tail extremes go in the center. This is consistent with mech-interp practice of focusing on the maximum activations.
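A crude approximation of that constrained layout (the real version customizes UMAP internals): vanilla UMAP supplies the angular structure, and each point's radius is remapped from its cossim rank so the extremes land at the center.

```python
import numpy as np
import umap  # pip install umap-learn

def constrained_layout(embeddings, scores, **umap_kw):
    """Vanilla UMAP for angular structure; radius remapped so the
    highest-cossim prompts land at the center of the circle."""
    coords = umap.UMAP(**umap_kw).fit_transform(embeddings)
    centered = coords - coords.mean(0)
    angles = np.arctan2(centered[:, 1], centered[:, 0])
    ranks = np.argsort(np.argsort(-np.asarray(scores)))  # rank 0 = top score
    radii = ranks / max(len(scores) - 1, 1)              # top score -> r = 0
    return np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
```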
dribnet @drib.net · Feb 5
For now I'm using a dataset of 600k text-to-image prompts as my data source (mean pooled embedding vectors). The SAE latent is converted to an LLM vector and the cossim across all 600k prompts examined. The distribution is a near-perfect gaussian; zooming in on the right tail - we'll be skimming off the top 250, shown in red.
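A rough sketch of that skim, assuming `latent_dir` is the SAE latent already mapped into the LLM embedding space and `prompt_embs` holds the 600k mean-pooled prompt vectors (both hypothetical names):

```python
import numpy as np

def top_prompts(latent_dir, prompt_embs, k=250):
    """Cosine similarity of all prompt embeddings against the latent
    direction; returns indices and scores of the top-k right-tail prompts."""
    d = latent_dir / np.linalg.norm(latent_dir)
    e = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    sims = e @ d
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]
```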
dribnet @drib.net · Feb 5
The first step, of course, is to find an interesting direction in LLM latent space. In this case, I came across a report of a DEI SAE latent in Gemma2-2b. Neuronpedia confirms this latent centers on "topics related to race, ethnicity, and social rights issues" www.neuronpedia.org/gemma-2-2b/2...
www.neuronpedia.org
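A hedged sketch of turning a reported latent index into a direction vector: the repo path, file layout, and placeholder index below are assumptions based on the public Gemma Scope release, not this exact pipeline.

```python
import numpy as np
from huggingface_hub import hf_hub_download

# Assumed Gemma Scope layout; pick the layer/width matching the Neuronpedia page.
path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-res",
    filename="layer_20/width_16k/average_l0_71/params.npz",
)
params = np.load(path)
LATENT_IDX = 0  # placeholder; the real index comes from the Neuronpedia report
latent_dir = params["W_dec"][LATENT_IDX]  # decoder row = direction in residual space
```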
dribnet @drib.net · Feb 5
Gemma2 2B: DEI Vector
let's look at some of the data pipeline for this 🧵
dribnet @drib.net · Feb 3
The refusal vector is one of the strongest recent mechanistic interpretability results and it could be interesting to investigate further how it differs based on model size, architecture, training, etc.
Interactive Explorer below (warning: some disturbing content).
got.drib.net/maxacts/refu...
Maximum Activations: Refusal
Gemma-2-2B-IT: Refusal in Language Models
got.drib.net
dribnet @drib.net · Feb 3
Using their publicly released Gemma-2 refusal vector, this finds 100 contexts that trigger a refusal response. Predictably this includes violent topics, but strong reactions are often elicited by mixing harmful and innocuous subjects, such as "a Lego set Meth Lab" or "Ronald McDonald wielding a firearm".
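One way to sketch the trigger-finding, assuming a hypothetical `get_resid` helper that returns residual-stream activations at the layer the refusal vector was extracted from:

```python
import numpy as np

def refusal_score(context, refusal_dir, get_resid):
    """Project the last-token residual activation onto the unit refusal
    direction; the largest scores mark the strongest refusal triggers."""
    x = get_resid(context)[-1]                    # (d_model,) last-token activation
    r = refusal_dir / np.linalg.norm(refusal_dir)
    return float(x @ r)

# keep the 100 strongest triggers from a candidate pool:
# top100 = sorted(pool, key=lambda c: refusal_score(c, refusal_dir, get_resid))[-100:]
```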
dribnet @drib.net · Feb 3
Training LLMs includes teaching them to sometimes respond "I'm sorry, but I can't answer that". AI research calls this "refusal", and it is one of many separable proto-concepts in these systems. This Arditi et al. paper investigates refusal and is the basis for this work arxiv.org/abs/2406.11717
Refusal in Language Models Is Mediated by a Single Direction
Conversational large language models are fine-tuned for both instruction-following and safety, resulting in models that obey benign requests but refuse harmful ones. While this refusal behavior is wid...
arxiv.org
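The paper's core operation can be written in a few lines: "directional ablation" removes the refusal component from an activation, x' = x - (x · r̂) r̂, which suppresses refusals, while adding the direction back in induces them. A minimal sketch:

```python
import numpy as np

def ablate_direction(x, r):
    """Directional ablation: subtract the projection of activation x onto
    the unit refusal direction r_hat, zeroing out the refusal component."""
    r_hat = r / np.linalg.norm(r)
    return x - (x @ r_hat) * r_hat
```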
dribnet @drib.net · Feb 3
Gemma-2 9B latent visualization: Refusal
(screen print version)
dribnet @drib.net · Feb 1
Seems like a broader set of triggers for this one; I saw hammer & sickle, Karl Marx, and the Cultural Revolution - but also Soviet military, worker rights, raised fists, and even Bernie Sanders. Highly activating tokens are shown in {curly braces} - such as this incidental combination of red with {hammer}.
dribnet @drib.net · Feb 1
Browser below. This one didn't elicit the usual long-tail exemplars, so it looks visually flatter with the center scaling missing. One gut theory on why: the model (and SAE) are multilingual, so the latent might only strongly trigger on references in Chinese, which this dataset lacks. got.drib.net/maxacts/ccp/
Maximum Activations: CCP
DeepSeek-R1-Distill-Llama-8B: Steering with AME(R1)CA: CCP_FEATURE
got.drib.net
dribnet @drib.net · Feb 1
This is the flipside of yesterday's DeepSeek visualization, drawn from the same source: Tyler Cosgrove's AME(R1)CA proof of concept, which adjusts R1 responses *away* from CCP_FEATURE and *toward* the AMERICA_FEATURE github.com/tylercosgrov...
GitHub - tylercosgrove/ame-r1-ca: Use a sparse autoencoder to steer R1 towards American values.
Use a sparse autoencoder to steer R1 towards American values. - tylercosgrove/ame-r1-ca
github.com
dribnet @drib.net · Feb 1
DeepSeek R1 latent visualization: AME(R1)CA (CCP_FEATURE)
dribnet @drib.net · Jan 31
embrace the slop 🫅
dribnet @drib.net · Jan 31
cranked up "insane details" a notch or two for this one 😁
bsky.app/profile/drib...
dribnet @drib.net · Jan 31
DeepSeek R1 latent visualization: AME(R1)CA (AMERICAN_FEATURE)
dribnet @drib.net · Jan 31
lol - definitely looking forward to speed-running more R1 latents as people find them, especially some more related to the chain-of-thought process. but so far this is the first one I found in the wild.
dribnet @drib.net · Jan 31
The interactive explorer is below - the latent also seems activated by references like "Stars & Stripes" and flags of other nations, such as the "Union Jack". This sort of slippery ontology is common when examining SAE latents closely, as they often don't align as expected. got.drib.net/maxacts/amer...
Maximum Activations: American
DeepSeek-R1-Distill-Llama-8B: Steering with AME(R1)CA: AMERICAN_FEATURE
got.drib.net
dribnet @drib.net · Jan 31
As before, the visualization shows hundreds of clustered contexts activating this latent, with strongest activations at the center. The red color highlights the semantically relevant parts of the image according to the LLM. In this case, it's often flags or other symbolic objects.
dribnet @drib.net · Jan 31
This "AMERICAN_FEATURE" latent is one of 65536 automatically discovered by a Sparse AutoEncoder (SAE) trained by qresearch.ai and now on HuggingFace. This is one of the first attempts of applying Mechanistic Interpretability to newly released DeepSeek R1 LLM models. huggingface.co/qresearch/De...
qresearch/DeepSeek-R1-Distill-Llama-8B-SAE-l19 · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
huggingface.co
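For reference, the standard SAE encode step that produces those 65536 feature activations looks roughly like this; the weight names follow common SAE releases, and the qresearch repo's exact format may differ.

```python
import numpy as np

def sae_encode(x, W_enc, b_enc, b_dec):
    """x: (d_model,) residual activation -> (65536,) sparse feature vector;
    the ReLU keeps most features at exactly zero."""
    return np.maximum((x - b_dec) @ W_enc + b_enc, 0.0)

# "AMERICAN_FEATURE" is then one index into that 65536-dim vector, and its
# decoder row W_dec[idx] gives the direction used for steering.
```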
dribnet @drib.net · Jan 31
This uses a DeepSeek R1 latent discovered yesterday (!) by Tyler Cosgrove, which can be used to steer R1 "toward american values and away from those pesky chinese communist ones". Code for trying out steering is in his repo here github.com/tylercosgrov...
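The steering idea itself boils down to adding the feature's decoder direction into the residual stream during generation. A hedged PyTorch sketch, not the repo's actual code; the layer index and scale are illustrative assumptions (the repo name suggests a layer-19 SAE):

```python
import torch

def make_steering_hook(direction, scale=8.0):
    """Add a scaled unit feature direction to a decoder layer's hidden states;
    a negative scale steers away from the feature instead."""
    d = direction / direction.norm()
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * d.to(hidden.device, hidden.dtype)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return hook

# handle = model.model.layers[19].register_forward_hook(make_steering_hook(W_dec[idx]))
# ... model.generate(...) ...
# handle.remove()
```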