Ben Hoover
@bhoov.bsky.social
96 followers 76 following 9 posts
PhD student@GA Tech; Research Engineer @IBM Research. Thinking about Associative Memory, Hopfield Networks, and AI.
Pinned
bhoov.bsky.social
Excited to share "Dense Associative Memory through the Lens of Random Features" accepted to #neurips2024🎉

DenseAMs need new weights for each stored pattern, hurting scalability. Kernel methods let us add memories without adding weights!

Distributed memory for DenseAMs, unlocked🔓
Reposted by Ben Hoover
krotov.bsky.social
Now that ICML papers are submitted and we are in the midst of discussions on whether scaling is enough or new architectural/algorithmic ideas are needed, what better time to submit your best work to our workshop on New Frontiers in Associative Memory @iclr-conf.bsky.social?
bhoov.bsky.social
I'm at #NeurIPS2024! Come chat with us about random features and DenseAMs, East Hall #3507, today (Fri Dec 13) 11am-2pm!
Reposted by Ben Hoover
krvarshney.bsky.social
If you’re headed to NeurIPS 2024 and want to learn about IBM Research Human-Centered Trustworthy AI, there are many, many opportunities to do so.

1. Start with the official NeurIPS explorer by @henstr.bsky.social and @benhoover.bsky.social. It is infoviz par excellence. neurips2024.vizhub.ai
Reposted by Ben Hoover
ncoop57.bsky.social
As R&D staff @ answer.ai, I work a lot on boosting productivity with AI. A common theme that always comes up is the combination of human+AI. This combination proved to be powerful in our new project ShellSage, which is an AI terminal buddy that learns and teaches with you. A 🧵
bhoov.bsky.social
When you say AM decision boundaries, do you mean the "ridge" that separates basins of attraction? Not sure I understand the pseudomath.
bhoov.bsky.social
Interesting -- when you say inversion, do you mean taking the strict inverse of the random projection? Our work is not just a random projection for dimensionality reduction, but a random mapping to a feature space that approximates the original AM's energy.
Reposted by Ben Hoover
henstr.bsky.social
🎺 Here comes the official 2024 NeurIPS paper browser:
- browse all NeurIPS papers in a visual way
- select clusters of interest and get a cluster summary
- ZOOOOM in
- filter by human-assigned keywords
- filter by substring (authors, titles)

neurips2024.vizhub.ai

#neurips by IBM Research Cambridge
[Screenshots: overview of the paper browser with a reinforcement-learning cluster selected; papers filtered to the keyword "physical models - physics"; papers filtered by author "Hoover" with detail view; zoomed-in view.]
bhoov.bsky.social
There is of course a trade-off. DrDAM approximates the energy landscape poorly:
1️⃣ far from the stored memories
2️⃣ where it is “spiky” (i.e., low temperature/high beta)

We need more random features Y to reconstruct highly occluded/correlated data!
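A toy check of this trade-off (a sketch under assumptions: a Gaussian-kernel DenseAM approximated with random Fourier features; `D`, `beta`, and all names here are mine, not the paper's benchmark): the random-feature error on a single kernel value shrinks roughly like 1/√D, while high beta makes the true kernel values tiny away from memories, so spiky landscapes need a larger D before the relative error is usable.

```python
import numpy as np

# Toy check: random-feature error on one Gaussian-kernel value vs. feature count D.
rng = np.random.default_rng(0)
d, beta = 64, 2.0
x = rng.standard_normal(d)
y = x + 0.1 * rng.standard_normal(d)               # a query near a stored pattern
true_k = np.exp(-0.5 * beta * np.sum((x - y) ** 2))

for D in [256, 1024, 4096, 16384]:
    errs = []
    for _ in range(20):                            # average over feature draws
        W = rng.standard_normal((D, d)) * np.sqrt(beta)
        b = rng.uniform(0.0, 2 * np.pi, D)
        phi = lambda v: np.sqrt(2.0 / D) * np.cos(W @ v + b)
        errs.append(abs(phi(x) @ phi(y) - true_k))
    print(D, np.mean(errs))                        # error decays roughly as 1/sqrt(D)
```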
bhoov.bsky.social
DrDAM can meaningfully approximate the memory retrievals of MrDAM! Shown are reconstructions of occluded images from TinyImageNet, retrieved by strictly minimizing the energies of both DrDAM and MrDAM.
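A minimal sketch of retrieval by energy descent (assumptions: plain gradient descent on a DrDAM-style energy E(q) = -φ(q)·T built from random Fourier features; names, step sizes, and dimensions are mine, not the paper's reference code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, D, beta = 64, 10, 16384, 0.5                # pattern dim, #patterns, #features
patterns = rng.standard_normal((K, d))

W = rng.standard_normal((D, d)) * np.sqrt(beta)   # random Fourier features for the
b = rng.uniform(0.0, 2 * np.pi, D)                # Gaussian kernel (Rahimi & Recht)
phi = lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)
T = sum(phi(x) for x in patterns)                 # superposed (distributed) memory

q = patterns[0] + 0.2 * rng.standard_normal(d)    # noisy query near pattern 0
print("before:", np.linalg.norm(q - patterns[0]))
for _ in range(300):
    grad = np.sqrt(2.0 / D) * (W.T @ (T * np.sin(W @ q + b)))  # d/dq of -phi(q)@T
    q -= 0.5 * grad                                # descend the approximate energy
print("after:", np.linalg.norm(q - patterns[0]))  # typically much closer to pattern 0
```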
bhoov.bsky.social
MrDAM energies can be decomposed into:
1️⃣ A similarity function between stored patterns & the noisy input
2️⃣ A rapidly growing separation function (e.g., exponential)

Together, they reveal kernels (e.g., the RBF kernel) that can be approximated via the kernel trick & random features (Rahimi & Recht, 2007)
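In symbols, a minimal sketch of this decomposition (assumed notation using the common log-sum-exp DenseAM energy; not necessarily the paper's exact definitions):

```latex
% Stored patterns \xi_\mu, query q, inverse temperature \beta (assumed notation).
% Similarity = negative squared distance; separation = exponential:
E(q) = -\frac{1}{\beta} \log \sum_{\mu=1}^{K}
       \exp\!\Big( -\tfrac{\beta}{2} \lVert \xi_\mu - q \rVert^2 \Big)
% Each summand is a Gaussian (RBF) kernel value, so a random feature map \varphi with
%   \exp\!\big( -\tfrac{\beta}{2} \lVert x - y \rVert^2 \big) \approx \varphi(x)^\top \varphi(y)
% collapses the sum over memories into one inner product with a superposed vector:
%   E(q) \approx -\frac{1}{\beta} \log \Big( \varphi(q)^\top \sum_\mu \varphi(\xi_\mu) \Big)
```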
bhoov.bsky.social
Why say “Distributed”?🤔

In traditional Memory representations of DenseAMs (MrDAM), one row of the weight matrix stores one pattern. In our new Distributed representation (DrDAM), patterns are entangled via superposition, “distributed” across all dimensions of a single featurized memory vector.
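The contrast in code, as a hedged sketch (Gaussian-kernel energy and all variable names are assumptions, not the paper's reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, D, beta = 64, 100, 8192, 2.0
patterns = rng.standard_normal((K, d))            # MrDAM: one weight row per pattern

W = rng.standard_normal((D, d)) * np.sqrt(beta)   # random Fourier features
b = rng.uniform(0.0, 2 * np.pi, D)
phi = lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)
T = sum(phi(x) for x in patterns)                 # DrDAM: all K patterns entangled
                                                  # in ONE fixed-size vector

q = patterns[0] + 0.1 * rng.standard_normal(d)
E_mr = -np.sum(np.exp(-0.5 * beta * ((patterns - q) ** 2).sum(axis=1)))
E_dr = -phi(q) @ T                                # one dot product, independent of K
print(E_mr, E_dr)                                 # roughly agree for large D
```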
bhoov.bsky.social
Excited to share "Dense Associative Memory through the Lens of Random Features" accepted to #neurips2024🎉

DenseAMs need new weights for each stored pattern, hurting scalability. Kernel methods let us add memories without adding weights!

Distributed memory for DenseAMs, unlocked🔓
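The headline claim in miniature (a toy under the same assumed Gaussian-kernel setup as the sketches above, not the paper's implementation): storing another pattern is a vector addition, never a new weight row.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, beta = 64, 4096, 1.0
W = rng.standard_normal((D, d)) * np.sqrt(beta)   # frozen random projection
b = rng.uniform(0.0, 2 * np.pi, D)
phi = lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

T = np.zeros(D)                                   # the whole memory: one vector
for _ in range(1000):                             # store 1000 patterns ...
    T += phi(rng.standard_normal(d))              # ... by superposition
assert T.size == D                                # parameter count never grew
```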