Former Bard College prof
For my after-work alter-ego, see @elstersen.bsky.social
Support Ukraine! 🇺🇦
import matplotlib.pyplot as plt

# Iterate the logistic map x -> a*x*(1-x) k times, starting from x
f = lambda x, k, a: x if k == 0 else f(a * x * (1 - x), k - 1, a)

# Sweep a from 2.5 to 4.0, keeping iterates 17..35 of each orbit:
# a quick-and-dirty bifurcation diagram
y = [f(.21, 17 + i % 19, 2.5 + 1.5 * i / 10000) for i in range(10000)]
plt.plot(y, '.', markersize=1)
plt.show()
www.dwarkesh.com/p/ilya-sutsk...
We love to trash a dominant hypothesis, but we need to look for evidence against the manifold hypothesis elsewhere:
This elegant work doesn't show that neural dynamics are high-D, nor that we should stop using PCA.
It's quite the opposite!
(thread)
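A minimal sketch of the kind of argument at stake (my own toy example, not the paper's analysis): if neural activity lives near a low-dimensional manifold, PCA on the recorded population should concentrate variance in a handful of components. Here 3-D latent trajectories are embedded linearly in a 100-D "neural" space, and the hypothetical dimensions (`T`, `latent_dim`, `n_units`, noise scale) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3-D latent random-walk trajectories embedded linearly
# in a 100-D "neural" space, plus a little observation noise.
T, latent_dim, n_units = 2000, 3, 100
latents = np.cumsum(rng.standard_normal((T, latent_dim)), axis=0)
embedding = rng.standard_normal((latent_dim, n_units))
data = latents @ embedding + 0.1 * rng.standard_normal((T, n_units))

# PCA via SVD of the mean-centered data matrix
centered = data - data.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = s**2 / np.sum(s**2)

# The top few components carry almost all the variance,
# consistent with low-dimensional structure.
print(explained[:5].round(4))
```

The point of the sketch: variance concentrating in the top components is exactly the signature PCA is good at detecting, which is why dropping it would be premature.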
And what can we do to make this place better?
Easy for me to say, I know, but I still think atproto has a chance.
(1) find one
(2) found one
(3) fund one
Unfortunately (3) is not an option for me, (2) is unlikely, so I'm kinda stuck with (1)
en.wikipedia.org/wiki/Opaque_...
This is Nano Banana, the "ultra-realistic" image generation model, responding to that prompt
arxiv.org/pdf/2410.03972
It started from a question I kept running into:
When do RNNs trained on the same task converge/diverge in their solutions?
🧵⬇️
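One way to make the convergence question concrete (a toy sketch under my own assumptions, not necessarily the thread's method): run two networks from different seeds and compare their hidden-state subspaces with a CCA-style score, where 1 means identical subspaces. The "RNNs" below are stand-in noise-driven tanh dynamical systems, and `cca_similarity` is a hypothetical helper, not a library function.

```python
import numpy as np

def run_toy_rnn(seed, T=500, n=50):
    # Stand-in "RNN": noise-driven tanh dynamics with random recurrence
    rng = np.random.default_rng(seed)
    W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)
    h, states = np.zeros(n), []
    for _ in range(T):
        h = np.tanh(W @ h + 0.5 * rng.standard_normal(n))
        states.append(h.copy())
    return np.array(states)  # (T, n) hidden-state trajectories

def cca_similarity(X, Y, k=10):
    # Mean cosine of the top-k principal angles between the
    # column spaces of the mean-centered trajectory matrices.
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[:k].mean()

A = run_toy_rnn(seed=0)
B = run_toy_rnn(seed=1)
print(cca_similarity(A, A))  # 1.0 by construction
print(cca_similarity(A, B))  # lower: the two "solutions" differ
```

High similarity across seeds would suggest convergent solutions; low similarity, seed-dependent ones.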