Nikhil Soni
nikhilsoni.bsky.social
ML @ Apple. Focused on Simulation, LLM agents and Cognitive architectures.
Great presentation by Ilya Sutskever yesterday at the #NeurIPS2024 test of time awards! Loved his format of reusing slides from 2014 and talking about what they got right and wrong. He ended with some strong claims about pre-training and predictions about what comes next.
December 14, 2024 at 12:48 PM
Reposted by Nikhil Soni
Hi NeurIPS!

Explore ~4,500 NeurIPS papers in this interactive visualization:

jalammar.github.io/assets/neuri...
(Click on a point to see the paper on the website)

Uses @cohere.com models and @lelandmcinnes.bsky.social's datamapplot/umap to help make sense of the overwhelming scale of NeurIPS.
December 10, 2024 at 10:49 PM
I am in Vancouver for #NeurIPS2024!

My current interests:
- Social modeling with multi-agent systems in simulation
- World simulators

Happy to chat about any of these (or related topics)!
December 10, 2024 at 3:46 AM
Reposted by Nikhil Soni
PydanticAI is here!

An Agent Framework designed for production, from the team who created and maintain @pydantic.bsky.social.

As some of you will know, I've been working on this for some time, can't wait to see what people build with it.

ai.pydantic.dev
December 2, 2024 at 11:05 AM
Reposted by Nikhil Soni
ICLR is a top ML conference. All 10k+ papers from 2025 are on OpenReview.

The top rated papers include:
— Scaling LLM interpretability to GPT-4 scale
— Changing the light source with a consistent image
— 100x faster diffusion models
— A provable theory for LLM jailbreaks

Thread...
November 29, 2024 at 3:16 PM
Reposted by Nikhil Soni
Adding my love letter to

arxiv.org/pdf/2304.01315

Empirical Design in Reinforcement Learning
by
Andrew Patterson, Samuel Neumann, Martha White, Adam White

JMLR 25 (2024) 1-63
#ReinforcementLearning

These aren’t the heroes we deserve, but they are the heroes we need.
November 23, 2024 at 1:40 PM