Paul Soulos
@paulsoulos.bsky.social
1.1K followers 380 following 21 posts
Computational Cognitive Science @JhuCogsci researching neurosymbolic methods. Previously wearable engineering @fitbit and @Google.
Pinned
paulsoulos.bsky.social
🚨 Thrilled to share that Compositional Generalization Across Distributional Shifts with Sparse Tree Operations received a spotlight award at #NeurIPS2024! 🌟 I'll present a poster on Tuesday and give an invited lightning talk at the System 2 Reasoning Workshop on Sunday. 🧵👇
paulsoulos.bsky.social
While both robotics and language modeling can be cast as next-token prediction, the token distribution for computer agents looks more like abstract motor programs (robotics) than like language. This puts computer use on the trajectory of robotics, which is slower than that of LLMs. 2/2
paulsoulos.bsky.social
Intriguing prediction from Trenton Bricken & @sholto-douglas.bsky.social on @dwarkesh.bsky.social's podcast: computer use agents "solved" in ~10 months 🖱️⌨️. This feels highly optimistic. I think that computer use is closer to robotics than to language modeling. 1/2
paulsoulos.bsky.social
I’m presenting this work at 11a PT today in East Exhibit Hall at poster #4009. Come by and chat!
paulsoulos.bsky.social
📅 You can find me at the following presentations:

- Poster Session 1 East #4009 on Wednesday, December 11, from 11a-2p PST.
- System 2 Reasoning Workshop Spotlight Oral Talk on Sunday, December 15, from 9:30-10a PST.
- System 2 Reasoning Workshop poster sessions on Sunday, December 15.
paulsoulos.bsky.social
📈 DTM and sDTM operate on trees, and we introduce a very simple, dataset-independent method to embed sequence inputs and outputs as trees. Across a variety of datasets and test-time distributional shifts, sDTM outperforms fully neural and hybrid neurosymbolic models.
paulsoulos.bsky.social
🌳 We introduce the Sparse Differentiable Tree Machine (sDTM), an extension of the DTM that introduces a new way to represent trees in vector space. Sparse Coordinate Trees (SCT) reduce the parameter count and memory usage of the previous DTM by an order of magnitude and lead to a 30x speedup!
paulsoulos.bsky.social
Our previous work introducing the Differentiable Tree Machine (DTM) is an example of a unified neurosymbolic system where trees are represented and operated over in vector space.
paulsoulos.bsky.social
Hybrid systems use neural networks to parameterize symbolic components and can struggle with the same pitfalls as fully symbolic systems. In Unified Neurosymbolic systems, operations can simultaneously be viewed as either neural or symbolic, and this provides a fully neural path through the network.
paulsoulos.bsky.social
🧠 Neural networks struggle with compositionality, and symbolic methods struggle with flexibility and scalability. Neurosymbolic methods promise to combine the benefits of both, but there is a distinction between *hybrid* neurosymbolic methods and *unified* neurosymbolic methods.
paulsoulos.bsky.social
"Applied AGI scientist" is a wild job title, considering people have no idea how to even define AGI, let alone what we should apply to create it.
Reposted by Paul Soulos
xuanalogue.bsky.social
Okay the people requested one so here is an attempt at a Computational Cognitive Science starter pack -- with apologies to everyone I've missed! LMK if there's anyone I should add!

go.bsky.app/KDTg6pv
paulsoulos.bsky.social
Researchers are split on HOW to achieve compositional behavior. Some propose data interventions, others argue we need entirely new model architectures, and some suggest we need to integrate symbolic paradigms.
paulsoulos.bsky.social
Key finding: ~75% of researchers agree that CURRENT neural models do NOT demonstrate true compositional behavior. Scale alone won't solve this: we need fundamental breakthroughs.
paulsoulos.bsky.social
We surveyed 79 top AI researchers about compositional behavior. Our goal? Map out the field's consensus and disagreements on how neural models process language to illuminate promising paths forward. Inspired by Dennett’s logical geography, we cluster participants by responses 🗺️
paulsoulos.bsky.social
Compositionality is fundamental to language: the ability to understand complex expressions by combining simpler parts. But do current AI models REALLY understand this? Spoiler: Most researchers say NO.
paulsoulos.bsky.social
I’m excited to share our survey investigating the current challenges and debates around achieving compositional behavior (CB) in language models, to be presented at #EMNLP2024! What makes language understanding truly intelligent? A thread unpacking our latest research 🤖📊🧵
paulsoulos.bsky.social
Besides being ergonomically beneficial, a split keyboard can prevent this from happening!