David Debot
@daviddebot.bsky.social
180 followers 240 following 16 posts
PhD student @dtai-kuleuven.bsky.social in neurosymbolic AI and concept-based learning https://daviddebot.github.io/
Reposted by David Debot
lennertds.bsky.social
Just under 10 days left to submit your latest endeavours in #tractable probabilistic models!

Join us at TPM @auai.org #UAI2025 and show how to build #neurosymbolic / #probabilistic AI that is both fast and trustworthy!
nolovedeeplearning.bsky.social
the #TPM ⚡Tractable Probabilistic Modeling ⚡Workshop is back at @auai.org #UAI2025!

Submit your work on:

- fast and #reliable inference
- #circuits and #tensor #networks
- normalizing #flows
- scaling #NeSy #AI
...& more!

🕓 deadline: 23/05/25
👉 tractable-probabilistic-modeling.github.io/tpm2025/
Reposted by David Debot
jjcmoon.bsky.social
We developed a library to make logical reasoning embarrassingly parallel on the GPU.

For those at ICLR 🇸🇬: you can get the juicy details tomorrow (poster #414 at 15:00). Hope to see you there!
Reposted by David Debot
gabventurato.bsky.social
If you're at #AAAI2025, come check out our demo on neurosymbolic reinforcement learning with probabilistic logic shields 🤖 Tomorrow (Sat, March 1) from 12:30–2:30 PM during the poster session 💻
daviddebot.bsky.social
🚀 Do you care about safe AI? Do you want RL agents that are both smart & trustworthy?

At #AAAI2025, we present our demo for neurosymbolic RL—combining deep learning with probabilistic logic shields for safer, interpretable AI in complex environments. 🏰🔥
🧵👇
(1/8)
Reposted by David Debot
jjcmoon.bsky.social
We all know backpropagation can calculate gradients, but it can do much more than that!

Come to my #AAAI2025 oral tomorrow (11:45, Room 119B) to learn more.
Reposted by David Debot
gabventurato.bsky.social
🔥 Can AI reason over time while following logical rules in relational domains? We will present Relational Neurosymbolic Markov Models (NeSy-MMs) next week at #AAAI2025! 🎉

📜 Paper: arxiv.org/pdf/2412.13023
💻 Code: github.com/ML-KULeuven/...

🧵⬇️
daviddebot.bsky.social
Open-source & easy to use!
🔷 Code: github.com/ML-KULeuven/...
🔷 Based on MiniHack & Stable Baselines3
🔷 Define new shields in just a few lines of code!

🚀 Let’s make RL safer & smarter, together!
(7/8)
daviddebot.bsky.social
Want to try it yourself? 🎮

Use our interactive web demo!
🔷 Modify environments (add lava, monsters!)
🔷 Test shielded vs. non-shielded agents

🖥️ Play with it here: dtai.cs.kuleuven.be/projects/nes...
(6/8)
daviddebot.bsky.social
Why does this matter?
🔷 Faster training ⌛
🔷 Safer exploration 🔒
🔷 Better generalization 🌍
(5/8)
daviddebot.bsky.social
How does it work? 🤔🛡️

The shield:
✅ Exploits symbolic data from sensors 🌍
✅ Uses logical rules 📜
✅ Prevents unsafe actions 🚫
✅ Still allows flexible learning 🤖

A perfect blend of symbolic reasoning & deep learning!
(4/8)
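The blend described above can be sketched as reweighting the agent's policy by per-action safety probabilities. This is a minimal numpy illustration under assumed shapes, not the demo's actual code; `shielded_policy` and the toy numbers are hypothetical:

```python
import numpy as np

def shielded_policy(policy_probs, safety_probs):
    """Reweight a policy by per-action safety probabilities.

    policy_probs[a]: the agent's probability of taking action a.
    safety_probs[a]: P(safe | state, a), e.g. obtained by probabilistic
    logic inference over symbolic sensor readings.
    """
    weighted = policy_probs * safety_probs   # P(a) * P(safe | a)
    return weighted / weighted.sum()         # renormalize to a distribution

# Toy example: action 2 is almost certainly unsafe (e.g. steps into lava).
policy = np.array([0.2, 0.3, 0.5])
safety = np.array([0.9, 0.9, 0.01])
print(shielded_policy(policy, safety))
```

Because unsafe actions keep a small nonzero probability, gradients still flow and the agent can keep learning, which is what "still allows flexible learning" refers to.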
daviddebot.bsky.social
Enter MiniHack, our demo's testing ground! 🏰🗡️

There, RL agents face:
✅ Lava cliffs & slippery floors
✅ Chasing monsters
✅ Locked doors needing keys

Findings:
🔷 Standard RL struggles to find an optimal, safe policy.
🔷 Shielded RL agents stay safe & learn faster!
(3/8)
daviddebot.bsky.social
Deep RL is powerful, but...
⚠️ It can take dangerous actions
⚠️ It lacks safety guarantees
⚠️ It struggles with real-world constraints

Yang et al.'s probabilistic logic shields fix this, enforcing safety without breaking learning efficiency! 🚀
(2/8)
daviddebot.bsky.social
Or check out our Medium post: 👉 medium.com/@pyc.devteam... (7/7)
daviddebot.bsky.social
With CMR, we’re reaching the sweet spot of accuracy and interpretability. Check it out at our poster at #NeurIPS2024! 👉 neurips.cc/virtual/2024... (6/7)
NeurIPS Poster: Interpretable Concept-Based Memory Reasoning (NeurIPS 2024)
neurips.cc
daviddebot.bsky.social
During training, CMR learns embeddings as latent representations of logic rules, and a neural rule selector identifies the most relevant rule for each instance. Due to a clever factorization and rule selector, inference is linear in the number of concepts and rules. (5/7)
daviddebot.bsky.social
CMR makes a prediction in 3 steps:
1) Predict concepts from the input
2) Neurally select a rule from a memory of learned logic rules ➨ Accuracy
3) Evaluate the selected rule with the concepts to make a final prediction ➨ Interpretability (4/7)
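The three steps above can be sketched in plain numpy. This is a toy illustration under assumed dimensions, not the actual CMR implementation; the weights and the hand-picked rule memory are hypothetical (in CMR the rules are learned):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only): 8 input features, 4 concepts, 3 rules.
W_concept = rng.normal(size=(8, 4))   # step 1: input -> concept logits
W_select = rng.normal(size=(8, 3))    # step 2: input -> rule logits
# Each rule stored as per-concept roles: +1 positive literal, -1 negated,
# 0 irrelevant.
rule_signs = np.array([[ 1., 0., -1., 0.],
                       [ 0., 1.,  1., 0.],
                       [-1., 0.,  0., 1.]])

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(x):
    concepts = sigmoid(x @ W_concept)          # step 1: concept probabilities
    rule_probs = softmax(x @ W_select)         # step 2: rule distribution
    c = concepts[:, None, :]                   # broadcast over rules
    # step 3: soft conjunction of each rule's literals...
    literal = np.where(rule_signs > 0, c,
              np.where(rule_signs < 0, 1 - c, 1.0))
    rule_truth = literal.prod(axis=-1)         # soft AND per rule
    # ...averaged under the rule distribution: linear in #concepts x #rules.
    return (rule_probs * rule_truth).sum(axis=-1)

x = rng.normal(size=(2, 8))
print(predict(x))  # one P(y = 1) per example
```

The final sum over rules is where the linear-time inference mentioned in the thread comes from: cost grows with the number of concepts times the number of rules, not with an exponential model count.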
daviddebot.bsky.social
CMR has:
⚡ State-of-the-art accuracy that rivals black-box models
🚀 Pure probabilistic semantics with linear-time exact inference
👁️ Transparent decision-making so human users can interpret model behavior
🛡️ Pre-deployment verifiability of model properties (3/7)
daviddebot.bsky.social
CMR is our latest neurosymbolic concept-based model. Provably a 𝘶𝘯𝘪𝘷𝘦𝘳𝘴𝘢𝘭 𝘣𝘪𝘯𝘢𝘳𝘺 𝘤𝘭𝘢𝘴𝘴𝘪𝘧𝘪𝘦𝘳 regardless of the concept set, CMR achieves near-black-box accuracy by combining 𝗿𝘂𝗹𝗲 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴 and 𝗻𝗲𝘂𝗿𝗮𝗹 𝗿𝘂𝗹𝗲 𝘀𝗲𝗹𝗲𝗰𝘁𝗶𝗼𝗻! (2/7)
daviddebot.bsky.social
🚨 Most interpretable AI models, like Concept Bottleneck Models, force us to trade accuracy for interpretability. But what if we could have both?

But not anymore, due to Concept-Based Memory Reasoner (CMR)! #NeurIPS2024 (1/7)