Kerry Champion
@causalai.bsky.social
Investigating Causal AI Commercialization
Human abstraction ability applies not just to language but across all of the subjects we reason about.
AI won’t reach its potential till we learn to blend symbolic and causal capabilities with the statistical pattern matching that powers today’s LLMs.
#AI #NeurosymbolicAI #CausalAI
June 22, 2025 at 10:41 PM
Instead of relying only on patterns in input, humans:
+ Form internal, rule-based models of language structure (e.g. grammar, syntax).
+ Infer underlying rules even when they’re not explicitly taught.
+ Use these abstractions to generalize beyond what they’ve directly heard.
2/n
June 22, 2025 at 10:40 PM
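A toy sketch of that contrast, using a made-up pluralization task (all names and data below are illustrative, not from any of the linked posts): memorizing observed pairs only covers what was seen, while an induced rule also covers words never encountered before.

```python
# Toy sketch: a lookup learned purely from observed pairs vs. an explicit
# rule induced from the same examples.  All data and names are illustrative.

TRAINING_PAIRS = [("cat", "cats"), ("dog", "dogs"), ("dish", "dishes"), ("box", "boxes")]

def memorized_plural(word):
    """Pattern-matching baseline: only answers for words it has already seen."""
    return dict(TRAINING_PAIRS).get(word)

def rule_based_plural(word):
    """Symbolic rule abstracted from the examples: add -es after sibilants, else -s."""
    return word + ("es" if word.endswith(("s", "sh", "ch", "x", "z")) else "s")

for novel in ("wug", "fox"):  # never present in TRAINING_PAIRS
    print(novel, memorized_plural(novel), rule_based_plural(novel))
    # memorization returns None; the induced rule generalizes to the new words
```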
Enjoying the graphic.
On your list of ways to address these concerns, where would you put implementing neurosymbolic AI?
Seems to me that combining deep learning (LLMs) with symbolic/causal models could go a long way to creating more reliable, auditable, and aligned AI.
#AI
June 19, 2025 at 3:12 PM
For more from Cassie @decisionleader.bsky.social :
decision.substack.com/p/agentic-ai...
Agentic AI: Be Careful What You Wish For 🧞
Wishes have consequences. Especially when they run in production.
decision.substack.com
June 18, 2025 at 7:50 PM
The opportunity here is for us to perfect hybrid systems that integrate deep learning with symbolic reasoning and causal understanding. This will reduce our dependence on filtering out bad consequences by giving us models that are inherently more reliable.
#AI #CausalAI #SymbolicAI
June 18, 2025 at 7:49 PM
Progress on the "control layer" feels far behind our breakthroughs with the "genie".
Having the control layer act as a smart filter on the input and output is helpful, but in the end it seems fundamentally wrongheaded.
2/n
June 18, 2025 at 7:48 PM
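As a rough sketch of the filter-style control layer described above (all names hypothetical), the model stays a black box and the only checks wrap its input and output:

```python
# Toy illustration of a filter-style control layer: the model itself is a
# black box, and the only control points are checks on its input and output.
# All names and blocked topics here are hypothetical.

BLOCKED_TOPICS = ("bioweapon", "credit card number")

def untrusted_model(prompt):
    """Stand-in for an opaque LLM call."""
    return f"Here is an answer to: {prompt}"

def filter_input(prompt):
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        raise ValueError("request refused by input filter")
    return prompt

def filter_output(text):
    return "[withheld]" if any(t in text.lower() for t in BLOCKED_TOPICS) else text

def controlled_answer(prompt):
    # The model's internal reasoning is untouched; only the boundary is checked.
    return filter_output(untrusted_model(filter_input(prompt)))

print(controlled_answer("Summarize today's AI news"))
```

The limitation the thread points to is visible here: the filters only inspect surface text at the boundary and do nothing to make the underlying model itself more reliable.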
Reposted by Kerry Champion
First, through a think-aloud study (N=16) in which participants use ChatGPT to answer objective questions, we identify 3 features of LLM responses that shape users' reliance: #explanations (supporting details for answers), #inconsistencies in explanations, and #sources.
2/7
February 28, 2025 at 3:21 PM
@rohanpaul.bsky.social this post is feeling lonely ;-)
Why not cross post on both X and Bluesky?
June 18, 2025 at 12:14 AM
For example, Apple’s approach of having the model call back into app code as it reasons, together with its support for multi-layered guardrails, illustrates what Apple has learned about needing components with use-case-specific checks and balances.
#PervasiveAI #AgenticAI #AppleAI #AISafety
2/n
June 17, 2025 at 10:22 PM
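A hypothetical sketch of that pattern (the names below are illustrative, not Apple's actual APIs): the model's tool requests are routed through app code, with separate guardrail checks on the input, on each callback, and on the output.

```python
# Hypothetical sketch: the model can call back into app code mid-reasoning,
# and guardrail checks are layered at each hop.  Names (run_model, AppTools,
# the guardrail functions) are illustrative, not Apple's actual APIs.

from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    argument: str

class AppTools:
    """App-owned callbacks the model is allowed to invoke while it reasons."""
    def lookup_order(self, order_id: str) -> str:
        return f"order {order_id}: shipped"  # stand-in for real app logic

def input_guardrail(prompt: str) -> str:
    if "delete all" in prompt.lower():
        raise ValueError("blocked by input guardrail")
    return prompt

def tool_guardrail(call: ToolCall) -> ToolCall:
    if call.name not in {"lookup_order"}:  # per-use-case allow-list
        raise ValueError(f"tool {call.name!r} not permitted for this use case")
    return call

def output_guardrail(text: str) -> str:
    return "[redacted]" if "ssn" in text.lower() else text

def run_model(prompt: str) -> ToolCall:
    """Stand-in for the model: here it always requests an order lookup."""
    return ToolCall("lookup_order", "A123")

def answer(prompt: str, tools: AppTools) -> str:
    prompt = input_guardrail(prompt)           # layer 1: screen the request
    call = tool_guardrail(run_model(prompt))   # layer 2: screen each callback
    result = getattr(tools, call.name)(call.argument)
    return output_guardrail(result)            # layer 3: screen the reply

print(answer("Where is my order A123?", AppTools()))
```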