Rik Adriaensen
rik-a3.bsky.social
💡 ProbLog4Fairness bridges this gap. It shows how to declaratively specify causes of bias using probabilistic logic in a principled, flexible, and interpretable way. Neurosymbolic extensions allow these assumptions to be integrated into the training of a classifier, so fair models can be learned from biased data!
January 22, 2026 at 9:52 AM
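To make the idea of "declaratively specifying a cause of bias" concrete, here is a minimal, hypothetical sketch of one such mechanism, label bias: the observed label equals the true label, flipped with a probability that depends on a sensitive attribute. This is plain Python for illustration only, not ProbLog4Fairness's actual syntax; the flip probabilities and base rate are made-up values.

```python
def p_observed_positive(p_true_positive: float, flip_prob: float) -> float:
    """Probability the *observed* label is positive under symmetric label noise.

    P(obs=1) = P(true=1) * (1 - flip_prob) + P(true=0) * flip_prob
    """
    return p_true_positive * (1 - flip_prob) + (1 - p_true_positive) * flip_prob


# Hypothetical assumption: both groups share the same true positive rate (0.7),
# but labels for group B are flipped far more often (0.30 vs. 0.05).
P_TRUE = 0.7
rate_group_a = p_observed_positive(P_TRUE, flip_prob=0.05)  # 0.68
rate_group_b = p_observed_positive(P_TRUE, flip_prob=0.30)  # 0.58

# The observed data now shows a group disparity that is an artifact of the
# labeling process, not of the true labels -- the kind of assumption a
# probabilistic-logic model can state explicitly and correct for.
print(rate_group_a, rate_group_b)
```

Writing the bias mechanism down as an explicit probabilistic assumption, rather than picking one fairness metric, is the point the post is making: once the cause of bias is specified, training can account for it.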
🔍 How to integrate fairness assumptions into ML models?
In algorithmic fairness, many definitions of fairness exist, but they often contradict one another. Rather than choosing one definition, causal models reason about why bias arises in data. However, practitioners struggle to operationalize these models.
January 22, 2026 at 9:52 AM