André Panisson
@panisson.bsky.social
700 followers
470 following
11 posts
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
Reposted by André Panisson
Sumit
@reachsumit.com
· Dec 4
Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey
The rapid development of Artificial Intelligence (AI) has revolutionized numerous fields, with large language models (LLMs) and computer vision (CV) systems driving advancements in natural language understanding...
arxiv.org
Reposted by André Panisson
Giovanni Petri
@lordgrilo.bsky.social
· Nov 30
Andrea Santoro
@andreasantoro.bsky.social
· Nov 28
Higher-order connectomics of human brain function reveals local topological signatures of task decoding, individual identification, and behavior - Nature Communications
Here, the authors perform a higher-order analysis of fMRI data, revealing that accounting for group interactions greatly enhances task decoding, brain fingerprinting, and brain-behavior associations...
www.nature.com
André Panisson
@panisson.bsky.social
· Nov 30
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models
Hallucinations in large language models are a widespread problem, yet the mechanisms behind whether models will hallucinate are poorly understood, limiting our ability to solve this problem. Using sparse autoencoders...
arxiv.org
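The truncated abstract above points at using sparse autoencoders (SAEs) to study whether a model "knows" an entity. As a hedged illustration only, and not the paper's actual pipeline, the sketch below fits a linear probe on stand-in SAE latent activations to surface latents that separate known from unknown entities; all data, sizes, and variable names are hypothetical.

```python
# Hypothetical sketch: probing SAE latents for "does the model know this entity?".
# The feature choice, labels, and probe setup are illustrative assumptions,
# not the paper's exact method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for SAE latent activations taken at the entity token:
# rows = prompts, columns = SAE latents; label 1 = entity the model knows.
latents_known = rng.normal(1.0, 1.0, size=(200, 512))
latents_unknown = rng.normal(0.0, 1.0, size=(200, 512))
X = np.vstack([latents_known, latents_unknown])
y = np.array([1] * 200 + [0] * 200)

# A linear probe; the largest coefficients point at candidate latents
# that track entity knowledge.
probe = LogisticRegression(max_iter=1000).fit(X, y)
top_latents = np.argsort(np.abs(probe.coef_[0]))[-10:]
print("candidate knowledge-awareness latents:", top_latents)
```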
André Panisson
@panisson.bsky.social
· Nov 30
Scaling and evaluating sparse autoencoders
Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models...
arxiv.org
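The abstract describes the core SAE recipe: reconstruct a language model's activations through a sparse bottleneck so that each latent becomes a candidate interpretable feature. Below is a minimal PyTorch sketch of that idea, assuming a TopK sparsity mechanism and illustrative layer sizes; it is not the paper's exact architecture or training setup.

```python
# Minimal sketch of a TopK sparse autoencoder over model activations.
# Sizes, the TopK mechanism, and the training snippet are illustrative assumptions.
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Encode, then keep only the k largest latents per example (the sparsity).
        z = self.encoder(x)
        topk = torch.topk(z, self.k, dim=-1)
        z_sparse = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        return self.decoder(z_sparse), z_sparse

# Reconstruction loss on a batch of (stand-in) cached LM activations.
sae = TopKSAE(d_model=768, d_hidden=16384, k=32)
acts = torch.randn(64, 768)          # placeholder for real residual-stream activations
recon, latents = sae(acts)
loss = ((recon - acts) ** 2).mean()  # train to reconstruct from the sparse code
```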
André Panisson
@panisson.bsky.social
· Nov 16
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models
Hallucinations in large language models are a widespread problem, yet the mechanisms behind whether models will hallucinate are poorly understood, limiting our ability to solve this problem. Using...
openreview.net