Alessio Ragno
Associate Professor (Enseignant-Chercheur) at EPITA
I conduct research in Explainable Artificial Intelligence and Scientific Discovery at EPITA's Research Laboratory (LRE). My work focuses on developing self-explainable models, with a particular emphasis on Graph Neural Networks, Drug Discovery, and Reinforcement Learning. Beyond interpretability for its own sake, I'm passionate about leveraging XAI as a tool to guide and accelerate scientific discovery, turning model explanations into actionable insights for real-world applications.
Download CV
Featured Publications
Prototype-based Interpretable Graph Neural Networks
IEEE Transactions on Artificial Intelligence, 2022
Latest News
20 Jan 2026 · Singapore
CIP-Net: Continual Interpretable Prototype-based Network @ AAAI 2026
Our latest AAAI paper, “CIP-Net: Continual Interpretable Prototype-based Network”, is officially out! In this paper, we introduce CIP-Net, a novel prototype-based continual learning model that leverages eXplainable AI (XAI) techniques to enhance interpretability and mitigate catastrophic forgetting.
Read more →
1 Dec 2025 · San Diego, USA
On Logic-based Self-Explainable Graph Neural Networks @ NeurIPS 2025
Our latest NeurIPS paper, “On Logic-based Self-Explainable Graph Neural Networks”, is officially out! In this paper, we introduce LogiX-GIN, a novel self-explainable graph neural network layer that can be directly converted into logic rules.
Read more →
24 Oct 2025 · Wiley Interdisciplinary Reviews
XAI-Guided Continual Learning: Rationale, Methods, and Future Directions
I'm excited to share our latest review article published in Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, co-authored with Michela Proietti and Roberto Capobianco.
Read more →