Research
Making AI More Interpretable and Trustworthy
Research Focus
My research focuses primarily on developing self-explainable models through explainable artificial intelligence (XAI). I am particularly interested in logic-based approaches that enable neural networks to deliver explanations as interpretable logic rules, bridging the gap between neural learning and symbolic reasoning. I also apply XAI techniques to scientific discovery, with a special emphasis on drug design and molecular property prediction. Additionally, I am deeply interested in creating interpretable reinforcement learning agents that can provide transparent explanations for their decision-making processes.
Research Interests
Research Areas
Explainable AI
Developing methods to understand and interpret the decisions made by complex machine learning models. This includes creating visual explanations, attention mechanisms, and rule-based interpretations.
Graph Neural Networks
Advancing GNN architectures for learning on structured data. My research focuses on improving model expressiveness, scalability, and interpretability for knowledge graph tasks.
Neural-Symbolic Integration
Combining neural networks and symbolic AI into hybrid models that pair the learning capabilities of neural networks with the reasoning and interpretability of symbolic systems.
Trustworthy AI
Ensuring AI systems are fair, robust, and secure. This includes research on adversarial robustness, bias detection, and the development of AI systems that humans can trust and rely upon.
Current Research Projects
Open positions for internships and research collaborations. For more details, please contact me.
Self-Explainable Graph Neural Networks
Developing GNN architectures with built-in interpretability mechanisms. Focus on designing neural layers that naturally produce human-understandable explanations without post-hoc analysis.
Logic-based Explanations
Combining neural networks with symbolic logic to extract interpretable rules from learned models. Research on translating neural decisions into formal logical expressions and knowledge bases.
XAI and Reinforcement Learning
Applying explainability techniques to deep reinforcement learning agents. Understanding policy decisions, value functions, and reward prediction mechanisms through interpretable models.
XAI for Drug Discovery
Applying explainable AI to molecular property prediction and drug design. Interpretable models for understanding molecular representations and predicting bioactivity with transparency.
Active Collaborations
Working with leading researchers and institutions in explainable AI and its applications.
Prof. Marc Plantevit & Prof. Celine Robardet
EPITA & INSA Lyon
XAI and GNNs
Prof. Rino Ragno
Sapienza University of Rome
Drug Discovery
Prof. Roberto Capobianco
Sony AI & Sapienza University of Rome
XAI for RL & Biology
Research Activities & Recognition
Awards and Achievements
- 2026 Seal of Excellence, European Commission, for a project proposal submitted under the Horizon Europe Marie Skłodowska-Curie Actions call.
- 2025 NeurIPS Travel Grant for attending NeurIPS 2025.
- 2024 EurAI Travel Grant for the 27th European Conference on Artificial Intelligence.
- 2024 Sapienza PhD Mobility Grant for the research project "Obtaining topology-based explanations through relevant subgraphs".
- 2022 EurAI Travel Grant for the Joint EurAI Advanced Course on AI, TAILOR Summer School 2022.
- 2022 EuroQSAR Travel Grant for the 23rd European Symposium on Quantitative Structure-Activity Relationships.
- 2022 Sapienza Starting Grant for the research project on "Topology-based Explanations for Neural Networks".
- 2021 Ernesto & Iole De Maggi Scholarship, awarded for academic excellence in Engineering.
Review Activity
- 2025 AAAI 2026, ICLR 2026, Nature Scientific Reports.
- 2024 Computational and Structural Biotechnology Journal, Information Processing & Management.
Conference Organization
- 2024 Workshop on eXplainable AI approaches for deep reinforcement learning. Organizing Committee. AAAI 2024. Vancouver, Canada.
- 2021 6th International Conference on Polyamines: Biochemical, Physiological and Clinical Perspectives. Organizing Secretariat. Tivoli, Italy.
Posters, Presentations & Seminars
Invited Seminars
- 2025 On Logic-based Explanations for Graph Neural Networks: From post-hoc approaches to self-explainable neural networks. EPITA Lyon. Lyon, France.
- 2025 Logic-based Explanations for Graph Neural Networks. Sapienza University of Rome. Rome, Italy.
- 2025 Transparent Explainable Logic Layers. Sony AI. Online.
Posters & Presentations
- 2025 Distilling Rules from GIN: Logic-based Explanations for Graph Neural Networks. Workshop: L'explicabilité via le prisme de la décision algorithmique et des jeux (Explainability through the lens of algorithmic decision-making and games). Sorbonne Université. Paris, France.
- 2022 Self-explainable Graph Neural Network for Molecular Property Prediction using Concept Whitening. 3rd MMCS: Shaping Medicinal Chemistry for the New Decade. Sapienza University of Rome. Rome, Italy.
- 2021 Semi-Supervised GCN for Learning Molecular Structure-Activity Relationships. ELLIS Machine Learning for Molecules Workshop (ML4Molecules). NeurIPS 2021. Online.
- 2021 Molecule Generation from Input-Attributions over Graph Convolutional Networks. ELLIS Machine Learning for Molecules Workshop (ML4Molecules). NeurIPS 2021. Online.
Certifications
- 2019 ACTFL OPIc: Advanced Mid
- 2012 Cambridge PET: B1
- 2010 DELF: A1