List of Tutorials
Tutorial 1: Explainable, Interpretable & Trustworthy AI for Next-Generation Network Autonomy
Speakers
Associate Prof. Murat Karakuş, Department of Software Engineering, Ankara University, Turkiye
Assist. Prof. Rukiye Savran Kızıltepe, Department of Software Engineering, Ankara University, Turkiye
Fatih Bildirici, ASELSAN, Ankara, Turkiye
Berkay Bayramoglu, Department of Software Engineering, Ankara University, Turkiye
Duration: Half-day
Abstract:
AI-driven automation is rapidly transforming modern network and service operations, yet the opaque nature of AI models continues to limit trustworthiness, adoption, and safe decision-making in next-generation network environments. This tutorial provides a structured, hands-on introduction to Explainable, Interpretable, and Trustworthy AI (XAI & IAI) for trusted network autonomy, operations, and management. We explore the foundations of explainability, including key model-agnostic and model-aware techniques such as LIME, SHAP, Integrated Gradients, surrogate modeling, counterfactual reasoning, and causal approaches. We demonstrate how these techniques enable debugging, assurance, and operator confidence across complex network ecosystems.
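To make the attribution idea behind SHAP concrete, the underlying Shapley-value principle can be sketched by enumerating feature coalitions exactly for a small model. The anomaly-scoring function, KPI names, and baseline/instance values below are purely illustrative assumptions, not material from the tutorial:

```python
from itertools import combinations
from math import factorial

# Toy "model": scores a cell's anomaly risk from three hypothetical KPIs.
# Any black-box callable works; exact Shapley values only need model queries.
def anomaly_score(features):
    latency, drop_rate, throughput = features
    return 0.5 * latency + 2.0 * drop_rate - 0.1 * throughput

BASELINE = (10.0, 0.0, 50.0)   # illustrative "normal" KPI reference point
INSTANCE = (80.0, 0.3, 20.0)   # the sample we want to explain

def shapley_values(model, baseline, instance):
    n = len(instance)
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley weight for a coalition of this size
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = list(baseline)
                without_i = list(baseline)
                for j in subset:
                    with_i[j] = instance[j]
                    without_i[j] = instance[j]
                with_i[i] = instance[i]
                # Marginal contribution of feature i to this coalition
                phi += w * (model(tuple(with_i)) - model(tuple(without_i)))
        values.append(phi)
    return values

phi = shapley_values(anomaly_score, BASELINE, INSTANCE)
# Efficiency axiom: attributions sum to f(instance) - f(baseline)
assert abs(sum(phi) - (anomaly_score(INSTANCE) - anomaly_score(BASELINE))) < 1e-9
```

For a linear model this reduces to weight times feature deviation, which is why SHAP implementations can use closed forms for such models; practical libraries approximate these values by sampling rather than exhaustive enumeration.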
This tutorial delivers a focused exploration of XAI theory for network autonomy, operations, and management. It introduces core interpretability concepts and widely used techniques, such as LIME, SHAP, Integrated Gradients, surrogate models, counterfactuals, and causal reasoning, building on seminal XAI research. The session connects these methods to emerging XAI-for-Networks work across O-RAN, 6G, and IoT ecosystems, intrusion detection, and knowledge-driven automation, presenting a taxonomy of telecom-centric use cases that spans anomaly triage, SLA diagnosis, resource allocation, and validation of closed-loop actions.
Participants will engage in hands-on demonstrations using synthetic and anonymized KPI/telemetry datasets, exploring explainability challenges across O-RAN, AIOps/MLOps pipelines, and upcoming regulatory frameworks. The tutorial concludes by outlining future research directions, including XAI for LLM-based agents and real-time and distributed explainability, and by highlighting how explainability can be systematically embedded into trustworthy autonomous network management.
By connecting explainability with autonomy, this tutorial equips researchers and operators with the conceptual tools and practical skills to design, evaluate, and deploy trustworthy AI systems for next-generation AI-native networks.
Outline of the Tutorial Content (Total: 180 minutes)
1) Introduction (10 minutes - Murat Karakuş): Overview of the evolution toward autonomous network operations, risks of opaque AI-driven decisions, and the regulatory/standardization context.
2) Background on AI/ML for Network Operations (15 minutes - Murat Karakuş):
a) AI/ML models in telecom: time-series models, deep learning, GNNs.
b) Operational failure modes due to black-box AI.
c) Challenges in AIOps observability and root-cause tracing.
3) Foundations of XAI (20 minutes - Murat Karakuş): Core XAI axes (interpretability vs. explainability, intrinsic vs. post-hoc, and local vs. global) are outlined along with key metrics (fidelity, stability, usability), stressing why explainability is necessary for trustworthy autonomous networks.
4) Core XAI Methods - Part I (20 minutes - Fatih Bildirici):
a) Local vs. global explanations.
b) LIME.
c) SHAP.
5) Core XAI Methods - Part II (25 minutes - Fatih Bildirici):
a) Integrated Gradients.
b) Surrogate models.
c) Counterfactual reasoning.
d) Causal XAI approaches.
6) XAI Applications in Network Operations (25 minutes - Rukiye Savran Kızıltepe):
a) Interpretable anomaly detection and alarm triage (traffic classification and intrusion detection).
b) KPI degradation / SLA explanation.
c) O-RAN xApp/rApp validation and XAI in 6G O-RAN.
d) Safe autonomous control loops and trust boundaries.
7) XAI Use Cases and Hands-On Demonstrations (30 minutes - Berkay Bayramoglu):
a) Demo: Explainability-driven anomaly prioritization and failure identification.
b) Demo: Interpretable traffic forecasting and mobility prediction.
c) Demo: Intent-based networking validation using explanation outputs.
8) Future Directions and Research Challenges (15 minutes - Rukiye Savran Kızıltepe):
a) XAI for LLM-based network agents and decision pipelines.
b) Real-time explainability constraints.
c) Distributed and multi-agent explainability in Federated and Edge Learning.
d) XAI in IoT-scale environments.
e) RAG for Network AI Agents.
f) LLM-Based Telemetry Summary and Explanation.
g) Synthetic Data and Digital Twins for XAI.
h) Standardization opportunities (ETSI, O-RAN) and trustworthy autonomous management.
9) Conclusions - Q&A (10 minutes - All): Summary of insights, industrial implications, and next steps.
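As one small illustration of the Integrated Gradients technique named in the outline above, the path integral can be approximated with a midpoint Riemann sum. The logistic model, its weights, and the KPI values here are hypothetical placeholders, not artifacts of the tutorial's demos:

```python
import math

# Toy differentiable "model": probability of an SLA violation from three
# hypothetical KPI inputs via a logistic unit (weights are illustrative).
W = (0.04, 6.0, -0.03)
B = -2.0

def predict(x):
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def grad(x):
    # Analytic gradient of the sigmoid output w.r.t. each input
    p = predict(x)
    return [w * p * (1.0 - p) for w in W]

def integrated_gradients(x, baseline, steps=1000):
    # IG_i = (x_i - x'_i) * integral over alpha of dF/dx_i along the
    # straight path from baseline to x, approximated by the midpoint rule.
    attrib = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [b + alpha * (xi - b) for b, xi in zip(baseline, x)]
        g = grad(point)
        for i in range(len(x)):
            attrib[i] += (x[i] - baseline[i]) * g[i] / steps
    return attrib

BASELINE = (10.0, 0.0, 50.0)
INSTANCE = (80.0, 0.3, 20.0)
ig = integrated_gradients(INSTANCE, BASELINE)
# Completeness axiom: attributions sum to F(instance) - F(baseline)
assert abs(sum(ig) - (predict(INSTANCE) - predict(BASELINE))) < 1e-5
```

The completeness check is the property that makes IG attractive for KPI-degradation explanations: each input's attribution accounts for a share of the total change in the model's output relative to a chosen "normal" baseline.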