EXPLAINABLE ARTIFICIAL INTELLIGENCE APPLICATIONS IN CYBERSECURITY: ENHANCING TRANSPARENCY IN INTRUSION DETECTION SYSTEMS
Abstract
The increasing sophistication of cyberattacks and the rapid expansion of cloud, IoT, and distributed network environments have accelerated the adoption of Artificial Intelligence (AI) for intrusion detection and threat analysis. While AI-based Intrusion Detection Systems (IDS) offer superior accuracy and adaptability compared to traditional rule-based methods, they suffer from a critical limitation: the lack of transparency in their decision-making processes. This “black-box” nature reduces trust, complicates incident investigation, and hinders regulatory compliance. Explainable Artificial Intelligence (XAI) addresses these challenges by providing interpretable, human-understandable insights into how AI models detect anomalies, classify threats, and differentiate benign from malicious behavior. This paper presents a comprehensive study of XAI applications in cybersecurity, focusing on enhancing the transparency of AI-driven IDS. It reviews the existing literature, analyzes key XAI methods (including SHAP, LIME, surrogate models, and counterfactual reasoning), and introduces a structured XAI-enabled IDS framework that integrates local and global interpretability, multi-modal data analysis, and analyst-centric visualization. Real-world applications such as threat detection, malware analysis, insider threat monitoring, fraud detection, root cause analysis, and compliance auditing are discussed to demonstrate XAI's practical impact. The paper concludes by highlighting open challenges and future research opportunities, emphasizing the need for real-time, scalable, privacy-preserving, and adversarially resilient XAI solutions that enable trustworthy and operationally effective cyber defense systems.
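To make the abstract's reference to local explanations concrete, the sketch below applies SHAP to a toy IDS-style classifier and prints the per-feature contributions behind a single alert. It is a minimal illustration under assumed conditions, not the paper's implementation: the synthetic flow data, the feature names, and the RandomForest model are all hypothetical stand-ins.

```python
# Minimal sketch: local SHAP explanation for one "alert" raised by a toy
# IDS classifier. All data and feature names here are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["duration", "bytes_sent", "bytes_recv",
            "failed_logins", "dst_port_entropy"]

# Synthetic flows: label "malicious" when failed logins and sent bytes
# are both high, so the explainer has a real signal to attribute.
X = rng.random((500, len(features)))
y = ((X[:, 3] > 0.7) & (X[:, 1] > 0.5)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
alert = X[:1]                      # one flagged network flow
sv = explainer.shap_values(alert)  # per-class feature attributions

# Older shap versions return a list (one array per class); newer
# versions return a single 3-D array indexed by (sample, feature, class).
malicious = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]
for name, val in sorted(zip(features, malicious), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {val:+.3f}")  # top contributors to the alert
```

Ranking features by the magnitude of their SHAP values, as in the final loop, is one common way to surface the "why" behind an individual detection to an analyst; averaging absolute SHAP values over many flows would give the complementary global view the framework also calls for.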