A MATHEMATICAL FRAMEWORK FOR EXPLAINABLE AND ADVERSARIALLY ROBUST IDS USING ML FOR LARGE-SCALE ENTERPRISE AND CLOUD SYSTEMS


Nilesh Dnyaneshwar Bhandarwar

Abstract

Intrusion Detection Systems (IDS) have become increasingly critical for protecting large-scale enterprise and cloud infrastructures from sophisticated cyber threats. However, traditional IDS implementations face significant challenges, including limited interpretability, vulnerability to adversarial attacks, and poor scalability in distributed environments. This research proposes a novel mathematical framework that integrates explainable artificial intelligence (XAI) techniques with adversarially robust machine learning models for enhanced intrusion detection. The framework incorporates SHAP (SHapley Additive exPlanations) values for model interpretability and adversarial training mechanisms to defend against evasion attacks. We evaluated our approach on the CICIDS2017 and NSL-KDD datasets, demonstrating a detection accuracy of 98.7% while maintaining resilience against FGSM and PGD adversarial perturbations. The proposed framework achieved a 23% improvement in adversarial robustness over baseline models while providing meaningful explanations for security analysts. Our experimental results indicate that integrating mathematical rigor with explainability significantly enhances both the reliability and the trustworthiness of machine learning-based intrusion detection systems in production environments.
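The adversarial training mechanism mentioned above pairs clean examples with perturbed counterparts generated by attacks such as FGSM. The following is a minimal NumPy sketch of the FGSM step, x_adv = x + ε·sign(∇ₓL), for a toy logistic-regression detector; the function names and the linear-model setting are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w):
    """Binary cross-entropy loss of a logistic model p = sigmoid(w.x)."""
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, y, w, eps):
    """One FGSM step: move x by eps in the sign of the input gradient.

    For binary cross-entropy with p = sigmoid(w.x), the gradient of the
    loss with respect to the input x is (p - y) * w, so no autograd is
    needed for this linear toy model.
    """
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Adversarial training would then update the model on a mix of clean
# and perturbed samples, e.g. loss = 0.5*L(x) + 0.5*L(fgsm_perturb(x)).
```

Because the perturbation follows the ascending gradient direction, the loss on `x_adv` is at least the loss on `x`, while the change to each feature stays within the ε infinity-norm budget; PGD iterates this step with projection back onto the ε-ball.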
