Rainbow DQN for Intrusion Detection: A Unified Deep Reinforcement Learning Approach Across Benchmark Datasets
Abstract
This paper presents a unified deep reinforcement learning approach to intrusion detection based on Rainbow DQN, designed to address the increasing complexity of cyber threats. By incorporating advanced reinforcement learning components, namely Double DQN, Dueling Networks, Prioritized Experience Replay, N-step Learning, Distributional RL, and Noisy Nets, the proposed model enhances both accuracy and stability in intrusion detection. The approach was rigorously evaluated on three benchmark datasets: CIC-IDS2017, KDD CUP 99, and UNSW-NB15. Experimental results demonstrated strong performance, achieving 99.8%, 98.4%, and 97.9% accuracy, respectively, with high precision, recall, and F1-scores across diverse attack categories. Compared with baseline DQ-IDS models, Rainbow DQN consistently improved detection capability and reduced false positives. These results highlight the robustness, adaptability, and generalizability of Rainbow DQN, making it a promising solution for next-generation intrusion detection systems in dynamic and heterogeneous network environments.
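To make one of the listed components concrete, the following is a minimal sketch of the n-step return used by Rainbow's N-step Learning: instead of bootstrapping after a single transition, the target accumulates n discounted rewards before adding a bootstrapped value estimate. The function name, reward values, and discount factor below are illustrative assumptions, not taken from the paper.

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99, n=3):
    """Compute the n-step target:
    G_t = sum_{k=0}^{n-1} gamma^k * r_{t+k} + gamma^n * max_a Q(s_{t+n}, a),
    where bootstrap_value stands in for the max_a Q(...) term.
    """
    g = 0.0
    for k in range(n):
        g += (gamma ** k) * rewards[k]   # discounted real rewards
    g += (gamma ** n) * bootstrap_value  # bootstrapped tail estimate
    return g

# Example: three observed rewards, then a bootstrapped Q-value of 2.0.
# With gamma = 0.9: 1.0 + 0.0 + 0.81*1.0 + 0.729*2.0 = 3.268
print(n_step_return([1.0, 0.0, 1.0], bootstrap_value=2.0, gamma=0.9, n=3))
```

In an IDS setting, each reward would reflect the correctness of a classification decision (e.g., a true positive or a false alarm), so the longer reward horizon propagates detection feedback to earlier states faster than a one-step target.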