SECURE AND EXPLAINABLE AI FOR REMOTE PATIENT MONITORING: A DEFENSE FRAMEWORK AGAINST EMERGING AI-DRIVEN HEALTHCARE CYBERCRIMES
Abstract
Remote Patient Monitoring (RPM) systems have evolved rapidly through the adoption of artificial intelligence (AI), enabling continuous clinical monitoring, early anomaly detection, and proactive decision-making. However, the same AI-driven capabilities have introduced new security vulnerabilities that can be exploited for targeted cybercrime, data manipulation, and clinical interference. This paper proposes a unified defence framework that combines adversarial threat modelling, multimodal anomaly detection, and explainable artificial intelligence (XAI) to mitigate emerging AI-driven attacks on RPM ecosystems. The framework was evaluated on a hybrid dataset comprising real RPM telemetry, synthetic physiological signals, and simulated adversarial samples. To ensure integrity, transparency, and robustness, the proposed architecture integrates a secure AI pipeline with differential privacy, federated learning, and blockchain-backed audit trails. The results show a substantial reduction in adversarial attack success rates (up to 87 percent), improved anomaly detection sensitivity (93.2 percent), and greater interpretability in support of clinical decision-making. These findings underscore the need for security-by-design, interpretable AI models, and adaptive threat intelligence to counter evolving AI-enabled cybercrime. The study offers practical recommendations for secure RPM development and for healthcare cybersecurity policy and regulation.
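To illustrate the kind of privacy-preserving pipeline the abstract describes, the sketch below combines federated averaging with a differential-privacy mechanism: each client clips its local model update and adds Gaussian noise before sharing it with the server. This is a minimal illustrative sketch, not the paper's actual architecture; the function names (`private_client_update`, `federated_round`), the logistic-regression anomaly detector, the synthetic vitals data, and all parameter values are assumptions introduced here for demonstration.

```python
import numpy as np

def private_client_update(global_w, X, y, lr=0.1, clip=1.0, noise_std=0.2, rng=None):
    """One privatized local update from a hypothetical RPM client device."""
    rng = rng or np.random.default_rng()
    # One logistic-regression gradient step on the client's local data.
    preds = 1.0 / (1.0 + np.exp(-(X @ global_w)))
    grad = X.T @ (preds - y) / len(y)
    delta = -lr * grad
    # Differential privacy: clip the update norm, then add Gaussian noise
    # calibrated to the clipping bound, before the update leaves the device.
    norm = np.linalg.norm(delta)
    delta = delta * min(1.0, clip / (norm + 1e-12))
    return delta + rng.normal(0.0, noise_std * clip, size=delta.shape)

def federated_round(global_w, clients, rng):
    """Server-side FedAvg-style aggregation of the privatized updates."""
    updates = [private_client_update(global_w, X, y, rng=rng) for X, y in clients]
    return global_w + np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 8  # e.g. per-window features such as heart rate, SpO2, respiration

    def make_client(n=200):
        # Synthetic vitals windows: label 1 marks an anomalous window (toy rule).
        X = rng.normal(0.0, 1.0, (n, dim))
        y = (X[:, 0] + X[:, 1] > 1.5).astype(float)
        return X, y

    clients = [make_client() for _ in range(5)]
    w = np.zeros(dim)
    for _ in range(50):
        w = federated_round(w, clients, rng)
    print("trained weights:", np.round(w, 3))
```

The design point the sketch captures is that clipping bounds each client's influence on the global model, which in turn lets the added noise be calibrated to that bound; raw physiological data never leaves the device, only the noisy update does.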