EXPLAINABLE AI IN FLEET PREDICTIVE MAINTENANCE: USING SHAP AND LIME FOR TRANSPARENCY AND REGULATORY ALIGNMENT


Vrushali Parate, Rohith Kumar Punithavel, Saloni Agrawal, Ayush Jaiswal

Abstract

Machine learning (ML) is now pervasive in fleet management, driving everything from predictive maintenance to driver risk profiling. A paramount issue, however, is the "black box" nature of such models: their lack of transparency presents a major hurdle for regulators, fleet owners, and other non-technical stakeholders who must trust and verify the fairness and safety of automated decisions. This paper serves the dual purpose of advancing predictive-maintenance performance and closing the interpretability gap in high-performing but opaque ML models. We propose a framework that integrates state-of-the-art Explainable AI (XAI) techniques for transparency and compliance, and we demonstrate it on a representative use case: predicting vehicle maintenance needs from a real-world telematics dataset. Our contribution is threefold. First, we present a system architecture that integrates an explainability layer directly into the fleet ML pipeline. Second, we experimentally compare three widely used models, XGBoost, Random Forest, and TabNet, using SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to generate both global and local explanations. Third, we interpret the resulting explanations from a regulatory perspective, with step-by-step examples of how they answer fundamental questions about model behavior, feature importance, and potential bias. Our key findings are that SHAP delivers powerful global and local explanations, while LIME produces insightful, human-understandable explanations of individual predictions. In our experiment, the XGBoost model achieved an F1-score of 0.77 with a recall of 0.72, reducing projected unplanned maintenance costs by over 55% compared to less interpretable models.
By implementing XAI, we chart a clear path toward turning opaque fleet management systems into transparent, compliant operations that satisfy regulatory mandates for safety, fairness, and accountability.
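To make the Shapley-value idea behind SHAP concrete, the following is a minimal, self-contained sketch of exact Shapley attribution for a toy maintenance-risk scorer. The feature names, weights, and baseline are illustrative assumptions, not values from the paper's dataset; in practice the SHAP library computes these attributions efficiently for real tree models rather than by brute-force enumeration as here.

```python
from itertools import combinations
from math import factorial

# Toy "maintenance risk" score over three telematics features.
# Names and weights are hypothetical, for illustration only.
WEIGHTS = {"engine_hours": 0.4, "brake_wear": 0.35, "mileage": 0.25}

def model(x):
    """Risk score for a feature dict x (a stand-in for a trained model)."""
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x, baseline):
    """Exact Shapley values: each feature's fair share of the gap
    between model(x) and model(baseline).

    Features outside a coalition are set to their baseline value,
    mimicking how SHAP handles "absent" features."""
    feats = list(WEIGHTS)
    n = len(feats)

    def coalition_value(coal):
        # Features in the coalition take x's values; the rest take baseline.
        z = {f: (x[f] if f in coal else baseline[f]) for f in feats}
        return model(z)

    phi = {}
    for i in feats:
        others = [f for f in feats if f != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for a coalition of size k (excluding i).
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (coalition_value(set(subset) | {i})
                              - coalition_value(set(subset)))
        phi[i] = total
    return phi
```

For a linear scorer each feature's Shapley value reduces to `WEIGHTS[f] * (x[f] - baseline[f])`, and the attributions sum exactly to `model(x) - model(baseline)` — the local-accuracy property that makes SHAP explanations auditable for the regulatory questions discussed in the paper.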
