COMPARATIVE ANALYSIS OF AI AND XAI TECHNIQUES ACROSS DIVERSE PERFORMANCE METRICS ON VARIED DATASETS
Abstract
This study compares Artificial Intelligence (AI) and Explainable Artificial Intelligence (XAI) techniques by evaluating their performance on the "Machine Failure Prediction using Sensor Data" dataset. The goal is to assess how different models perform and how XAI can make them more interpretable and transparent. The features were preprocessed with standard scaler normalisation and the classes were balanced using SMOTE, ensuring robust and fair model training. Extensive experiments were carried out with several machine learning classifiers, including the Gradient Boosting Classifier, Isolation Forest, Support Vector Machine (SVM), Optimised Isolation Forest (OIF), and hybrid models such as the 1D CNN-OIF and CNN-LSTM. Each model was judged with standard performance metrics, informed by exploratory data analysis (EDA) and a feature selection process that retained features whose correlation with the target label exceeded 0.3. XAI methods, namely SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), were applied to interpret the AI models, revealing which features drove their predictions. Results were examined through accuracy and loss curves, confusion matrices, and a comparison of model outcomes. The hybrid CNN-LSTM model outperformed the others in this study in both accuracy and interpretability, making it the preferred model for further research. With the XAI methods applied, this model not only delivered better predictions but also provided substantial insight into how its decisions were made.
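The preprocessing pipeline described in the abstract (standard scaling followed by a correlation filter keeping features whose absolute correlation with the target exceeds 0.3) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the column layout and random data are assumptions, and the SMOTE balancing step is noted in a comment rather than implemented, since it requires the `imbalanced-learn` package.

```python
import numpy as np

def standardize(X):
    """Standard-scale each column to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def select_features(X, y, threshold=0.3):
    """Keep column indices of X whose absolute Pearson correlation
    with the target y exceeds the threshold (0.3 in the paper)."""
    keep = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r) > threshold:
            keep.append(j)
    return keep

# Illustrative synthetic data standing in for the sensor dataset.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)   # binary failure label
informative = y + 0.1 * rng.normal(size=200)     # strongly correlated feature
noise = rng.normal(size=200)                     # uncorrelated feature
X = np.column_stack([informative, noise])

# After this filter, one would balance the classes with SMOTE
# (imblearn.over_sampling.SMOTE) before training the classifiers.
kept = select_features(standardize(X), y)
print(kept)
```

On this synthetic example only the informative column passes the 0.3 cutoff, mirroring how the filter discards weakly associated sensor features before model training.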