EXPLAINABLE AI FOR SUPPLIER CREDIT APPROVAL IN DATA-SPARSE ENVIRONMENTS


Md Sakibul Hasan, Tanaya Jakir, Arat Hossain, MD Tushar Khan, Kazi Sharmin Sultana, Md Abdul Ahad, Md Nazmul Shakir Rabbi, Mohotasim Billah, MD Saifur Rahman, Md Alal Udden, Sangida Jahan Ripa

Abstract

This study addresses the problem of approving supplier credit in U.S. environments where structured financial histories are sparse and traditional scorecards perform poorly. We develop a hybrid explainable AI framework that blends simple domain heuristics with machine learning, augments inputs with graph-derived supply chain signals, incorporates probabilistic Bayesian modeling to surface calibrated uncertainty, and applies few-shot transfer learning to share statistical strength across related supplier groups. Explanations are produced at multiple levels: intrinsically interpretable models provide rule-based logic, SHAP and LIME deliver local and global feature attributions, counterfactual search identifies concrete changes that could flip a decision, and plain-language narratives communicate outcomes to suppliers and credit officers. In experiments designed to mimic sparse real-world conditions, the hybrid and graph-augmented approaches consistently improve predictive performance and reliability compared with naive baselines. Bayesian models identify high-uncertainty cases that benefit from manual review, and layered explainability produces actionable insights that are meaningful to both lenders and suppliers. The results suggest a practical path for deploying transparent credit decisioning in low-data settings where fairness, auditability, and supplier guidance are essential. Deployment should be paired with oversight, fraud detection, and ongoing monitoring to manage risks arising from sparse signals and potential gaming.
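To make the explanation layers concrete, the sketch below illustrates two of them on a toy linear credit scorer: per-feature attributions (for a linear model, SHAP values reduce to weight times the feature's deviation from a baseline) and a simple counterfactual search that finds a change flipping the decision. The feature names, weights, and thresholds are hypothetical illustrations, not values from the paper.

```python
import math

# Hypothetical linear credit model; weights, bias, and baseline are
# illustrative only and not taken from the study.
WEIGHTS = {"on_time_delivery_rate": 2.0, "years_trading": 0.5, "graph_centrality": 1.5}
BIAS = -3.0
BASELINE = {"on_time_delivery_rate": 0.8, "years_trading": 2.0, "graph_centrality": 0.3}

def score(x):
    """Probability of approval under the toy logistic model."""
    z = BIAS + sum(WEIGHTS[k] * x[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def attributions(x):
    # For a linear model, SHAP values reduce to w_k * (x_k - baseline_k),
    # so each feature's contribution can be read off directly.
    return {k: WEIGHTS[k] * (x[k] - BASELINE[k]) for k in WEIGHTS}

def counterfactual(x, feature, threshold=0.5, step=0.01, max_steps=500):
    # Greedy one-feature search: raise the chosen feature until the
    # decision flips. A real system would respect feature bounds and
    # search over multiple features for a minimal change.
    cf = dict(x)
    for _ in range(max_steps):
        if score(cf) >= threshold:
            return cf
        cf[feature] += step
    return None

supplier = {"on_time_delivery_rate": 0.7, "years_trading": 1.0, "graph_centrality": 0.2}
p = score(supplier)                                   # below the 0.5 approval threshold
contrib = attributions(supplier)                      # shows which features pull the score down
cf = counterfactual(supplier, "on_time_delivery_rate")  # concrete change that flips the decision
```

The counterfactual output is the kind of actionable, supplier-facing guidance the framework aims for: rather than a bare rejection, it names a specific improvement (here, a higher on-time delivery rate) that would change the outcome.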
