Abstract
The increasing adoption of Artificial Intelligence (AI) in recruitment has introduced substantial ethical concerns, particularly regarding the propagation of unintended biases that may reinforce or exacerbate existing employment disparities. In response to these challenges, this study proposes an Explainable Artificial Intelligence (XAI) approach that transitions from opaque “black-box” algorithms to transparent “glass-box” frameworks, enabling greater interpretability and accountability. Specifically, we employ SHapley Additive exPlanations (SHAP) to examine and interpret decisions made by widely used machine learning models (Random Forest, XGBoost, and LightGBM) within the context of AI-driven hiring systems. Through extensive empirical analysis, SHAP values were used to uncover key features, such as gender, nationality, and recreational preferences, that contributed disproportionately to biased outcomes; these features were iteratively reviewed, modified, or excluded to construct fairer models. Following refinement, the updated models demonstrated a notable increase in classification accuracy, rising from approximately 78% to 85%, and precision, recall, and F1-score improved across all model variants. Most significantly, fairness, measured through demographic parity, improved by approximately 20 percentage points, rising from 0.70 to 0.90 and indicating a substantial reduction in discriminatory bias. These results provide strong empirical support for integrating XAI techniques into recruitment pipelines, demonstrating that fairness and accuracy need not be mutually exclusive. This study underscores the practical viability of explainable methods in bias-sensitive domains and offers a replicable framework for ethically aligned model development. Ultimately, our findings advocate adopting XAI methodologies as an essential safeguard for the equitable, transparent, and socially responsible deployment of AI technologies in employment decision-making.
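To make the audit loop described above concrete, the following is a minimal sketch, not the paper's actual pipeline: it trains an XGBoost hiring classifier on a hypothetical applicant dataset (file name, column names such as "hired" and "gender", and the 0/1 group encoding are all assumptions for illustration), ranks features by mean absolute SHAP value to flag sensitive attributes for review, and computes a demographic parity ratio analogous to the 0.70 to 0.90 figures reported.

```python
# Hedged sketch: SHAP-based feature audit plus a demographic parity check
# for a hiring classifier. Dataset and column names are hypothetical.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Assumed tabular applicant data with a binary target "hired"
# and a sensitive attribute "gender" encoded as 0/1.
df = pd.read_csv("applicants.csv")
X = df.drop(columns=["hired"])
y = df["hired"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# Global attributions: sensitive features (e.g. gender, nationality) with a
# large mean |SHAP| are candidates for review, modification, or removal.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
mean_abs_shap = pd.Series(np.abs(shap_values).mean(axis=0), index=X_test.columns)
print(mean_abs_shap.sort_values(ascending=False).head(10))

# Demographic parity ratio: selection rate of the unprivileged group divided
# by that of the privileged group (1.0 indicates parity).
y_pred = model.predict(X_test)
unpriv = (X_test["gender"] == 0).to_numpy()   # assumed group encoding
rate_unpriv = y_pred[unpriv].mean()
rate_priv = y_pred[~unpriv].mean()
print("Demographic parity ratio:", rate_unpriv / rate_priv)
```

In such a setup, the SHAP ranking and the parity ratio would be recomputed after each round of feature review, so that gains in fairness and accuracy can be tracked together rather than traded off blindly.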