In recent years, the field of recommendation systems has seen significant advances, with explainable recommendation gaining prominence as a crucial area of research. These systems aim to enhance user experience by providing transparent and persuasive recommendations accompanied by explanations. A persistent challenge, however, lies in addressing the biases that can influence both the recommendations and the explanations these systems produce. Such biases often stem from a tendency to favor popular items and to generate explanations that highlight their common attributes, deviating from the objective of delivering personalized recommendations and explanations. Although existing debiasing methods have been applied to explainable recommendation systems, they typically overlook the model-generated explanations when tackling bias. Consequently, biases in the generated explanations may persist, potentially compromising both system performance and user satisfaction.