The Growing Importance of Explainable Artificial Intelligence (XAI): Analyzing the Need for Transparent and Interpretable Machine Learning Models in High-Stakes Domains

Authors

  • Muhammad Nadeem
  • Muhammad Kashif Shaikh
  • Ayesha Urooj
  • Muhammad Asad Abbasi
  • Kashif Mughal

DOI:

https://doi.org/10.63075/zfz88982

Abstract

As artificial intelligence and machine learning are increasingly applied in high-stakes domains such as healthcare, business, and legal systems, the need for explainable AI (XAI) has grown correspondingly. This research examines how well users understand, trust, and rely on model explanations, and the effect of model explainability, through a mixed-methods evaluation of black-box models, post-hoc explanation methods, and inherently interpretable models across three high-risk applications. Data collected from 120 participants and real-world datasets were used to evaluate explanation fidelity, comprehension accuracy, and perceived trust and clarity. The results show that inherently interpretable models outperformed both black-box models and post-hoc explained models on all examined measures: interpretable models achieved a mean fidelity of 0.893, and users reached a comprehension accuracy of 84%. Mean trust and clarity ratings were also significantly higher for interpretable models, indicating a direct positive influence of interpretability on trust and ethical acceptability. In addition, a qualitative analysis of user feedback found that participants favored explanations with concrete, example-based content and interactive features, suggesting that building explainability into a model from the outset is preferable to approximating it after the fact. Taken together, the findings underline that explainability is tightly intertwined with technical accuracy and human trust, making human-oriented AI a necessity rather than a luxury in high-stakes environments.
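The fidelity score reported above (0.893 for interpretable models) follows a common convention in the XAI literature: fidelity is the agreement rate between an explanation (or surrogate model) and the underlying model's predictions on the same inputs. The abstract does not give the paper's exact formula, so the sketch below is a minimal illustration of that standard definition, with made-up prediction lists, not the authors' actual evaluation code.

```python
def fidelity(black_box_preds, surrogate_preds):
    """Fraction of instances where the surrogate's predictions
    agree with the black-box model's predictions."""
    if len(black_box_preds) != len(surrogate_preds):
        raise ValueError("prediction lists must be the same length")
    matches = sum(b == s for b, s in zip(black_box_preds, surrogate_preds))
    return matches / len(black_box_preds)

# Hypothetical predictions: the surrogate disagrees on 1 of 10 instances.
bb = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
sg = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
print(fidelity(bb, sg))  # -> 0.9
```

Under this definition, a fidelity of 0.893 would mean the explanation's predictions matched the model's output on roughly 89% of evaluated instances.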

Downloads

Download data is not yet available.

Published

2025-05-06

Issue

Section

Computer Science

How to Cite

The Growing Importance of Explainable Artificial Intelligence (XAI): Analyzing the Need for Transparent and Interpretable Machine Learning Models in High-Stakes Domains. (2025). Annual Methodological Archive Research Review, 3(5), 110-127. https://doi.org/10.63075/zfz88982