
Deep learning has revolutionized many fields, but it has also given rise to the 'black-box' problem, in which model predictions are neither interpretable nor transparent. Explainable Artificial Intelligence (XAI) seeks to address this problem by improving the interpretability and transparency of AI systems. We review important XAI methods, focusing on LIME, SHAP, and saliency maps, which explain the factors behind model predictions. The paper discusses the role of XAI in high-stakes fields such as healthcare, finance, and autonomous systems, emphasizing why trust is important in these sectors and how explanations support regulatory compliance while promoting ethical AI use. Despite the promise of XAI in promoting transparency, challenges persist, including the lack of standardized interpretability metrics and the difficulty some users face in relating model rationales to comprehensible forms. The study highlights the need for XAI frameworks that are not only robust but also scalable, bridging the gap between complex AI systems and their deployment in society. Ultimately, it is XAI that enables the responsible use of AI in the most critical domains of modern life by fostering accountability, fairness, and trust.
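To make the reviewed attribution methods concrete, the following is a minimal sketch of per-prediction feature attribution with SHAP, one of the methods surveyed above. The model and dataset are illustrative assumptions (scikit-learn's diabetes dataset and a random-forest regressor), not the paper's own experimental setup; the `shap` library calls shown are its standard tree-model API.

```python
# Minimal SHAP sketch. Assumes scikit-learn and the `shap` package are
# installed; the diabetes dataset stands in for any tabular regression task.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit an opaque ensemble model on a small tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles;
# each value is one feature's additive contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

# The contributions, added to the base (expected) value, recover the
# model's prediction for this instance.
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature:>8}: {contribution:+.3f}")
print(f"base value: {explainer.expected_value:.3f}")
```

LIME follows a similar per-prediction pattern (via `lime.lime_tabular.LimeTabularExplainer`), but explains an instance by fitting an interpretable local surrogate model around it rather than by computing Shapley values.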
Authors: Nabil Imam, Ibrahim Abubakar, Mohit Tiwari
DOI: https://doi.org/10.9790/0661-2606012936
Publication Year: 2024