Interpretable AI Techniques for Enhancing Transparency in Algorithmic Decision-Making for Public Welfare Services
  • Author(s): Kingsley Wisdom Akhibi
  • Paper ID: 1713026
  • Page: 1560-1570
  • Published Date: 22-12-2025
  • Published In: Iconic Research And Engineering Journals
  • Publisher: IRE Journals
  • e-ISSN: 2456-8880
  • Volume/Issue: Volume 9 Issue 6 December-2025
Abstract

The increasing utilization of artificial intelligence in public welfare services has transformed how governments determine eligibility, allocate resources, and distribute benefits, promising gains in efficiency, consistency, and scalability. Yet the deployment of algorithmic decision-making in high-stakes welfare contexts has also generated significant challenges concerning transparency, accountability, fairness, and citizen trust, particularly when opaque or so-called "black-box" models are used. This paper critically investigates the role of interpretable AI in meeting these challenges in public welfare systems. Drawing on interdisciplinary literature on machine learning interpretability, human-centered explanation design, administrative law, and public-sector governance, the research examines how operational efficiency can be balanced with democratic accountability through transparent and explainable AI systems. It discusses inherently interpretable model architectures, post-hoc explanation techniques, and the design of human-readable explanations tailored to different welfare recipients, while assessing methods for evaluating explanations in terms of quality, bias, and fairness. Situating interpretability within broader governance and legal frameworks, the paper further underlines its importance for appeal rights, institutional accountability, and procedural justice. Synthesizing technical, social, and regulatory perspectives, the research demonstrates that interpretability must be embedded in welfare AI by design rather than treated as an auxiliary feature. It concludes that interpretable, human-centered AI is a necessary component in ensuring that algorithmic welfare decisions are not only accurate and efficient but also socially legitimate, fair, and trustworthy.
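
For illustration only, the contrast between inherently interpretable models and post-hoc explanation techniques mentioned above can be sketched in a few lines of Python. The sketch is not drawn from the paper: the eligibility features, labelling rule, and data are hypothetical assumptions, a scikit-learn logistic regression stands in for an inherently interpretable model, and permutation importance stands in for a post-hoc explanation method.

# Minimal sketch (not from the paper): an inherently interpretable model versus a
# simple post-hoc explanation on a hypothetical welfare-eligibility dataset.
# Feature names, the labelling rule, and all data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
features = ["household_income", "dependents", "months_unemployed", "housing_cost"]
X = np.column_stack([
    rng.normal(30_000, 10_000, n),   # household_income
    rng.integers(0, 5, n),           # dependents
    rng.integers(0, 24, n),          # months_unemployed
    rng.normal(900, 300, n),         # housing_cost
])
# Hypothetical eligibility rule used only to generate labels for this sketch.
y = ((X[:, 0] < 28_000) & ((X[:, 1] >= 2) | (X[:, 2] >= 6))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Inherently interpretable model: each coefficient (on standardized features)
# states how a feature pushes the eligibility score and can be shown directly
# to caseworkers and applicants.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000)).fit(X_tr, y_tr)
for name, coef in zip(features, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name:>20}: coefficient = {coef:+.4f}")

# Post-hoc explanation: permutation importance treats the fitted model as a
# black box and reports how much accuracy drops when a feature is shuffled.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(features, result.importances_mean):
    print(f"{name:>20}: permutation importance = {imp:.4f}")

The same post-hoc step could be applied unchanged to a less transparent model (for example, a gradient-boosted ensemble), which is precisely the trade-off between built-in and after-the-fact interpretability that the paper examines.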

Keywords

Artificial Intelligence in Public Welfare; Algorithmic Decision-Making; Interpretable Artificial Intelligence; Explainable AI (XAI); Human-Centered Explanations; Public Sector AI Governance; Welfare Eligibility and Allocation Systems; Algorithmic Fairness and Bias; Citizen Trust and Procedural Justice; Ethical AI in Government; Administrative Law and Automated Decision Systems

Citations

IRE Journals:
Kingsley Wisdom Akhibi "Interpretable AI Techniques for Enhancing Transparency in Algorithmic Decision-Making for Public Welfare Services" Iconic Research And Engineering Journals Volume 9 Issue 6 2025 Page 1560-1570

IEEE:
Kingsley Wisdom Akhibi "Interpretable AI Techniques for Enhancing Transparency in Algorithmic Decision-Making for Public Welfare Services" Iconic Research And Engineering Journals, 9(6)