Investigating Data Leakage–Induced Over-Confidence and Explanation Faithfulness in Transformer-Based Text and Audio Models
  • Author(s): Rahul Vakiti
  • Paper ID: 1713617
  • Pages: 1481-1493
  • Published Date: 20-01-2026
  • Published In: Iconic Research And Engineering Journals
  • Publisher: IRE Journals
  • e-ISSN: 2456-8880
  • Volume/Issue: Volume 9 Issue 7 January-2026
Abstract

Transformer-based models now sit at the center of modern artificial intelligence, powering systems that read, listen, respond, and increasingly act on behalf of humans. From large-scale language models to voice-driven assistants, these systems exhibit a level of fluency and responsiveness that would have seemed implausible only a few years ago. Their success has been driven largely by scale: larger models, larger datasets, and longer training regimes, resulting in impressive performance across text and audio tasks.

Citations

IRE Journals:
Rahul Vakiti, "Investigating Data Leakage–Induced Over-Confidence and Explanation Faithfulness in Transformer-Based Text and Audio Models," Iconic Research And Engineering Journals, Volume 9, Issue 7, 2026, pp. 1481-1493. https://doi.org/10.64388/IREV9I7-1713617

IEEE:
R. Vakiti, "Investigating Data Leakage–Induced Over-Confidence and Explanation Faithfulness in Transformer-Based Text and Audio Models," Iconic Research And Engineering Journals, vol. 9, no. 7, pp. 1481-1493, 2026. https://doi.org/10.64388/IREV9I7-1713617