Transformer-based models now sit at the center of modern artificial intelligence, powering systems that read, listen, respond, and increasingly act on behalf of humans. From large-scale language models to voice-driven assistants, these systems exhibit a level of fluency and responsiveness that would have seemed implausible only a few years ago. Their success has been driven largely by scale: larger models, larger datasets, and longer training regimes have yielded impressive performance across text and audio tasks.