Multi-lingual translation using images has emerged as a powerful approach to bridge linguistic barriers in real-time communication. This paper presents a system that automatically extracts text from images and translates it into multiple target languages using a combination of Optical Character Recognition (OCR) and Neural Machine Translation (NMT) models. The proposed framework captures an input image, preprocesses it to enhance text visibility, and applies OCR to accurately detect and extract textual content across diverse scripts. A deep learning–based translation engine then converts the extracted text into user-selected languages while preserving contextual meaning. The system supports multiple languages, including English and various regional Indian languages, enabling seamless cross-lingual understanding. Experimental results demonstrate high accuracy in text detection and translation, even under challenging conditions such as noisy backgrounds, varying fonts, and low illumination. This work contributes to the development of intelligent, user-friendly translation tools suitable for education, tourism, document digitization, and assistive technologies.
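The preprocessing-then-OCR pipeline described above can be sketched minimally. The snippet below illustrates only the "enhance text visibility" step (grayscale conversion plus global thresholding) on plain nested pixel lists; the function names, weights, and threshold value are illustrative assumptions, not the paper's implementation, and a real system would use an imaging library (e.g. OpenCV or Pillow) and pass the binarized image to an OCR engine such as Tesseract before translation.

```python
# Sketch of the preprocessing stage (assumed design, not the paper's code):
# 1) convert RGB to grayscale, 2) binarize so dark text stands out for OCR.

def to_grayscale(rgb_pixels):
    """Luminance conversion using ITU-R BT.601 weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_pixels]

def binarize(gray_pixels, threshold=128):
    """Global threshold: dark pixels (likely text) -> 0, background -> 255."""
    return [[0 if p < threshold else 255 for p in row]
            for row in gray_pixels]

def preprocess(rgb_pixels, threshold=128):
    """Combined preprocessing step before OCR."""
    return binarize(to_grayscale(rgb_pixels), threshold)

# Tiny 1x2 example: one dark "text" pixel, one light background pixel.
img = [[(10, 10, 10), (240, 240, 240)]]
print(preprocess(img))  # [[0, 255]]

# In the full pipeline the result would then be fed to OCR and NMT, e.g.:
#   text = pytesseract.image_to_string(binarized_image, lang="eng+kan+hin")
#   translation = nmt_model.translate(text, target_lang=user_choice)
# (library calls shown for orientation only; exact APIs depend on the stack).
```

A global threshold is the simplest choice; adaptive methods (e.g. Otsu) handle the noisy backgrounds and low illumination mentioned in the abstract more robustly.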
IRE Journals:
R Lohith, Sangamesh D Chidri, Yashwanth K, Ganeshan, "Multi-Lingual Translation using Image," Iconic Research And Engineering Journals, Volume 9, Issue 6, 2025, pp. 432-439. https://doi.org/10.64388/IREV9I6-1712718