This project proposes a Real-Time Sign Language Translator designed to enable meaningful communication between the hearing-impaired community and the general public. Using real-time computer vision and deep learning techniques, the system interprets hand gestures captured by a webcam and translates them into corresponding letters or words. The system uses MediaPipe for optimized hand landmark detection and TensorFlow for model training and classification. A custom image dataset was created and processed to train a convolutional neural network that performs robustly under varying lighting and environmental conditions. The trained model attained 100% accuracy with low loss when tested. The graphical user interface provides real-time visual feedback by superimposing the predicted output on the live camera view. Designed to be lightweight and usable, the solution has future applications in inclusive communication environments such as classrooms, clinics, and customer service. The system not only addresses existing limitations in sign language recognition but also provides a foundation for future research in continuous gesture recognition and multi-language support.
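As a rough illustration of the pipeline the abstract describes (webcam capture, MediaPipe hand landmark detection, CNN classification, on-screen overlay), the minimal Python sketch below wires these stages together. The model file name ("sign_cnn.h5"), the 64x64 crop size, and the A-Z label set are assumptions for illustration only, not details taken from the paper.

import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

# Placeholders, not artifacts from the paper: a trained CNN saved as
# "sign_cnn.h5" that expects 64x64 RGB crops, and an A-Z label list.
model = tf.keras.models.load_model("sign_cnn.h5")
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
IMG_SIZE = 64

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        h, w, _ = frame.shape
        # Bounding box around the 21 detected landmarks, with padding.
        xs = [int(p.x * w) for p in lm]
        ys = [int(p.y * h) for p in lm]
        x1, y1 = max(min(xs) - 20, 0), max(min(ys) - 20, 0)
        x2, y2 = min(max(xs) + 20, w), min(max(ys) + 20, h)
        crop = frame[y1:y2, x1:x2]
        if crop.size:
            inp = cv2.resize(crop, (IMG_SIZE, IMG_SIZE)).astype(np.float32) / 255.0
            probs = model.predict(inp[np.newaxis], verbose=0)[0]
            letter = LABELS[int(np.argmax(probs))]
            # Superimpose the predicted letter on the live camera view.
            cv2.putText(frame, letter, (x1, max(y1 - 10, 30)),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 3)
    cv2.imshow("Sign Language Translator", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

Using the landmarks only to localize the hand and feeding the cropped image to the CNN is one plausible reading of the abstract; a landmark-vector classifier would be an equally valid variant.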
Sign Language Recognition, Deep Learning, MediaPipe, TensorFlow, Real-Time Gesture Detection, Human-Computer Interaction
IRE Journals:
Dr P Rajasekar, Sottam Aich, Ashish Mishra, Aditya Sharma
"Real-Time Sign Language Recognition System using MediaPipe and Deep Learning", Iconic Research And Engineering Journals, Volume 8, Issue 11, 2025, Page 1398-1403
IEEE:
Dr P Rajasekar, Sottam Aich, Ashish Mishra, Aditya Sharma
"Real-Time Sign Language Recognition System using MediaPipe and Deep Learning", Iconic Research And Engineering Journals, vol. 8, no. 11, pp. 1398-1403, 2025