People with hearing impairments often face barriers when communicating with those who do not know sign language. Sign language recognition systems address this problem using computer vision and machine learning. This paper presents a real-time sign language detection system that uses MediaPipe together with machine learning to interpret hand movements and convert them into readable text. The system captures hand signs through a webcam and applies MediaPipe's hand-tracking solution to locate key landmark points on the hand. The coordinates of these landmarks serve as features for training a machine learning model to recognize a wide range of sign language gestures. The system recognizes signs in near real time with high accuracy and does not require powerful hardware, making it a low-cost and effective aid for communication by deaf and mute people, with potential applications in education and in tools for daily life.
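The pipeline the abstract describes can be sketched in code: 21 hand landmarks per frame (the number MediaPipe's hand model produces) are normalized into a position- and scale-invariant feature vector, which a classifier maps to a gesture label. This is a minimal illustrative sketch, not the authors' implementation: the wrist-relative normalization scheme and the 1-nearest-neighbor classifier below are assumptions standing in for whatever preprocessing and model the paper actually uses, and in the real system the landmark list would come from MediaPipe rather than being supplied by hand.

```python
import math

# Sketch of the landmark-based recognition pipeline (illustrative assumptions:
# in the real system the 21 (x, y) landmarks per frame come from MediaPipe's
# hand tracking; the normalization and 1-NN classifier are stand-ins).

def normalize_landmarks(landmarks):
    """Translate landmarks so the wrist (landmark 0) is at the origin, then
    scale so the largest coordinate magnitude is 1. This makes the feature
    vector invariant to where the hand sits in the frame and how large it
    appears, so the classifier sees only the hand's shape."""
    wx, wy = landmarks[0]
    rel = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
    return [c / scale for pt in rel for c in pt]  # flat 42-value vector

def classify(features, templates):
    """Return the label of the nearest stored template by Euclidean distance.
    A stand-in for the trained model: any classifier taking the 42-value
    feature vector could slot in here."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(templates, key=lambda label: dist(features, templates[label]))
```

In the full system each webcam frame's landmarks would be extracted, normalized, and classified this way, with the predicted label rendered on screen as text.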
Sign Language Recognition, MediaPipe, Machine Learning, Computer Vision, Hand Gesture Detection, Real-Time Detection
IRE Journals:
Gladis Keziah A., Monisha V., Vinniammal B. "Real-Time Sign Language Detection System Using MediaPipe and Machine Learning" Iconic Research And Engineering Journals Volume 9 Issue 10 2026 Page 1022-1024 https://doi.org/10.64388/IREV9I10-1716012
IEEE:
Gladis Keziah A., Monisha V., Vinniammal B., "Real-Time Sign Language Detection System Using MediaPipe and Machine Learning," Iconic Research And Engineering Journals, vol. 9, no. 10, pp. 1022-1024, 2026. https://doi.org/10.64388/IREV9I10-1716012