Assistive communication technologies have significantly improved the quality of life for individuals with speech and motor impairments. However, existing systems often rely on unimodal inputs such as text or voice, limiting their usability in real-world scenarios. This paper proposes a deep learning-based multimodal assistive communication framework that integrates visual, auditory, and textual inputs to enable robust and adaptive communication. The framework leverages convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer-based architectures for feature extraction, and an attention-based multimodal fusion strategy is employed to combine heterogeneous features and improve contextual understanding. The system is designed to operate reliably under noisy and dynamic conditions by handling incomplete or ambiguous inputs from different modalities, and user-centric design considerations ensure accessibility, ease of use, and real-time responsiveness. Experimental results demonstrate improved accuracy, responsiveness, and usability compared to traditional unimodal systems. The proposed framework shows strong potential for deployment in real-world assistive applications, providing an inclusive, scalable, and intelligent communication solution.
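The attention-based multimodal fusion mentioned above can be sketched minimally in plain Python. This is an illustrative assumption, not the paper's implementation: the `attention_fusion` function, the query vector, and the three modality embeddings are all hypothetical, and a real system would learn these representations with the CNN, RNN, and transformer encoders described in the paper.

```python
import math

def attention_fusion(features, query):
    """Fuse per-modality feature vectors with dot-product attention."""
    # Score each modality against a shared query vector (dot product).
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in features]
    # Softmax the scores into attention weights (numerically stable form).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Fused representation: attention-weighted sum of the modality features.
    dim = len(features[0])
    fused = [sum(w * feat[i] for w, feat in zip(weights, features))
             for i in range(dim)]
    return fused, weights

# Hypothetical 3-d embeddings for visual, auditory, and textual inputs.
visual = [0.2, 0.8, 0.1]
audio  = [0.9, 0.1, 0.3]
text   = [0.4, 0.4, 0.4]
fused, weights = attention_fusion([visual, audio, text], query=[1.0, 0.0, 0.0])
```

Because the weights come from a softmax, a modality with a weak or missing signal simply receives a small weight rather than corrupting the fused vector, which is one way such a system can tolerate incomplete or ambiguous inputs.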
Assistive Communication, Multimodal Learning, Deep Learning, CNN, RNN, Transformer, Human-Computer Interaction
IRE Journals:
M Nithya, V Priya, S Rajeswari, S Harini "Assistive Communication Framework" Iconic Research And Engineering Journals Volume 9 Issue 10 2026 Page 2031-2035 https://doi.org/10.64388/IREV9I10-1716490
IEEE:
M Nithya, V Priya, S Rajeswari, S Harini, "Assistive Communication Framework," Iconic Research And Engineering Journals, vol. 9, no. 10, pp. 2031-2035, 2026. https://doi.org/10.64388/IREV9I10-1716490