Tri-Control
Author(s):
Chetan Kharade, Parvatibai Genba Moze College of Engineering, Pune; Sambhaji Devkate, Parvatibai Genba Moze College of Engineering, Pune; Kirti Kulkarni, Parvatibai Genba Moze College of Engineering, Pune; Amruta Golde, Parvatibai Genba Moze College of Engineering, Pune; Priyanka Kumbhar, Parvatibai Genba Moze College of Engineering, Pune
Keywords:
Multimodal Human-Computer Interaction, Accessible Computing Interface, Eye Movement Tracking System, Gesture Recognition via Sign Language, Voice-Controlled Computing
Abstract |
This paper introduces a comprehensive multimodal system designed for accessible computer control, incorporating eye movement tracking, gesture recognition based on sign language, and voice commands. The system aims to provide a hands-free, user-friendly interface that significantly enhances digital accessibility for users with disabilities, such as motor impairments or vision-related limitations, as well as for users seeking efficient multitasking solutions. Implemented using Python and associated libraries (OpenCV, TensorFlow, SpeechRecognition), the platform integrates real-time computer vision, machine learning, and speech recognition technologies. The result is a unified interaction model that empowers users to perform cursor movements, clicks, and various computing tasks with precision and convenience. Extensive testing demonstrated accuracy levels above 90% across the different modalities. The system promotes digital inclusion and provides a foundation for scalable assistive technology.
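To illustrate the kind of voice-command pipeline the abstract describes, the following is a minimal sketch of a phrase-to-action dispatcher. The command vocabulary and handler names here are assumptions for illustration, not the paper's actual implementation; in the full system the transcript would come from the SpeechRecognition library's recognizer, and the handlers would invoke an OS automation API rather than return strings.

```python
# Minimal voice-command dispatcher: maps recognized phrases to actions.
# In the real system, `transcript` would be produced by a speech recognizer
# (e.g. the SpeechRecognition library); here the recognition step is stubbed
# so the dispatch logic itself is self-contained and testable.

ACTIONS = {}

def command(phrase):
    """Decorator that registers a handler for a recognized voice phrase."""
    def register(fn):
        ACTIONS[phrase] = fn
        return fn
    return register

@command("left click")
def left_click():
    # A real system would call an OS automation API here.
    return "click:left"

@command("scroll down")
def scroll_down():
    return "scroll:down"

def dispatch(transcript):
    """Normalize a transcript and invoke the matching action, if any."""
    handler = ACTIONS.get(transcript.strip().lower())
    return handler() if handler else None
```

Normalizing the transcript before lookup makes dispatch tolerant of the casing and whitespace variation typical of speech-to-text output; unknown phrases fall through to `None` rather than raising.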
Other Details |
Paper ID: IJSRDV13I40059
Published in: Volume 13, Issue 4
Publication Date: 01/07/2025
Page(s): 70-72