
ISSN 2063-5346

VGG-16 BASED INDO-PAKISTANI SIGN LANGUAGE INTERPRETER


Dr. Ajitkumar Shitole1, Kunal Kulkarni2, Pratik Bathe3, Radhika Patil4, Sourabh Deshmane5
doi: 10.48047/ecb/2022.12.10.657

Abstract

Machine learning algorithms such as Convolutional Neural Networks (CNNs) combined with transfer learning can greatly improve the accuracy and efficiency of sign language recognition systems. Transfer learning reuses a pre-trained model and fine-tunes it for a specific task, in this case recognizing sign language gestures; this reduces the need for large datasets and time-consuming training, making such systems easier and more cost-effective to develop. The VGG-16 architecture is a popular choice for transfer learning in computer vision tasks, including sign language recognition: it has been shown to achieve high accuracy in image recognition and can be adapted for use in sign language recognition systems. Alongside the CNN model, a user interface (UI) can be developed through which users input their sign language gestures and receive text or speech output. The UI can be designed to be user-friendly and accessible, so that it is easy for both hearing and hearing-impaired individuals to use. Overall, a real-time sign language recognition system built on machine learning and paired with such a user interface can substantially improve accessibility and communication for individuals with hearing impairments, helping bridge the communication gap between hearing and hearing-impaired individuals and ensuring that everyone has equal access to information and experiences.
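The transfer-learning setup the abstract describes can be sketched in Keras as follows: load the VGG-16 convolutional base, freeze its weights, and attach a new classification head for the gesture classes. This is a minimal illustrative sketch, not the authors' exact model; the number of classes and head layer sizes are assumptions, and `weights=None` is used here only so the snippet runs without downloading the ImageNet weights that a real transfer-learning setup would load (`weights="imagenet"`).

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# VGG-16 convolutional base without its ImageNet classifier head.
# In practice, pass weights="imagenet" to load the pre-trained filters;
# weights=None is used here only to keep the sketch self-contained.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the base: only the new head is trained

NUM_CLASSES = 26  # hypothetical: one class per static alphabet gesture

# New classification head fine-tuned on the sign language dataset.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the base means training updates only the small dense head, which is what lets transfer learning work with far less data and compute than training VGG-16 from scratch.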
