Comparative Study of CNN Based Sign Language Translation Models

Authors

  • Aashi Upadyay, Student, Department of Information Technology, MIT School of Engineering, MIT ADT University, Pune, India
  • Akash Kashyap, Student, Department of Information Technology, MIT School of Engineering, MIT ADT University, Pune, India
  • Shounak Pattewale, Student, Department of Information Technology, MIT School of Engineering, MIT ADT University, Pune, India
  • Taha Bohra, Student, Department of Information Technology, MIT School of Engineering, MIT ADT University, Pune, India
  • Reetika Kerketta, Professor, Department of Information Technology, MIT School of Engineering, MIT ADT University, Pune, India

Keywords:

ASL, CNN, Hand gesture, Segmentation

Abstract

More than 1 million people around the world use American Sign Language. 98% of deaf individuals do not receive sign language schooling, and 70% are unemployed or underemployed. Much of this stems from a communication gap that technology and machine learning can help bridge. This research aims to let a sign language user and a spoken-language user converse seamlessly. In image segmentation, a neural network is trained to output a pixel-wise mask of the image, which helps in understanding the image at a much lower level, the pixel level. With the help of image segmentation, we train a machine to recognize a hand gesture as a sign language alphabet or word. The neural network used in this study is a convolutional neural network (CNN), a deep learning architecture that addresses the instance segmentation problem in machine learning and computer vision. In this study, we compare the most widely used classical machine learning and deep learning algorithms for classifying hand gestures captured from a live video feed as American Sign Language. The datasets used are the American Sign Language alphabet and basic American Sign Language salutations.
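
The pipeline described in the abstract can be illustrated with a minimal sketch (not the authors' published code): a simple HSV skin-color threshold stands in for the pixel-wise hand segmentation step, and a small Keras CNN classifies the segmented gesture as one of the ASL alphabet signs. The layer sizes, HSV thresholds, and 64x64 input resolution are assumptions for illustration only.

```python
# Hedged sketch of the segmentation + CNN classification pipeline.
# All thresholds, layer sizes, and the input resolution are assumptions,
# not the architecture published in the paper.
import numpy as np
import cv2
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (64, 64)   # assumed input resolution after preprocessing
NUM_CLASSES = 26      # A-Z ASL alphabet signs

def segment_hand(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a rough binary hand mask via HSV skin-color thresholding."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 30, 60], dtype=np.uint8)
    upper = np.array([25, 170, 255], dtype=np.uint8)
    # Pixel-wise hand/background mask (255 = hand, 0 = background)
    return cv2.inRange(hsv, lower, upper)

def build_classifier() -> tf.keras.Model:
    """A compact CNN: conv/pool blocks followed by a dense softmax head."""
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 1)),   # masked grayscale hand crop
        layers.Rescaling(1.0 / 255),          # normalize pixel values
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # one score per sign
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_classifier().summary()
```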

Published

29-06-2022

How to Cite

[1] A. Upadyay, A. Kashyap, S. Pattewale, T. Bohra, and R. Kerketta, “Comparative Study of CNN Based Sign Language Translation Models”, IJRESM, vol. 5, no. 6, pp. 289–291, Jun. 2022, Accessed: Dec. 21, 2024. [Online]. Available: https://journal.ijresm.com/index.php/ijresm/article/view/2221