Sign Language Recognition with Convolutional Neural Network

Authors

  • S. Mohamed Hussain Kani, UG Student, Department of Information Technology, Aalim Muhammed Salegh College of Engineering, Chennai, India
  • S. Abdullah, UG Student, Department of Information Technology, Aalim Muhammed Salegh College of Engineering, Chennai, India
  • U. Sheik Amanullah, UG Student, Department of Information Technology, Aalim Muhammed Salegh College of Engineering, Chennai, India
  • G. Divya, Assistant Professor, Department of Information Technology, Aalim Muhammed Salegh College of Engineering, Chennai, India

Keywords:

Sign language, Convolutional network, Datasets

Abstract

This paper presents an overview of sign language recognition using convolutional neural networks (CNNs). CNNs, a class of deep learning models specialized in image analysis and pattern recognition, are well suited to sign language recognition because they can extract relevant visual features and learn complex patterns. The process begins with a comprehensive dataset of sign language gestures covering diverse handshapes, gestures, facial expressions, and movements. This dataset is used to train the CNN model, enabling it to recognize and classify different sign language gestures based on the extracted visual features. During recognition, input gestures are captured with a camera or video input; the captured data is preprocessed to enhance its quality and is then fed into the trained CNN model. The model analyzes the visual features of the input and performs gesture classification, identifying the specific sign language gesture being performed.
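The abstract describes a two-stage pipeline: preprocessing of captured frames followed by CNN-based classification. The sketch below illustrates one plausible realization of that pipeline; the framework (TensorFlow/Keras with OpenCV), the 64x64 grayscale input size, the 26-class output, and the layer configuration are all assumptions for illustration, since the paper does not specify its architecture.

```python
# Hypothetical sketch of the capture -> preprocess -> classify pipeline.
# Framework, input resolution, class count, and layer sizes are assumed.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 64      # assumed input resolution
NUM_CLASSES = 26   # assumed: one class per static alphabet gesture

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Enhance a captured camera frame before feeding it to the CNN."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # discard colour
    gray = cv2.GaussianBlur(gray, (5, 5), 0)              # smooth sensor noise
    gray = cv2.resize(gray, (IMG_SIZE, IMG_SIZE))         # fixed CNN input size
    return gray.astype("float32")[..., np.newaxis] / 255.0  # scale to [0, 1]

def build_model() -> tf.keras.Model:
    """Convolution + pooling blocks followed by a dense softmax classifier."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # one score per gesture
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: train on the preprocessed gesture dataset, then classify a new frame.
# model = build_model()
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
# pred = model.predict(preprocess(frame)[np.newaxis, ...]).argmax()
```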


Published

27-05-2023


How to Cite

[1] S. M. H. Kani, S. Abdullah, U. S. Amanullah, and G. Divya, "Sign Language Recognition with Convolutional Neural Network," IJRESM, vol. 6, no. 5, pp. 116–119, May 2023. [Online]. Available: https://journal.ijresm.com/index.php/ijresm/article/view/2708