Deep Learning Research for Eye Sight Assisting Model

Authors

  • Hritik Valluvar Department of Computer Science and Engineering, SRM Institute of Science & Technology, Lucknow, India
  • Utkarsh Shukla Department of Computer Science and Engineering, SRM Institute of Science & Technology, Lucknow, India
  • Nirav Agarwal Department of Computer Science and Engineering, SRM Institute of Science & Technology, Lucknow, India

Keywords

CNN, Google Text-to-Speech API, LSTM, NLP, Xception

Abstract

According to the World Health Organization (WHO), 285 million people worldwide are visually impaired, nearly 3.6% of the world population. Every year, many visually impaired people lose their lives in accidents. This work aims to bridge the gap between visually impaired people and their surrounding environment. This paper proposes a model that generates descriptions of an image using a robust combination of the Xception model and an LSTM (Long Short-Term Memory, a type of RNN). We have implemented a deep recurrent neural network architecture that automatically produces a short description of images. Our model uses a CNN (Convolutional Neural Network) to extract image features. We then feed these features into an NLP (Natural Language Processing) decoder to generate a description of the image.
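The pipeline the abstract describes (CNN features conditioning a recurrent decoder that emits a caption word by word) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `extract_features` stands in for the Xception encoder, `next_word_probs` stands in for the trained LSTM decoder, and the vocabulary and caption are hypothetical placeholders.

```python
import numpy as np

# Hypothetical toy vocabulary; a real model would have thousands of words.
VOCAB = ["startseq", "a", "person", "crossing", "the", "street", "endseq"]

def extract_features(image):
    # Stand-in for the Xception CNN encoder: in the described pipeline this
    # would be the 2048-d vector from Xception's global-average-pooling layer.
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    return rng.standard_normal(2048)

def next_word_probs(features, partial_caption):
    # Stand-in for the LSTM decoder: returns a probability distribution over
    # the vocabulary given the image features and the caption so far. Here it
    # deterministically walks through one fixed caption, for illustration only.
    idx = min(len(partial_caption), len(VOCAB) - 1)
    probs = np.full(len(VOCAB), 1e-3)
    probs[idx] = 1.0
    return probs / probs.sum()

def generate_caption(image, max_len=10):
    """Greedy decoding: repeatedly append the most probable next word
    until the end token is produced or the length limit is reached."""
    features = extract_features(image)
    words = ["startseq"]
    for _ in range(max_len):
        probs = next_word_probs(features, words)
        word = VOCAB[int(np.argmax(probs))]
        words.append(word)
        if word == "endseq":
            break
    return " ".join(words[1:-1])  # strip start/end tokens

print(generate_caption("street_scene.jpg"))  # -> "a person crossing the street"
```

In a full system, the generated sentence would then be passed to a text-to-speech service (the paper's keywords mention the Google Text-to-Speech API) so the description can be read aloud to the user.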

Published

01-01-2021

How to Cite

[1] H. Valluvar, U. Shukla, and N. Agarwal, “Deep Learning Research for Eye Sight Assisting Model”, IJRESM, vol. 3, no. 12, pp. 168–171, Jan. 2021.

Section

Articles