Face Emotion and Audio Analysis using Machine Learning
Keywords:
Artificial Intelligence, Machine Learning, Recurrent Neural Network, Transfer Learning
Abstract
Human emotions are mental states of feeling that arise spontaneously rather than through conscious effort and are accompanied by physiological changes, which in turn alter a speaker's voice and manner of speech. Among the most important emotions are neutral, happiness, anger, sadness, fear, and enthusiasm. Emotion recognition from an audio signal involves audio feature extraction and visualization, training the model and evaluating its accuracy, implementing a CNN model, and classifying speech emotions. The feature vector consists of elements of the audio signal that characterize speaker-specific properties such as tone, pitch, and energy, which are crucial for training the classifier to recognize a particular emotion accurately. A great deal of research has been devoted to the computational modeling of human emotion, yet such systems still fall far short of human perception. In this paper, we present an approach that predicts human emotions frame by frame using a deep Convolutional Neural Network (CNN) and examines how emotion intensity in the audio changes from low to high levels. The FERC-2013 database is used for training. The proposed experiment yields quite good results, and the accuracy obtained may encourage researchers to develop future computer-based emotion recognition systems.
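As a rough illustration of the pipeline described above, the sketch below extracts speaker-specific audio features (MFCCs, pitch, and energy) and feeds them to a small CNN classifier. This is a minimal sketch only, assuming librosa and TensorFlow/Keras are available; the file path, six-emotion label set, and network shape are placeholders rather than the configuration used in the paper.

```python
# Minimal sketch: audio feature extraction + CNN emotion classifier.
# Assumes librosa and TensorFlow are installed; "speech.wav" and the
# label set below are placeholders, not the paper's actual setup.
import numpy as np
import librosa
import tensorflow as tf

EMOTIONS = ["neutral", "happy", "anger", "sadness", "fear", "enthusiasm"]

def extract_features(path, sr=22050, n_mfcc=40, frames=130):
    """Return a fixed-size feature map (MFCCs + pitch + energy) for one clip."""
    y, sr = librosa.load(path, sr=sr, duration=3.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)           # tone / timbre
    pitch = librosa.yin(y, fmin=50, fmax=500, sr=sr)[np.newaxis, :]  # fundamental frequency
    energy = librosa.feature.rms(y=y)                                # frame-wise energy
    # Stack rows into one (n_mfcc + 2, frames) map, padded or cropped to `frames` columns.
    feat = np.vstack([mfcc[:, :frames], pitch[:, :frames], energy[:, :frames]])
    if feat.shape[1] < frames:
        feat = np.pad(feat, ((0, 0), (0, frames - feat.shape[1])))
    return feat[..., np.newaxis]  # add channel axis for the CNN

def build_cnn(input_shape, n_classes=len(EMOTIONS)):
    """Small 2D CNN over the feature map; the architecture is illustrative only."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Usage (placeholder path):
# x = extract_features("speech.wav")
# model = build_cnn(x.shape)
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
```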
License
Copyright (c) 2021 A. Ashwin Siva, C. Infan Chelsea, K. Kishore, A. Thiyagarajan
This work is licensed under a Creative Commons Attribution 4.0 International License.