Mobile-Based Human Emotion Recognition Based on Speech and Heart Rate

Abstract

Mobile-based human emotion recognition is a challenging problem. Most of the approaches proposed and built in this field exploit contexts derived from external sensors and the smartphone, but these approaches face various obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. The system is designed to recognize four human emotions: angry, happy, sad, and normal. The smartphone records the user's speech and sends it to a server. A smartwatch fixed on the user's wrist measures the user's heart rate while the user is speaking and sends it, via Bluetooth, to the smartphone, which in turn forwards it to the server. At the server side, speech features are extracted from the speech signal and classified by a neural network. To reduce misclassification, the heart rate measurement directs the extracted speech features to either the excited (angry and happy) neural network or the calm (sad and normal) neural network. Despite the challenges associated with the system, it achieved an accuracy of 96.49% for known speakers and 79.05% for unknown speakers.
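The heart-rate-gated routing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the heart-rate threshold (90 bpm) and the placeholder classifiers are assumptions introduced here for clarity.

```python
# Sketch of routing speech features to one of two emotion classifiers
# based on a heart rate measurement, as described in the abstract.

EXCITED_EMOTIONS = ("angry", "happy")   # high-arousal classes
CALM_EMOTIONS = ("sad", "normal")       # low-arousal classes
HR_THRESHOLD_BPM = 90  # assumed boundary between calm and excited states

def classify_excited(features):
    # Placeholder for the excited-emotion neural network.
    return EXCITED_EMOTIONS[0] if sum(features) >= 0 else EXCITED_EMOTIONS[1]

def classify_calm(features):
    # Placeholder for the calm-emotion neural network.
    return CALM_EMOTIONS[0] if sum(features) >= 0 else CALM_EMOTIONS[1]

def recognize_emotion(speech_features, heart_rate_bpm):
    """Direct extracted speech features to the excited or calm classifier."""
    if heart_rate_bpm >= HR_THRESHOLD_BPM:
        return classify_excited(speech_features)
    return classify_calm(speech_features)
```

In practice each placeholder would be a trained neural network over the extracted speech features; the heart rate only decides which of the two networks performs the final classification.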