The study of electroencephalography (EEG) signals is not a new topic, but the analysis of human emotions during exposure to music is considered an important research direction. Although distributed across various academic databases, research on this concept is limited. To extend research in this area, the researchers explored and analysed the academic articles published within this scope. Thus, in this paper a systematic review is carried out to map the research landscape of EEG-based human emotion into a taxonomy. We systematically searched for all articles on EEG human emotion based on music in three main databases (ScienceDirect, Web of Science and IEEE Xplore) from 1999 to 2016. These databases feature academic studies that used EEG to measure brain signals, with a focus on the effects of music on human emotions. The screening and filtering of articles were performed in three iterations. In the first iteration, duplicate articles were excluded. In the second iteration, articles were filtered according to their titles and abstracts, and those outside the scope of our domain were excluded. In the third iteration, articles were filtered by reading the full text, and those outside the scope of our domain or not meeting our criteria were excluded. Based on the inclusion and exclusion criteria, 100 articles were selected and separated into five classes. The first class (39 articles, 39%) concerns emotion, wherein various emotions are classified using artificial intelligence (AI). The second class (21 articles, 21%) is composed of studies that use EEG techniques; this class is named 'brain condition'. The third class (8 articles, 8%) relates to feature extraction, a step that precedes emotion classification. Although this process makes use of classifiers, these eight articles are not listed under the first class because they focus on feature extraction rather than classifier accuracy.
The fourth class (26 articles, 26%) comprises studies that compare two or more groups to identify and characterise human emotion from EEG. The final class (6 articles, 6%) contains articles that study music as a stimulus and its impact on brain signals. We then discuss five main categories: action types, age of the participants, number of participants, duration of recording and of listening to music, and lastly the countries or nationalities of the authors who published these previous studies. We afterward identify the main characteristics of this promising area of science in terms of the motivation for using the EEG process to measure human brain signals, the open challenges obstructing its employment, and recommendations to improve the utilization of the EEG process.
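The three-iteration screening described above (deduplication, title/abstract filtering, full-text review) can be sketched programmatically. The records, scope keywords, and the `meets_criteria` flag below are illustrative assumptions standing in for human reviewer judgements, not the review's actual protocol:

```python
# Minimal sketch of a three-pass article screening, assuming hypothetical
# record fields; the full-text decision is modeled as a boolean flag a
# human reviewer would set after reading the paper.

def screen(articles, scope_keywords):
    # Iteration 1: drop duplicate records (keyed on normalized title).
    seen, unique = set(), []
    for art in articles:
        key = art["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(art)
    # Iteration 2: keep articles whose title or abstract mentions the scope.
    in_scope = [
        a for a in unique
        if any(k in (a["title"] + " " + a["abstract"]).lower()
               for k in scope_keywords)
    ]
    # Iteration 3: full-text check against the inclusion criteria.
    return [a for a in in_scope if a.get("meets_criteria", False)]

articles = [
    {"title": "EEG emotion via music", "abstract": "music-evoked emotion", "meets_criteria": True},
    {"title": "EEG emotion via music", "abstract": "duplicate record", "meets_criteria": True},
    {"title": "Crop yield modelling", "abstract": "agronomy study", "meets_criteria": True},
]
kept = screen(articles, scope_keywords=["eeg", "emotion", "music"])
print(len(kept))  # the duplicate and the out-of-scope record are excluded
```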
Deficits in the ability to process emotions characterize several neuropsychiatric disorders and are traits of Parkinson's disease (PD), and there is a need for a method of quantifying emotion, which is currently assessed by clinical diagnosis. Electroencephalogram (EEG) signals, as an activity of the central nervous system (CNS), can reflect the underlying true emotional state of a person. This study applied machine-learning algorithms to categorize EEG emotional states in PD patients, classifying six basic emotions (happiness, sadness, fear, anger, surprise and disgust) in comparison with healthy controls (HC). Emotional EEG data were recorded from 20 PD patients and 20 age-, education level- and sex-matched healthy controls using multimodal (audio-visual) stimuli. The use of nonlinear features derived from the higher-order spectra (HOS) has been reported to be a promising approach to classifying emotional states. In this work, we conducted a comparative study of the performance of k-nearest neighbor (kNN) and support vector machine (SVM) classifiers using features derived from HOS and from the power spectrum. Analysis of variance (ANOVA) showed that the power spectrum and HOS based features were statistically significant across the six emotional states (p < 0.0001). Classification results show that using the selected HOS based features instead of power spectrum based features provided comparatively better accuracy for all six classes, with overall accuracies of 70.10% ± 2.83% for PD patients and 77.29% ± 1.73% for HC in the beta (13-30 Hz) band using the SVM classifier. Moreover, PD patients achieved lower accuracy in the processing of negative emotions (sadness, fear, anger and disgust) than in the processing of positive emotions (happiness, surprise) compared with HC. These results demonstrate the effectiveness of applying machine learning techniques to the classification of emotional states in PD patients in a user-independent manner using EEG signals.
The accuracy of the system could be improved further by investigating other HOS based features. This study might lead to a practical system for the noninvasive assessment of the emotional impairments associated with neurological disorders.
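The power-spectrum baseline in the study above can be sketched as band-limited spectral power fed to an SVM. The sketch below uses synthetic surrogate signals rather than the study's PD/HC recordings, and the sampling rate, channel count, and trial counts are assumptions; HOS features are omitted for brevity:

```python
# Hedged sketch: beta-band (13-30 Hz) power features via Welch's method,
# classified with an SVM. Signals are synthetic surrogates, not real EEG.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs, n_channels, n_trials = 128, 4, 120  # assumed acquisition parameters

def beta_power(trial):
    """Mean 13-30 Hz power per channel (one feature per channel)."""
    f, pxx = welch(trial, fs=fs, nperseg=fs)
    band = (f >= 13) & (f <= 30)
    return pxx[:, band].mean(axis=1)

X, y = [], []
for label in (0, 1):  # two surrogate "emotion" classes
    beta_amp = 1.0 + 2.0 * label  # class 1 carries a stronger beta rhythm
    for _ in range(n_trials):
        t = np.arange(2 * fs) / fs  # 2 s trial
        beta = beta_amp * np.sin(2 * np.pi * 20 * t)
        trial = rng.normal(size=(n_channels, t.size)) + beta
        X.append(beta_power(trial))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In the actual study, trials would be real band-filtered EEG epochs and the feature set would additionally include the HOS-derived nonlinear features reported to outperform this spectral baseline.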
Human emotion recognition has been a major field of research in recent decades owing to its noteworthy academic and industrial applications. However, most state-of-the-art methods identify emotions by analyzing facial images; emotion recognition using electroencephalogram (EEG) signals has received less attention, even though EEG signals have the advantage of capturing real emotion. Moreover, very few EEG signal databases are publicly available for affective computing. In this work, we present a database consisting of EEG signals from 44 volunteers, 23 of whom are female. A 32-channel CLARITY EEG traveler sensor was used to record four emotional states, namely happy, fear, sad, and neutral, elicited by showing subjects 12 videos, with 3 video files devoted to each emotion. Participants were mapped to the emotion they reported feeling after watching each video. The recorded EEG signals were then used to classify the four types of emotions based on the discrete wavelet transform and an extreme learning machine (ELM), providing an initial benchmark classification performance. The ELM algorithm is used for channel selection followed by subband selection. The proposed method performs best when features are extracted from the gamma subband of the FP1-F7 channel, achieving 94.72% accuracy. The presented database will be made available to researchers for affective recognition applications.
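The wavelet-plus-ELM pipeline above can be illustrated with a minimal sketch. The single-level Haar decomposition, pure-NumPy ELM, and synthetic one-channel signals are simplifying assumptions; the study used deeper DWT decomposition to isolate the gamma subband and real 32-channel recordings:

```python
# Hedged sketch: one-level Haar DWT subband energy as the feature, and a
# minimal extreme learning machine (random hidden layer, least-squares
# readout) as the classifier. Data are synthetic, not the EEG database.
import numpy as np

rng = np.random.default_rng(1)

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    even, odd = x[..., ::2], x[..., 1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def subband_energy(x):
    """Mean energy of the detail (high-frequency) subband."""
    _, detail = haar_dwt(x)
    return (detail ** 2).mean(axis=-1, keepdims=True)

class ELM:
    """Single-hidden-layer ELM: fixed random weights, pinv output layer."""
    def __init__(self, n_hidden=16):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random hidden layer
        T = np.eye(int(y.max()) + 1)[y]           # one-hot targets
        self.beta = np.linalg.pinv(H) @ T         # least-squares readout
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

# Synthetic two-class data: class 1 signals carry more broadband power,
# which shows up as larger detail-subband energy.
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        sig = rng.normal(scale=1.0 + label, size=256)
        X.append(subband_energy(sig))
        y.append(label)
X, y = np.array(X), np.array(y)
X = (X - X.mean()) / X.std()  # standardize before the tanh hidden layer
acc = (ELM().fit(X, y).predict(X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The closed-form pseudoinverse readout is what makes ELM training fast: only the output weights are learned, so repeating the fit per channel and per subband, as the selection procedure in the study requires, stays cheap.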