Materials and Methods: Simple and complex sounds (pure tones and naturally produced Malay consonant-vowels [CVs]) were used to evoke cortical auditory-evoked potential (CAEP) signals. The study analyzed the CAEP components characteristic of the selected population and determined which component is most affected by the two distinct stimuli. Classification algorithms were then used to assess the brain's ability to distinguish CAEPs evoked by the stimulus contrasts.
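Component measurements of this kind are typically made by searching a fixed latency window of the averaged waveform for a peak. The sketch below is a generic peak-picking routine on a synthetic averaged waveform; the sampling rate, window bounds, and waveform are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def peak_amplitude(erp, fs, window, polarity=1):
    """Return (amplitude, latency_s) of an ERP component.

    erp      : 1-D averaged waveform, time-locked to stimulus onset
    fs       : sampling rate in Hz
    window   : (start_s, end_s) search window for the component
    polarity : +1 for positive peaks (e.g. P1), -1 for negative (e.g. N2)
    """
    start, end = (int(t * fs) for t in window)
    segment = polarity * erp[start:end]
    idx = int(np.argmax(segment))          # largest deflection of that polarity
    return polarity * segment[idx], (start + idx) / fs

# Synthetic averaged CAEP: a 2 microvolt positive deflection near 60 ms (P1-like)
fs = 1000  # Hz, assumed
t = np.arange(0, 0.5, 1 / fs)
erp = 2.0 * np.exp(-((t - 0.06) ** 2) / (2 * 0.01 ** 2))

amp, lat = peak_amplitude(erp, fs, (0.03, 0.09), polarity=+1)
```

Comparing `amp` and `lat` across the tone-evoked and CV-evoked averages is the kind of contrast the amplitude analysis describes.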
Results: The ERP waveforms showed some resemblance to those reported in previous studies on native speakers of English. In addition, the P1 and N2 amplitudes differed significantly between the two stimuli.
Conclusion: The results show high classification accuracy for the brain's discrimination of the auditory stimuli. They also resemble previous findings from native English speakers obtained with similar tone and English CV stimuli. However, the P1 amplitudes and latencies differed significantly with stimulus complexity.
MATERIALS AND METHODS: The EEG signal was used as the brain response, evoked by two auditory stimuli (tones and consonant-vowel stimuli). The study was carried out on Malaysians (Malay and Chinese) with normal hearing and with hearing loss. The subjects' EEG data and the extracted nonlinear features were ranked to obtain the maximum classification accuracy.
RESULTS: The study formulated a Normal Hearing Ethnicity Index and a Sensorineural Hearing Loss Ethnicity Index. These indices classify human ethnicity from brain auditory responses using numerical values of the response-signal features. Three classification algorithms were used to verify the ethnicity classification. The Support Vector Machine (SVM) classified ethnicity with an accuracy of 90% in the normal-hearing case and 84% in the sensorineural hearing loss (SNHL) case.
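As a rough illustration of the SVM step, here is a minimal numpy-only linear SVM trained with Pegasos-style subgradient descent on synthetic two-class data. The study's nonlinear EEG features, ranking procedure, and kernel choice are not specified here, so the features, labels, and hyperparameters below are all assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM by minimizing hinge loss + L2 penalty
    (Pegasos-style stochastic subgradient descent).

    X : (n_samples, n_features) feature matrix; y : labels in {-1, +1}.
    Returns the weight vector w and bias b.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying learning rate
            margin = y[i] * (X[i] @ w + b)
            w *= (1 - eta * lam)           # L2 regularization shrinkage
            if margin < 1:                 # point violates the margin
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

# Synthetic two-class data standing in for the ranked EEG feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (50, 4)), rng.normal(1, 0.3, (50, 4))])
y = np.array([-1] * 50 + [1] * 50)

w, b = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w + b) == y).mean())
```

On real data, the reported 90% (normal hearing) and 84% (SNHL) accuracies would come from held-out test sets rather than the training accuracy computed here.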
CONCLUSION: The classification indices separated the ethnic groups with high accuracy in both the normal-hearing and SNHL cases. The SVM classifier achieved good accuracy in classifying the auditory brain responses. The proposed indices may therefore constitute valuable tools for classifying brain responses according to ethnicity.
METHODS: An improved Dempster-Shafer evidence theory (DST) based on Wasserstein distance and Deng entropy was proposed to reduce conflicts among results by combining the credibility degree between pieces of evidence with the uncertainty degree of the evidence. To validate the method, illustrative examples were analyzed and the method was applied to baby cry recognition. The whale optimization algorithm-variational mode decomposition (WOA-VMD) was used to optimally decompose the baby cry signals, deep features of the decomposed components were extracted with the VGG16 model, and Long Short-Term Memory (LSTM) networks classified the cry signals. The improved DST decision method was then used for decision fusion.
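For context, the classical Dempster's rule of combination that the improved method builds on can be sketched as follows. This does not reproduce the paper's Wasserstein-distance and Deng-entropy weighting; the cry categories and mass values are illustrative assumptions:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal
    elements to masses) with Dempster's rule: multiply masses of
    intersecting focal elements, discard conflicting mass, renormalize."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict
    return {s: m / k for s, m in combined.items()}

# Two classifiers assigning belief mass over hypothetical cry types
H = frozenset({"hunger"})
P = frozenset({"pain"})
D = frozenset({"discomfort"})
m1 = {H: 0.6, P: 0.3, D: 0.1}
m2 = {H: 0.5, P: 0.4, D: 0.1}
fused = dempster_combine(m1, m2)
```

High conflict between sources is exactly where this basic rule behaves poorly, which motivates credibility- and uncertainty-weighted variants like the one proposed.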
RESULTS: The proposed fusion method achieved an accuracy of 90.15% in classifying three types of baby cry, an improvement of 2.90% to 4.98% over existing DST fusion methods and of 5.79% to 11.53% over the latest baby cry recognition methods.
CONCLUSION: The proposed method optimally decomposes the baby cry signal, effectively reduces conflict among the outputs of the deep learning models, and improves the accuracy of baby cry recognition.