Affiliations 

  • Faculty of Creative Arts, Music Department, University of Malaya, Kuala Lumpur, Malaysia
PeerJ Comput Sci, 2023;9:e1472.
PMID: 37547395 DOI: 10.7717/peerj-cs.1472

Abstract

Music can serve as a potent tool for conveying emotions and regulating learners' moods, and the systematic application of emotional assessment can help improve teaching efficiency. However, existing Artificial Intelligence (AI)-based music emotion analysis methods rely primarily on pre-labeled content, such as lyrics, and fail to adequately account for the perception, transmission, and recognition of music signals. To address this limitation, this study first employs sound-level segmentation, data-frame processing, and threshold determination to enable intelligent segmentation and recognition of notes. Next, based on the extracted audio features, a Radial Basis Function (RBF) model is used to construct a music emotion classifier. Finally, correlation feedback is applied to further label the classification results and to train the classifier. The study compares the music emotion classification scheme commonly used in Chinese music education with the Hevner emotion model and identifies four emotion categories, Quiet, Happy, Sad, and Excited, for classifying performers' emotions. The test results demonstrate that audio feature recognition takes a mere 0.004 min, with an accuracy rate above 95%. Furthermore, the classification of performers' emotions based on audio features is consistent with conventional human cognition.
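To make the described pipeline concrete, the sketch below is an illustrative implementation, not the authors' code: it frames a mono signal, segments notes with a relative energy threshold (standing in for the paper's sound-level segmentation and threshold determination), and trains a small Gaussian RBF network on feature vectors. All function names, the threshold value, the network sizes, and the placeholder features are assumptions; the paper's actual audio features and its correlation-feedback training loop are not reproduced here.

```python
# Hypothetical sketch of the abstract's pipeline: framing, threshold-based
# note segmentation, and an RBF-network emotion classifier. Parameter values
# and feature vectors are illustrative assumptions, not the paper's.
import numpy as np
from scipy.cluster.vq import kmeans2

def frame_signal(x, frame_len=1024, hop=512):
    """Split a mono signal into overlapping frames (data-frame processing)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def segment_notes(x, frame_len=1024, hop=512, rel_threshold=0.1):
    """Sound-level segmentation: frames whose RMS energy exceeds a threshold
    relative to the loudest frame are active; contiguous runs are 'notes'."""
    frames = frame_signal(x, frame_len, hop)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    active = rms > rel_threshold * rms.max()
    edges = np.diff(active.astype(int))
    starts = list(np.where(edges == 1)[0] + 1) + ([0] if active[0] else [])
    ends = list(np.where(edges == -1)[0] + 1) + ([len(active)] if active[-1] else [])
    return list(zip(sorted(starts), sorted(ends)))  # (start, end) frame indices

class RBFClassifier:
    """Gaussian RBF network: k-means centers plus a least-squares linear readout."""
    def __init__(self, n_centers=16, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _phi(self, X):
        # Gaussian activations over squared distances to the centers
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers, _ = kmeans2(X, self.n_centers, minit='++', seed=0)
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)  # one-hot targets
        self.W, *_ = np.linalg.lstsq(self._phi(X), T, rcond=None)
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._phi(X) @ self.W, axis=1)]

# Usage with stand-in feature vectors and the paper's four emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))  # placeholder audio feature vectors
y = rng.choice(np.array(['Quiet', 'Happy', 'Sad', 'Excited']), size=200)
clf = RBFClassifier(n_centers=12, gamma=0.5).fit(X, y)
print(clf.predict(X[:5]))
```

An RBF network of this shape (fixed Gaussian centers, linear output weights) trains in closed form via least squares, which is one plausible reading of the "RBF model" named in the abstract; the correlation-feedback step would then relabel low-confidence outputs and refit the readout.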
