Methods: Twenty-three participants were recruited for this study: 11 Huffaz who had memorized all 30 chapters of the Quran and 12 non-Huffaz as the control group. All participants had normal hearing and underwent an ABR test with and without psychological tasks. The ABR was elicited at 70 dB nHL using a 3000 Hz tone burst stimulus with a 2-0-2 cycle at a stimulus repetition rate of 40 Hz. The ABR wave V amplitudes and latencies were measured and statistically compared. A forward digit span test was also conducted to determine participants' working memory capacity.
Results: No significant differences were found in ABR wave V amplitudes or latencies between Huffaz and non-Huffaz, either with or without psychological tasks. Within each group, amplitudes and latencies also did not differ significantly between the task and no-task conditions. In addition, no significant difference was identified in the forward digit span working memory scores between the two groups.
Conclusions: Based on the ABR findings of this study, the Huffaz showed auditory sensory gating capacity similar to that of the non-Huffaz group. The ABR results were consistent with the digit span working memory scores, implying that both groups have similar working memory performance. However, this conclusion is limited to the specific assessment methods used in this study.
OBJECTIVE: The objective of our study was to investigate the use of movement sensor data from a smart watch to infer an individual's emotional state. We present findings from a user study with 50 participants.
METHODS: The experimental design was a mixed design: within-subjects (emotion: happy, sad, and neutral) and between-subjects (stimulus type: audiovisual "movie clips" and audio "music clips"). Each participant experienced both target emotions within a single stimulus type. All participants walked 250 m while wearing a smart watch on one wrist and a heart rate monitor strap on the chest. They also answered a short questionnaire (20 items; Positive Affect and Negative Affect Schedule, PANAS) before and after experiencing each emotion. The data obtained from the heart rate monitor served as supplementary information to our data. We performed time series analysis on the smart watch data and a t test on the questionnaire items to measure the change in emotional state. Heart rate data were analyzed using one-way analysis of variance. We extracted features from the time series using sliding windows and used these features to train and validate classifiers that inferred an individual's emotion.
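The sliding-window feature extraction described above can be illustrated with a minimal sketch. The window size, step size, and statistical features below are illustrative assumptions, not the parameters used in the study, and the function name is hypothetical:

```python
# Hypothetical sketch of sliding-window feature extraction from a
# 1-D movement-sensor time series (e.g., accelerometer magnitude).
# Window/step sizes and feature choices are illustrative only.
import numpy as np

def extract_features(signal, window=128, step=64):
    """Slide a fixed-size window over the signal and compute
    simple per-window statistics as feature vectors."""
    features = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        features.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(features)

# Example on 1,000 samples of synthetic sensor data
rng = np.random.default_rng(0)
X = extract_features(rng.normal(size=1000))
print(X.shape)  # (14, 4): one 4-feature row per window
```

Each resulting row could then serve as one training example for the per-participant emotion classifiers mentioned in the methods.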
RESULTS: Overall, 50 young adults participated in our study; of them, 49 were included for the affective PANAS questionnaire and 44 for the feature extraction and building of personal models. Participants reported feeling less negative affect after watching sad videos or after listening to sad music, P