METHODS: A double-blind quasi-experiment was carried out on NC (n = 43) and NCI (n = 33) groups. Participants in each group were randomly assigned to treatment and control groups. The treatment group underwent auditory-cognitive training, whereas the control group watched documentary videos, three times per week for 8 consecutive weeks. Study outcomes, which included the Montreal Cognitive Assessment, Malay Hearing in Noise Test, Dichotic Digit Test, Gaps in Noise Test and Pitch Pattern Sequence Test, were measured at 4-week intervals: at baseline and at weeks 4, 8 and 12.
RESULTS: Mixed-design ANOVA showed significant training effects on total Montreal Cognitive Assessment and Dichotic Digit Test scores in both groups, NC (P
METHODS: A double-blind, placebo-controlled, counterbalanced crossover design with permuted-block randomisation for drug order was followed. Dexamphetamine (0.45 mg/kg, PO, q.d.) was administered to healthy participants. Phantom word illusion (speech illusion) and visual-induced flash illusion (VIFI; visual illusion) tests were administered to determine whether TBWs were altered as a function of the delay between stimulus presentations. The emotional content of words reported in the phantom word illusion was also analysed.
RESULTS: Dexamphetamine significantly increased the total number of phantom words/speech illusions (p
OBJECTIVE: The objective of our study was to investigate the use of movement sensor data from a smart watch to infer an individual's emotional state. We present our findings of a user study with 50 participants.
METHODS: The experimental design was a mixed-design study: within-subjects (emotions: happy, sad, and neutral) and between-subjects (stimulus type: audiovisual "movie clips" and audio "music clips"). Each participant experienced the emotions through a single stimulus type. All participants walked 250 m while wearing a smart watch on one wrist and a heart rate monitor strap on the chest. They also answered a short questionnaire (20 items; Positive Affect and Negative Affect Schedule, PANAS) before and after experiencing each emotion. The data obtained from the heart rate monitor served as supplementary information. We performed time series analysis on data from the smart watch and a t test on questionnaire items to measure the change in emotional state. Heart rate data were analyzed using one-way analysis of variance. We extracted features from the time series using sliding windows and used the features to train and validate classifiers that determined an individual's emotion.
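The sliding-window feature extraction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific features (per-window mean, standard deviation, and range), the window size, and the step are assumptions chosen for clarity.

```python
from statistics import mean, stdev

def sliding_window_features(signal, window_size, step):
    """Slide a fixed-length window over a 1-D movement signal and
    compute simple summary features per window (illustrative choices:
    mean, standard deviation, range)."""
    features = []
    for start in range(0, len(signal) - window_size + 1, step):
        window = signal[start:start + window_size]
        features.append({
            "mean": mean(window),
            "std": stdev(window),
            "range": max(window) - min(window),
        })
    return features

# Example: a short synthetic wrist-acceleration trace (hypothetical values)
trace = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.2, 0.1]
feats = sliding_window_features(trace, window_size=4, step=2)
```

Each resulting feature dictionary would become one training example for the per-participant classifiers; overlapping windows (step smaller than window size) yield more examples from the same walk.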
RESULTS: Overall, 50 young adults participated in our study; of these, 49 were included in the affective PANAS questionnaire analysis and 44 in the feature extraction and building of personal models. Participants reported feeling less negative affect after watching sad videos or listening to sad music, P