Displaying publications 1 - 20 of 38 in total

  1. Ami M, Abdullah A, Awang MA, Liyab B, Saim L
    Laryngoscope, 2008 Apr;118(4):712-7.
    PMID: 18176342 DOI: 10.1097/MLG.0b013e318161e521
    To investigate cochlear outer hair cell function based on distortion product otoacoustic emission (DPOAE) in patients with tinnitus.
    Matched MeSH terms: Auditory Perception/physiology*
  2. Zilany MS, Bruce IC, Carney LH
    J Acoust Soc Am, 2014 Jan;135(1):283-6.
    PMID: 24437768 DOI: 10.1121/1.4837815
    A phenomenological model of the auditory periphery in cats was previously developed by Zilany and colleagues [J. Acoust. Soc. Am. 126, 2390-2412 (2009)] to examine the detailed transformation of acoustic signals into the auditory-nerve representation. In this paper, a few issues arising from the responses of the previous version have been addressed. The parameters of the synapse model have been readjusted to better simulate reported physiological discharge rates at saturation for higher characteristic frequencies [Liberman, J. Acoust. Soc. Am. 63, 442-455 (1978)]. This modification also corrects the responses of higher-characteristic frequency (CF) model fibers to low-frequency tones that were erroneously much higher than the responses of low-CF model fibers in the previous version. In addition, an analytical method has been implemented to compute the mean discharge rate and variance from the model's synapse output that takes into account the effects of absolute refractoriness.
    Matched MeSH terms: Auditory Perception*
  3. Zubair S, Fisal N
    Sensors (Basel), 2014;14(5):8996-9026.
    PMID: 24854362 DOI: 10.3390/s140508996
    The need for reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by low overhead and efficient, reliable data transfer in traditional wireless sensor networks, this potential has yet to be exploited for viable routing in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique is presented that does not restrict the choice of the next hop to the nodes in the selected route. This is achieved by creating virtual clusters based on spectrum correlation, from which the next hop is chosen based on link quality. The design maximizes the use of idle listening and receiver-contention prioritization for energy efficiency, avoidance of routing hot spots, and stability. The validation results, which closely follow the simulation results, show that the developed scheme achieves greater advancement toward the sink than the usual route-selection decisions of relevant ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results demonstrate the enhanced reliability, lower latency and energy efficiency of the presented scheme.
    Matched MeSH terms: Auditory Perception
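Entry 3's abstract names the ingredients of its forwarding decision (virtual clusters, link quality, advancement toward the sink) without giving the metric. A generic greedy geographic-forwarding rule combining advancement with packet reception ratio (PRR) is sketched below; this is an assumed, common metric, not the paper's exact scheme:

```python
import math

def pick_next_hop(node, sink, neighbors):
    """Hypothetical greedy geographic forwarding: among candidate
    neighbors, pick the one maximizing advancement toward the sink
    weighted by link quality (PRR). A generic rule, not the paper's."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d_self = dist(node, sink)
    best, best_score = None, 0.0
    for pos, prr in neighbors:            # (position, packet reception ratio)
        advancement = d_self - dist(pos, sink)
        score = advancement * prr         # progress * reliability
        if score > best_score:
            best, best_score = (pos, prr), score
    return best                           # None -> no positive progress (routing hole)

# A closer but lossy neighbor can lose to a reliable one:
hop = pick_next_hop(node=(0, 0), sink=(100, 0),
                    neighbors=[((30, 5), 0.6), ((20, 0), 0.95)])
print(hop)
```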
  4. Adam MS, Por LY, Hussain MR, Khan N, Ang TF, Anisi MH, et al.
    Sensors (Basel), 2019 Aug 29;19(17).
    PMID: 31470520 DOI: 10.3390/s19173732
    Many receiver-based Preamble Sampling Medium Access Control (PS-MAC) protocols have been proposed to provide better performance for variable traffic in wireless sensor networks (WSNs). However, most of these protocols cannot prevent the occurrence of incorrect traffic convergence, which causes the receiver node to wake up more frequently than the transmitter node. In this research, a new protocol is proposed to prevent this problem. The proposed mechanism has four components: an initial control frame message, a traffic estimation function, a control frame message, and an adaptive function. The initial control frame message is used to initiate message transmission by the receiver node. The traffic estimation function is proposed to reduce the wake-up frequency of the receiver node by using the proposed traffic status register (TSR), idle listening times (ILTn, ILTk), and the "number of wake-ups without receiving a beacon message" (NWwbm). The control frame message supplies the essential information the receiver node needs to compute the next wake-up-interval (WUI) time for the transmitter node using the proposed adaptive function. The adaptive function is used by the receiver node to calculate the next WUI time of each transmitter node. Several simulations were conducted against benchmark protocols. The outcomes indicate that the proposed mechanism can prevent the incorrect traffic convergence problem that causes frequent wake-ups of the receiver node compared with the transmitter node. Moreover, the simulation results also indicate that the proposed mechanism can reduce energy consumption, lower latency, improve throughput, and produce a higher packet delivery ratio compared with related works.
    Matched MeSH terms: Auditory Perception
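Entry 4 describes an adaptive function that sets each transmitter's next wake-up interval (WUI) from traffic history, but the abstract gives no formula. Below is a sketch of one plausible back-off-style policy; the function name, constants, and update rule are all hypothetical, and the real protocol derives WUI from its traffic status register:

```python
def next_wakeup_interval(wui, got_data, wui_min=0.1, wui_max=4.0, factor=1.5):
    """Hypothetical adaptive WUI update in the spirit of entry 4:
    shrink the interval while a transmitter has traffic, grow it
    multiplicatively when a wake-up finds nothing. Illustrative only."""
    if got_data:
        wui = max(wui_min, wui / factor)   # traffic seen: wake sooner
    else:
        wui = min(wui_max, wui * factor)   # idle wake-up: back off
    return wui

# Two busy periods followed by idle ones drive the interval apart:
wui = 1.0
for got_data in [True, True, False, False, False]:
    wui = next_wakeup_interval(wui, got_data)
    print(f"{wui:.2f} s")
```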
  5. Uesaki M, Ashida H, Kitaoka A, Pasqualotto A
    Sci Rep, 2019 Oct 08;9(1):14440.
    PMID: 31595003 DOI: 10.1038/s41598-019-50912-8
    Changes in the retinal size of stationary objects provide a cue to the observer's motion in the environment: increases indicate the observer's forward motion, and decreases backward motion. In this study, a series of images, each comprising a pair of pine-tree figures, was translated into the auditory modality using sensory substitution software. The resulting auditory stimuli were presented in an ascending sequence (i.e. increasing in intensity and bandwidth, compatible with forward motion), a descending sequence (i.e. decreasing in intensity and bandwidth, compatible with backward motion), or in a scrambled order. During the presentation of the stimuli, blindfolded participants estimated the lengths of wooden sticks by haptics. Results showed that those exposed to the stimuli compatible with forward motion underestimated the lengths of the sticks. This consistent underestimation may share some aspects with visual size-contrast effects such as the Ebbinghaus illusion. In contrast, participants in the other two conditions did not show such a magnitude of error in size estimation, which is consistent with the "adaptive perceptual bias" towards acoustic increases in intensity and bandwidth. In sum, we report a novel cross-modal size-contrast illusion, which reveals that auditory motion cues compatible with listeners' forward motion modulate haptic representations of object size.
    Matched MeSH terms: Auditory Perception
  6. Abdul Rauf A. Bakar, Jayasree Santhosh, Mohammed G. Al-zidi, Ibrahim Amer Ibrahim, Ng SC, Hua NT
    Sains Malaysiana, 2017;46:2477-2488.
    The deficiency in the human auditory system of individuals suffering from sensorineural hearing loss (SNHL) is known to be associated with difficulty in detecting various speech phonological features that are frequently related to speech perception. This study investigated the effects of speech articulation features on the amplitude and latency of cortical auditory evoked potential (CAEP) components. The speech articulation features included the placing contrast and the voicing contrast. Twelve Malay subjects with normal hearing and 12 Malay subjects with SNHL were recruited for the study. The CAEP responses were recorded at higher amplitude with longer latency when stimulated by voicing contrast cues compared with placing contrast cues. Subjects with SNHL elicited greater amplitudes with prolonged latencies in the majority of the CAEP components for both speech stimuli. The different spectral and time-varying acoustic cues of the speech stimuli were reflected in the strength and timing of the CAEP responses. We anticipate that CAEP responses could equip audiologists and clinicians with useful knowledge concerning the potential deprivation experienced by hearing-impaired individuals in passive auditory perception. This would help to determine what type of speech stimuli might be useful in measuring speech perception abilities, especially in the Malay Malaysian ethnic group, when choosing a rehabilitation program, since no such study has been conducted to evaluate speech perception in the Malaysian clinical population.
    Matched MeSH terms: Auditory Perception
  7. Majid A, Roberts SG, Cilissen L, Emmorey K, Nicodemus B, O'Grady L, et al.
    Proc Natl Acad Sci U S A, 2018 Nov 06;115(45):11369-11376.
    PMID: 30397135 DOI: 10.1073/pnas.1720419115
    Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.
    Matched MeSH terms: Auditory Perception/physiology*
  8. Tan SS, Maul TH, Mennie NR
    PLoS One, 2013;8(5):e63042.
    PMID: 23696791 DOI: 10.1371/journal.pone.0063042
    Visual-to-auditory conversion systems have been in existence for several decades. Image sonification systems are among the front runners in providing visual capabilities to blind users, and the auditory cues they generate are easier to learn and adapt to than those of other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure their performance. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank them accordingly.
    Matched MeSH terms: Auditory Perception/physiology*
  9. Maamor N, Billings CJ
    Neurosci Lett, 2017 Jan 01;636:258-264.
    PMID: 27838448 DOI: 10.1016/j.neulet.2016.11.020
    The purpose of this study was to determine the effects of noise type, signal-to-noise ratio (SNR), age, and hearing status on cortical auditory evoked potentials (CAEPs) to speech sounds. This helps to explain the hearing-in-noise difficulties often seen in the aging and hearing impaired population. Continuous, modulated, and babble noise types were presented at varying SNRs to 30 individuals divided into three groups according to age and hearing status. Significant main effects of noise type, SNR, and group were found. Interaction effects revealed that the SNR effect varies as a function of noise type and is most systematic for continuous noise. Effects of age and hearing loss were limited to CAEP latency and were differentially modulated by energetic and informational-like masking. It is clear that the spectrotemporal characteristics of signals and noises play an important role in determining the morphology of neural responses. Participant factors, such as age and hearing status, also play an important role in determining the brain's response to complex auditory stimuli and contribute to the ability to listen in noise.
    Matched MeSH terms: Auditory Perception/physiology
  10. Dewey RS, Francis ST, Guest H, Prendergast G, Millman RE, Plack CJ, et al.
    Neuroimage, 2020 Jan 01;204:116239.
    PMID: 31586673 DOI: 10.1016/j.neuroimage.2019.116239
    In animal models, exposure to high noise levels can cause permanent damage to hair-cell synapses (cochlear synaptopathy) for high-threshold auditory nerve fibers without affecting sensitivity to quiet sounds. This has been confirmed in several mammalian species, but the hypothesis that lifetime noise exposure affects auditory function in humans with normal audiometric thresholds remains unconfirmed and current evidence from human electrophysiology is contradictory. Here we report the auditory brainstem response (ABR), and both transient (stimulus onset and offset) and sustained functional magnetic resonance imaging (fMRI) responses throughout the human central auditory pathway, across lifetime noise exposure. Healthy young individuals aged 25-40 years were recruited into high (n = 32) and low (n = 30) lifetime noise exposure groups, stratified for age, and balanced for audiometric threshold up to 16 kHz. fMRI demonstrated robust broadband noise-related activity throughout the auditory pathway (cochlear nucleus, superior olivary complex, nucleus of the lateral lemniscus, inferior colliculus, medial geniculate body and auditory cortex). fMRI responses in the auditory pathway to broadband noise onset were significantly enhanced in the high noise exposure group relative to the low exposure group, differences in sustained fMRI responses did not reach significance, and no significant group differences were found in the click-evoked ABR. Exploratory analyses found no significant relationships between the neural responses and self-reported tinnitus or reduced sound-level tolerance (symptoms associated with synaptopathy). In summary, although the effect was small, these fMRI results suggest that lifetime noise exposure may be associated with central hyperactivity in young adults with normal hearing thresholds.
    Matched MeSH terms: Auditory Perception/physiology*
  11. Yusuf, A.N., Abdul Hamid, K., Mohamad, M., Abd Hamid, A.I.
    Medicine & Health, 2008;3(2):300-317.
    MyJurnal
    In this study, functional magnetic resonance imaging (fMRI) was used to investigate functional specialisation in human auditory cortices during listening. A silent fMRI paradigm was used to reduce scanner sound artefacts on the functional images. The subjects were instructed to pay attention to the white noise stimulus presented binaurally at an intensity level of 70 dB above the hearing level for normal people. Functional specialisation was studied using the Matlab-based Statistical Parametric Mapping (SPM5) software by means of fixed-effects (FFX), random-effects (RFX) and conjunction analyses. Individual analyses on all subjects indicated asymmetrical bilateral activation of the left and right hemispheres in Brodmann areas (BA) 22, 41 and 42, involving the primary and secondary auditory cortices. The percentage of signal change was larger in BA 22, 41 and 42 on the right as compared with the left (p > 0.05). The average number of activated voxels in all the respective Brodmann areas was higher in the right hemisphere than in the left (p > 0.05). FFX results showed that the point of maximum intensity was in the right BA 41, with 599 ± 1 activated voxels observed in the right temporal lobe as compared with 485 ± 1 in the left temporal lobe. The RFX results were consistent with those of the FFX. The conjunction analysis that followed showed the right BA 41 and left BA 22 as the common activated areas in all subjects. The results confirm the specialisation of the right auditory cortices in processing non-verbal stimuli.
    Matched MeSH terms: Auditory Perception
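Entries 11, 13 and 16 all report the percentage of signal change as their activation measure. For reference, a standalone version of the standard definition is sketched below; the published analyses were run in SPM5, so this is illustrative only and the toy numbers are made up:

```python
import numpy as np

def percent_signal_change(bold, task_idx, rest_idx):
    """Standard fMRI percent signal change for one voxel/region:
    100 * (mean task signal - mean rest signal) / mean rest signal."""
    task = bold[task_idx].mean()
    rest = bold[rest_idx].mean()
    return 100.0 * (task - rest) / rest

# Toy time series: 20 rest scans followed by 20 task scans.
bold = np.concatenate([np.full(20, 100.0), np.full(20, 101.5)])
psc = percent_signal_change(bold, task_idx=slice(20, 40), rest_idx=slice(0, 20))
print(f"{psc:.2f}%")   # 1.50%
```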
  12. Nurul Izzah Wahidul Azam, Amir Muhriz Abdul Latiff, Chandra Kannan Thanapalan, Raja Mohamad Alif Raja Adnan
    MyJurnal
    Introduction: Restricted and repetitive behaviours (RRBs) are one of the core criteria for autism spectrum disorder (ASD). The exhibition of RRBs has profound implications for the functioning of these children and their families. Evidence indicates that RRBs are related to reward-system dysfunction in the basal ganglia of these children, and that RRBs induce intrinsically rewarding effects in children with ASD. Listening to music has been found to influence the reward system in the typical population and has shown promise as a complementary strategy for ASD. One study found that high-functioning adolescents with ASD were cognitively stimulated by listening to happy music. Planning interventions for RRBs that target the mechanism of reward-system function remains unexplored. The primary objective of this study is to examine the effect of happy music on RRB symptoms. Methods: This study will use a randomised controlled trial design with pre-test and post-test assessments in 20 children with ASD. Two parallel, randomly assigned groups will undergo twelve weeks of intervention sessions. The experimental group will listen to happy music and engage in free play sessions; the control group will engage in free play sessions only, without music. Parents will complete the Repetitive Behaviour Scale-Revised, which consists of 6 subscales on RRBs, to measure the study outcome. Results: The study will compare RRBs between the two groups. Conclusion: The outcome of this study may set forth further investigation into the management of RRBs using a non-aversive contemporary approach.
    Matched MeSH terms: Auditory Perception
  13. Mohamad, M., Yusoff, A.N., Mukari, S.Z.M., Abdullah, A., Abd Hamid, A.I.
    MyJurnal
    This study was carried out to investigate the effects of a noisy background on brain activation during a working memory task. Fourteen healthy male subjects underwent silent functional magnetic resonance imaging (fMRI) scans while listening to words presented verbally against quiet (WIS) and noisy (WIN) backgrounds. The stimuli were presented binaurally to the subjects at 70 dB sound pressure level (SPL) in both conditions. Group results indicated significant (p < 0.001) widespread bilateral brain activation in the primary auditory cortex, superior temporal gyrus, inferior frontal gyrus, supramarginal gyrus and inferior parietal lobes during WIS. Additional significant activation was observed in the middle cingulate cortex and anterior cingulate cortex during WIN, suggesting the involvement of the cingulate cortex in working memory processing against a noisy background. The mean percentage of signal change in all regions was higher during WIN than during WIS. Right-hemispheric predominance was observed for both conditions in the primary auditory cortex and middle frontal gyrus, which could be attributed to the increased difficulty of the tasks. The results demonstrate that background noise increased task demand and difficulty, and that task demand plays an important role in determining the activation magnitude in these brain areas during a working memory task.
    Matched MeSH terms: Auditory Perception
  14. Abdul Latif R, Idrus MF
    MyJurnal DOI: 10.37134/jsspj.vol9.2.1.2020
    Music can influence many things. It is known as a source of entertainment, and it has been shown to regulate emotion, capture attention, lift the spirit and increase work output. Nowadays, people who love listening to music believe that it entertains them and thus helps motivate them to continue an activity. The aim of this study was to determine the effect of different music tempos on emotion among gym users in UiTM Seremban 3. Sixty participants, all gym users who attended the gym in UiTM Seremban 3, were recruited. Subjects were randomly assigned into three different groups (n=20 in each): group 1, fast tempo (>120 bpm); group 2, slow tempo (
    Matched MeSH terms: Auditory Perception
  15. Cila Umat, Nahazatul Islia Jamari
    MyJurnal
    The study examined the use of linguistic contextual cues among native Malay-speaking, normal-hearing young adults. Ten undergraduate students of Universiti Kebangsaan Malaysia participated in the study. All subjects had normal hearing, with average hearing threshold levels for the left and right ears of 7.8 dB (SD 4.1). The Malay Hearing in Noise Test (MyHINT) materials were employed and presented to the subjects at an approximately 65 dBA presentation level. Testing was conducted in a sound field in three listening conditions: in quiet, in noise at +5 dB signal-to-noise ratio (SNR) and at 0 dB SNR. In every test condition, three lists of MyHINT were administered to each subject. The magnitude of context effects was measured using the j factor, derived from the recognition probabilities for whole sentences (Ps) and for the constituent words in the sentences (Pw), where j = log Ps / log Pw. Results showed that all subjects scored 100% identification of words in sentences and of whole sentences in the quiet listening condition, while performance at 0 dB SNR was significantly poorer than in quiet and at +5 dB SNR (p < 0.001). The j values were significantly correlated with the probability of recognizing words in the sentences (r = 0.515, p = 0.029), with lower j values associated with lower Pw values. Subjects did not differ significantly from each other in their use of contextual cues in adverse listening conditions [F(9, 7) = 1.34, p = 0.359]. Using the linear regression function for j on word recognition probabilities, predicted Ps values were calculated; the predicted and measured probabilities of recognizing whole sentences were highly correlated (r = 0.973, p < 0.001). The results suggest that linguistic contextual information becomes increasingly important for the recognition of sentences by normal-hearing young adult listeners as the SNR deteriorates.
    Matched MeSH terms: Auditory Perception
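The j factor in entry 15 is easy to compute from the two recognition probabilities it is defined over. A minimal worked example follows; the 0.80/0.60 figures are made up for illustration:

```python
import math

def j_factor(p_sentence, p_word):
    """Boothroyd-Nittrouer j factor: j = log(Ps) / log(Pw), the
    effective number of independent recognition channels. A value of
    j near the number of words means little contextual benefit;
    smaller j means stronger use of context."""
    return math.log(p_sentence) / math.log(p_word)

# Words recognized 80% of the time, whole sentences 60% of the time:
print(f"j = {j_factor(0.60, 0.80):.2f}")   # ~2.29
```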
  16. Ahmad Nazlim Yusoff, Mazlyfarina Mohamad, Mohd Mahadir Ayob, Mohd Harith Hashim
    MyJurnal
    A functional magnetic resonance imaging (fMRI) study was conducted on 4 healthy male and female subjects to investigate brain activation during passive and active listening. Two different experimental conditions were used. In the first condition, subjects listened to a simple arithmetic instruction (e.g. one-plus-two-plus-three-plus-four) - passive listening. In the second condition, subjects were given the same series of arithmetic instructions and were required to listen and perform the calculation - active listening. The data were analysed using the Statistical Parametric Mapping (SPM5) and MATLAB 7.4 (R2007a) software. The results obtained from the fixed-effects (FFX) and random-effects (RFX) analyses show that the active-state signal intensity was significantly higher (p < 0.05) than the resting-state signal intensity for both conditions. The results also indicate significant differences (p < 0.001) in brain activation between passive and active listening. The cortical regions activated during passive listening, as obtained from the FFX of the first condition, were symmetrical in the left and right temporal and frontal lobes, covering the cortical auditory areas. For the second condition, active listening, more activation occurred in the left hemisphere, with a reduction in the number of activated voxels and their signal intensity in the right hemisphere. Activation mainly occurred in the middle temporal gyrus, precentral gyrus, middle frontal gyrus, superior temporal gyrus and several other areas of the frontal lobes. The point of maximum signal intensity shifted to new coordinates during active listening. The magnetic resonance signal intensity and the number of activated voxels in the right and left superior temporal lobes were also reduced in the second condition compared with the first. These results strongly suggest the existence of functional specialisation and indicate different networks for the two conditions, pertaining to functional connectivity between activation areas during listening and during listening while performing a simple arithmetic task.
    Matched MeSH terms: Auditory Perception
  17. Ahmad Nazlim Yusoff, Khairiah Abdul Hamid, Farah Nabila Ab Rahman, Mazlyfarina Mohamad, Siti Zamratol-Mai Sarah Mukari
    MyJurnal
    In this study, the asymmetry of the main effects of action, background and tonal frequency during pitch memory processing was investigated by means of brain activation. Eighteen participants (mean age 27.6 years) were presented with low- and high-frequency tones in quiet and in noise. They listened to, discriminated and recognized the target tone against the final tone in a series of four distracting tones. The main effects were studied using analysis of variance (ANOVA) with action (to wring a rubber bulb vs. not to wring), background (in quiet vs. in noise) and frequency (low vs. high) as the factors (and levels respectively). The main effect of action was in the right pre-central gyrus (PCG), in conformity with its contralateral behavior. The main effect of background indicated the bilateral primary auditory cortices (PAC) and was right-lateralized, attributable to the white noise. The main effect of frequency was also observed in the PAC but was bilaterally equal and attributable to the low-frequency tones. Although the main effect of frequency suggests that the temporo-spectral lateralization dichotomy is not especially rigid, the right lateralization of the PAC for the main effect of background clearly demonstrates its functional asymmetry, suggesting different perceptual functionality of the right and left PAC.
    Matched MeSH terms: Auditory Perception
  18. Valentini A, Ricketts J, Pye RE, Houston-Price C
    J Exp Child Psychol, 2018 03;167:10-31.
    PMID: 29154028 DOI: 10.1016/j.jecp.2017.09.022
    Reading and listening to stories fosters vocabulary development. Studies of single word learning suggest that new words are more likely to be learned when both their oral and written forms are provided, compared with when only one form is given. This study explored children's learning of phonological, orthographic, and semantic information about words encountered in a story context. A total of 71 children (8- and 9-year-olds) were exposed to a story containing novel words in one of three conditions: (a) listening, (b) reading, or (c) simultaneous listening and reading ("combined" condition). Half of the novel words were presented with a definition, and half were presented without a definition. Both phonological and orthographic learning were assessed through recognition tasks. Semantic learning was measured using three tasks assessing recognition of each word's category, subcategory, and definition. Phonological learning was observed in all conditions, showing that phonological recoding supported the acquisition of phonological forms when children were not exposed to phonology (the reading condition). In contrast, children showed orthographic learning of the novel words only when they were exposed to orthographic forms, indicating that exposure to phonological forms alone did not prompt the establishment of orthographic representations. Semantic learning was greater in the combined condition than in the listening and reading conditions. The presence of the definition was associated with better performance on the semantic subcategory and definition posttests but not on the phonological, orthographic, or category posttests. Findings are discussed in relation to the lexical quality hypothesis and the availability of attentional resources.
    Matched MeSH terms: Auditory Perception*
  19. Dzulkarnain AAA, Azizi AK, Sulaiman NH
    J Taibah Univ Med Sci, 2020 Dec;15(6):495-501.
    PMID: 33318741 DOI: 10.1016/j.jtumed.2020.08.007
    Objective: This study aims to investigate the auditory sensory gating capacity in Huffaz using an auditory brainstem response (ABR) test with and without psychological tasks.

    Methods: Twenty-three participants were recruited for this study, comprising 11 Huffaz who had memorized 30 chapters of the Islamic Scripture (the Quran) and 12 non-Huffaz as the control group. All participants had normal hearing and underwent an ABR test with and without psychological tasks. The ABR was elicited at 70 dB nHL using a 3000 Hz tone burst stimulus with a 2-0-2 cycle at a stimulus repetition rate of 40 Hz. The ABR wave V amplitude and latencies were measured and statistically compared. A forward digit span test was also conducted to determine participants' working memory capacity.

    Results: There were no significant differences in ABR wave V amplitudes and latencies between Huffaz and non-Huffaz, either with or without psychological tasks. Within each group, there were also no significant differences in ABR wave V amplitudes and latencies between the ABR with and without psychological tasks. In addition, no significant differences were identified in the digit span working memory scores between the two groups.

    Conclusions: In this study, based on the ABR findings, Huffaz showed the same auditory sensory gating capacity as the non-Huffaz group. The ABR result was consistent with the digit span working memory test score. This finding implies that both groups have similar working memory performance. However, the conclusion is limited to the specific assessment method that we used in this study.

    Matched MeSH terms: Auditory Perception
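Entry 19's stimulus is specified precisely enough to reconstruct: a 3000 Hz tone burst gated with a 2-0-2 cycle envelope (two cycles rise, no plateau, two cycles fall). The sketch below assumes linear ramps and a 48 kHz sampling rate; calibration to 70 dB nHL is transducer-specific and omitted:

```python
import numpy as np

def tone_burst_2_0_2(freq=3000.0, fs=48000.0):
    """Generate a 2-0-2 cycle tone burst: 2-cycle rise, no plateau,
    2-cycle fall. Linear gating ramps are assumed here; gating
    windows vary between clinics, and level calibration to dB nHL
    is not modeled."""
    cycles = 4                               # 2 rise + 0 plateau + 2 fall
    n = int(round(cycles * fs / freq))       # total samples in the burst
    t = np.arange(n) / fs
    carrier = np.sin(2 * np.pi * freq * t)
    half = n // 2
    envelope = np.concatenate([np.linspace(0, 1, half, endpoint=False),
                               np.linspace(1, 0, n - half)])
    return carrier * envelope

burst = tone_burst_2_0_2()
print(len(burst), "samples =", len(burst) / 48000 * 1000, "ms")   # ~1.33 ms
```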
  20. Quiroz JC, Geangu E, Yong MH
    JMIR Ment Health, 2018 Aug 08;5(3):e10153.
    PMID: 30089610 DOI: 10.2196/10153
    BACKGROUND: Research in psychology has shown that the way a person walks reflects that person's current mood (or emotional state). Recent studies have used mobile phones to detect emotional states from movement data.

    OBJECTIVE: The objective of our study was to investigate the use of movement sensor data from a smart watch to infer an individual's emotional state. We present our findings of a user study with 50 participants.

    METHODS: The experimental design is a mixed-design study: within-subjects (emotions: happy, sad, and neutral) and between-subjects (stimulus type: audiovisual "movie clips" and audio "music clips"). Each participant experienced both emotions in a single stimulus type. All participants walked 250 m while wearing a smart watch on one wrist and a heart rate monitor strap on the chest. They also had to answer a short questionnaire (20 items; Positive Affect and Negative Affect Schedule, PANAS) before and after experiencing each emotion. The data obtained from the heart rate monitor served as supplementary information to our data. We performed time series analysis on data from the smart watch and a t test on questionnaire items to measure the change in emotional state. Heart rate data was analyzed using one-way analysis of variance. We extracted features from the time series using sliding windows and used features to train and validate classifiers that determined an individual's emotion.

    RESULTS: Overall, 50 young adults participated in our study; of them, 49 were included for the affective PANAS questionnaire and 44 for the feature extraction and building of personal models. Participants reported feeling less negative affect after watching sad videos or after listening to sad music, P

    Matched MeSH terms: Auditory Perception
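Entry 20's pipeline extracts features from smart-watch movement data with sliding windows before training per-participant classifiers. A generic version of that step is sketched below; the window length, step size and feature set are assumptions, not the study's exact choices:

```python
import numpy as np

def sliding_window_features(signal, win=128, step=64):
    """Generic sliding-window feature extraction: per-window mean,
    standard deviation and RMS from a 1-D movement signal. The
    resulting rows can feed any classifier."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), np.sqrt(np.mean(w ** 2))])
    return np.array(feats)

# Example: one minute of 32 Hz accelerometer-magnitude data (synthetic).
rng = np.random.default_rng(0)
accel = rng.normal(1.0, 0.1, size=60 * 32)
X = sliding_window_features(accel)
print(X.shape)   # (windows, 3 features)
```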