Displaying publications 1 - 20 of 54 in total

  1. Jalaei B, Shaabani M, Zakaria MN
    Braz J Otorhinolaryngol, 2017 Jan-Feb;83(1):10-15.
    PMID: 27102175 DOI: 10.1016/j.bjorl.2015.12.005
    INTRODUCTION: The performance of auditory steady state response (ASSR) in threshold testing when recorded ipsilaterally and contralaterally, as well as at low and high modulation frequencies (MFs), has not been systematically studied.

    OBJECTIVE: To verify the influences of mode of recording (ipsilateral vs. contralateral) and modulation frequency (40Hz vs. 90Hz) on ASSR thresholds.

    METHODS: Fifteen female and 14 male subjects (aged 18-30 years) with normal hearing bilaterally were studied. Narrow-band CE-Chirp(®) stimuli (centered at 500, 1000, 2000, and 4000Hz) modulated at 40 and 90Hz MFs were presented to the participants' right ear. The ASSR thresholds were then recorded at each test frequency in both ipsilateral and contralateral channels.

    RESULTS: Due to pronounced interaction effects between mode of recording and MF (p<0.05 by two-way repeated measures ANOVA), mean ASSR thresholds were compared among four conditions (ipsi-40Hz, ipsi-90Hz, contra-40Hz, and contra-90Hz) using one-way repeated measures ANOVA. At the 500 and 1000Hz test frequencies, the contra-40Hz condition produced the lowest mean ASSR thresholds. In contrast, at the high frequencies (2000 and 4000Hz), the ipsi-90Hz condition yielded the lowest mean ASSR thresholds. At most test frequencies, contra-90Hz produced the highest mean ASSR thresholds.

    CONCLUSIONS: Based on the findings, the present study recommends two different protocols for optimum threshold testing with ASSR, at least when testing young adults. These include the use of the contra-40Hz recording mode, given its promising performance in hearing threshold estimation.
    Matched MeSH terms: Acoustic Stimulation/methods*
  2. Jalaei B, Azmi MHAM, Zakaria MN
    Braz J Otorhinolaryngol, 2018 May 17;85(4):486-493.
    PMID: 29858160 DOI: 10.1016/j.bjorl.2018.04.005
    INTRODUCTION: Binaurally evoked auditory evoked potentials have good diagnostic values when testing subjects with central auditory deficits. The literature on speech-evoked auditory brainstem response evoked by binaural stimulation is in fact limited. Gender disparities in speech-evoked auditory brainstem response results have been consistently noted but the magnitude of gender difference has not been reported.

    OBJECTIVE: The present study aimed to compare the magnitude of gender difference in speech-evoked auditory brainstem response results between monaural and binaural stimulations.

    METHODS: A total of 34 healthy Asian adults aged 19-30 years participated in this comparative study. Eighteen of them were females (mean age=23.6±2.3 years) and the remaining sixteen were males (mean age=22.0±2.3 years). For each subject, speech-evoked auditory brainstem response was recorded with the synthesized syllable /da/ presented monaurally and binaurally.

    RESULTS: While latencies were not affected (p>0.05), the binaural stimulation produced statistically higher speech-evoked auditory brainstem response amplitudes than the monaural stimulation (p<0.05). As revealed by large effect sizes (d>0.80), substantive gender differences were noted in most of speech-evoked auditory brainstem response peaks for both stimulation modes.

    CONCLUSION: The magnitude of gender difference between the two stimulation modes revealed some distinct patterns. Based on these clinically significant results, gender-specific normative data are highly recommended when using speech-evoked auditory brainstem response for clinical and future applications. The preliminary normative data provided in the present study can serve as the reference for future studies on this test among Asian adults.

    Matched MeSH terms: Acoustic Stimulation/methods*
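The "large effect sizes (d>0.80)" reported above refer to Cohen's d, the standardized difference between two group means. A minimal sketch of the computation, using hypothetical amplitude values (the abstract does not report the raw means and SDs):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical peak-amplitude means/SDs (in µV) for groups of 18 and 16,
# mirroring the study's female/male sample sizes:
d = cohens_d(0.52, 0.10, 18, 0.43, 0.11, 16)
print(round(d, 2))  # values above 0.80 are conventionally "large"
```

By convention d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, which is why d > 0.80 is described as substantive.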
  3. Hu S, Anschuetz L, Hall DA, Caversaccio M, Wimmer W
    Trends Hear, 2021 Mar 6;25:2331216520986303.
    PMID: 33663298 DOI: 10.1177/2331216520986303
    Residual inhibition, that is, the temporary suppression of tinnitus loudness after acoustic stimulation, is a frequently observed phenomenon that may have prognostic value for clinical applications. However, it is unclear in which subjects residual inhibition is more likely and how stable the effect of inhibition is over multiple repetitions. The primary aim of this work was to evaluate the effect of hearing loss and tinnitus chronicity on residual inhibition susceptibility. The secondary aim was to investigate the short-term repeatability of residual inhibition. Residual inhibition was assessed in 74 tinnitus subjects with 60-second narrow-band noise stimuli in 10 consecutive trials. The subjects were assigned to groups according to their depth of suppression (substantial residual inhibition vs. comparator group). In addition, a categorization in normal hearing and hearing loss groups, related to the degree of hearing loss at the frequency corresponding to the tinnitus pitch, was made. Logistic regression was used to identify factors associated with susceptibility to residual inhibition. Repeatability of residual inhibition was assessed using mixed-effects ordinal regression including poststimulus time and repetitions as factors. Tinnitus chronicity was not associated with residual inhibition for subjects with hearing loss, while a statistically significant negative association between tinnitus chronicity and residual inhibition susceptibility was observed in normal hearing subjects (odds ratio: 0.63; p = .0076). Moreover, repeated states of suppression can be stably induced, reinforcing the use of residual inhibition for within-subject comparison studies.
    Matched MeSH terms: Acoustic Stimulation
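The odds ratio of 0.63 above comes from the logistic regression: each additional unit of tinnitus chronicity multiplies the odds of residual inhibition by 0.63. A small sketch of that relationship; the baseline odds of 2:1 are hypothetical, as the abstract does not report them:

```python
import math

# In logistic regression, the odds ratio for a predictor is exp(beta),
# where beta is the fitted coefficient. OR = 0.63 therefore corresponds to:
beta = math.log(0.63)

def probability(base_odds, beta, x):
    """Probability of the outcome after x unit-increases of the predictor."""
    odds = base_odds * math.exp(beta * x)
    return odds / (1 + odds)

p0 = probability(2.0, beta, 0)  # hypothetical baseline: odds 2:1
p5 = probability(2.0, beta, 5)  # after 5 units of chronicity
```

With an OR below 1, the probability of residual inhibition declines monotonically as chronicity grows, which is the direction of the association reported for normal-hearing subjects.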
  4. Zakaria MN, Salim R, Abdul Wahat NH, Md Daud MK, Wan Mohamad WN
    Sci Rep, 2023 Dec 21;13(1):22842.
    PMID: 38129442 DOI: 10.1038/s41598-023-48810-1
    There has been a growing interest in studying the usefulness of chirp stimuli in recording cervical vestibular evoked myogenic potential (cVEMP) waveforms. Nevertheless, the study outcomes are debatable and require verification. In view of this, the aim of the present study was to compare cVEMP results elicited by 500 Hz tone burst and narrowband (NB) CE-Chirp stimuli in adults with sensorineural hearing loss (SNHL). Fifty adults with bilateral SNHL (aged 20-65 years) underwent cVEMP testing based on the established protocol. The 500 Hz tone burst and NB CE-Chirp (centred at 500 Hz) stimuli were presented to each ear at an intensity level of 120.5 dB peSPL. P1 latency, N1 latency, and P1-N1 amplitude values were analysed accordingly. The NB CE-Chirp stimulus produced significantly shorter P1 and N1 latencies, with large effect sizes (d > 0.80). In contrast, both stimuli elicited cVEMP responses with P1-N1 amplitude values that were not statistically different from one another (p = 0.157, d = 0.15). Additionally, age and hearing level were found to be significantly correlated (r = 0.56).
    Matched MeSH terms: Acoustic Stimulation/methods
  5. Wilson CA, Berger JI, de Boer J, Sereda M, Palmer AR, Hall DA, et al.
    Hear Res, 2019 Mar 15;374:13-23.
    PMID: 30685571 DOI: 10.1016/j.heares.2019.01.009
    A common method for measuring changes in temporal processing sensitivity in both humans and animals makes use of Gap-induced Inhibition of the Acoustic Startle (GPIAS). It is also the basis of a common method for detecting tinnitus in rodents. However, the link to tinnitus has not been properly established because GPIAS has not yet been used to objectively demonstrate tinnitus in humans. In guinea pigs, the Preyer (ear flick) myogenic reflex is an established method for measuring the acoustic startle for the GPIAS test, while in humans, it is the eye-blink reflex. Yet, humans have a vestigial remnant of the Preyer reflex, which can be detected by measuring skin surface potentials associated with the Post-Auricular Muscle Response (PAMR). A similar electrical potential can be measured in guinea pigs, and we aimed to show that the PAMR could be used to demonstrate GPIAS in both species. In guinea pigs, we compared the GPIAS measured using the pinna movement of the Preyer reflex and the electrical potential of the PAMR to demonstrate that the two are at least equivalent. In humans, we establish for the first time that the PAMR provides a reliable way of measuring GPIAS that is a purely acoustic alternative to the multimodal eye-blink reflex. Further exploratory tests showed that while eye gaze position influenced the size of the PAMR response, it did not change the degree of GPIAS. Our findings confirm that the PAMR is a sensitive method for measuring GPIAS and suggest that it may allow direct comparison of temporal processing between humans and animals and may provide a basis for an objective test of tinnitus.
    Matched MeSH terms: Acoustic Stimulation
  6. Zakaria MN, Jalaei B, Wahab NA
    Eur Arch Otorhinolaryngol, 2016 Feb;273(2):349-54.
    PMID: 25682179 DOI: 10.1007/s00405-015-3555-3
    For estimating behavioral hearing thresholds, auditory steady state response (ASSR) can be reliably evoked by stimuli at low and high modulation frequencies (MFs). In this regard, little is known regarding ASSR thresholds evoked by stimuli at different MFs in female and male participants. In fact, recent data suggest that 40-Hz ASSR is influenced by estrogen level in females. Hence, the aim of the present study was to determine the effect of gender and MF on ASSR thresholds in young adults. Twenty-eight normally hearing participants (14 males and 14 females) were enrolled in this study. For each subject, ASSR thresholds were recorded with narrow-band chirps at 500, 1,000, 2,000, and 4,000 Hz carrier frequencies (CFs) and at 40 and 90 Hz MFs. Two-way mixed ANOVA (with gender and MF as the factors) revealed no significant interaction effect between factors at all CFs (p > 0.05). The gender effect was only significant at 500 Hz CF (p < 0.05). At 500 and 1,000 Hz CFs, mean ASSR thresholds were significantly lower at 40 Hz MF than at 90 Hz MF (p < 0.05). Interestingly, at 2,000 and 4,000 Hz CFs, mean ASSR thresholds were significantly lower at 90 Hz MF than at 40 Hz MF (p < 0.05). The lower ASSR thresholds in females might be due to hormonal influence. When recording ASSR thresholds at low MF, we suggest the use of gender-specific normative data so that more valid comparisons can be made, particularly at 500 Hz CF.
    Matched MeSH terms: Acoustic Stimulation/methods*
  7. Bakker MJ, van Dijk JG, Pramono A, Sutarni S, Tijssen MA
    Mov Disord, 2013 Mar;28(3):370-9.
    PMID: 23283702 DOI: 10.1002/mds.25280
    The nature of culture-specific startle syndromes such as "Latah" in Indonesia and Malaysia is ill understood. Hypotheses concerning their origin include sociocultural behavior, psychiatric disorders, and neurological syndromes. The various disorders show striking similarities despite occurring in diverse cultural settings and genetically distant populations. They are characterized clinically by exaggerated startle responses and involuntary vocalizations, echolalia, and echopraxia. Quantifying startle reflexes may help define Latah within the 3 groups of startle syndromes: (1) hyperekplexia, (2) startle-induced disorders, and (3) neuropsychiatric startle syndromes. Twelve female Latah patients (mean age, 44.6 years; SD, 7.7 years) and 12 age-, sex-, and socioeconomically matched controls (mean age, 42.3 years; SD, 8.0 years) were studied using structured history taking and neurological examination including provocation of vocalizations, echolalia, and echopraxia. We quantified auditory startle reflexes with electromyographic activity of 6 left-sided muscles following 104-dB tones. We defined 2 phases for the startle response: a short-latency motor startle reflex initiated in the lower brain stem (<100/120 ms) and a later, second phase more influenced by psychological factors (the "orienting reflex," 100/120-1000 ms after the stimulus). Early as well as late motor startle responses were significantly increased in patients compared with controls (P ≤ .05). Following their startle response, Latah patients showed stereotyped responses including vocalizations and echo phenomena. Startle responses were increased, but clinically these proved insignificant compared with the stereotyped behavioral responses following the startle response. This study supports the classification of Latah as a "neuropsychiatric startle syndrome."
    Matched MeSH terms: Acoustic Stimulation
  8. Haider HF, Bojić T, Ribeiro SF, Paço J, Hall DA, Szczepek AJ
    Front Neurosci, 2018;12:866.
    PMID: 30538616 DOI: 10.3389/fnins.2018.00866
    Tinnitus is the conscious perception of a sound without a corresponding external acoustic stimulus, usually described as a phantom perception. One of the major challenges for tinnitus research is to understand the pathophysiological mechanisms triggering and maintaining the symptoms, especially for subjective chronic tinnitus. Our objective was to synthesize the published literature in order to provide a comprehensive update on theoretical and experimental advances and to identify further research and clinical directions. We performed literature searches in three electronic databases, complemented by scanning reference lists from relevant reviews in our included records, citation searching of the included articles using Web of Science, and manual searching of the last 6 months of principal otology journals. One-hundred and thirty-two records were included in the review and the information related to peripheral and central mechanisms of tinnitus pathophysiology was collected in order to update on theories and models. A narrative synthesis examined the main themes arising from this information. Tinnitus pathophysiology is complex and multifactorial, involving the auditory and non-auditory systems. Recent theories assume the necessary involvement of extra-auditory brain regions for tinnitus to reach consciousness. Tinnitus engages multiple active, dynamic, and overlapping networks. We conclude that advancing knowledge concerning the origin and maintenance mechanisms of specific tinnitus subtypes is of paramount importance for identifying adequate treatment.
    Matched MeSH terms: Acoustic Stimulation
  9. Jalaei B, Zakaria MN, Mohd Azmi MH, Nik Othman NA, Sidek D
    Ann Otol Rhinol Laryngol, 2017 Apr;126(4):290-295.
    PMID: 28177264 DOI: 10.1177/0003489417690169
    OBJECTIVES: Gender disparities in speech-evoked auditory brainstem response (speech-ABR) outcomes have been reported, but the literature is limited. The present study was performed to further verify this issue and determine the influence of head size on speech-ABR results between genders.

    METHODS: Twenty-nine healthy Malaysian subjects (14 males and 15 females) aged 19 to 30 years participated in this study. After measuring the head circumference, speech-ABR was recorded by using synthesized syllable /da/ from the right ear of each participant. Speech-ABR peaks amplitudes, peaks latencies, and composite onset measures were computed and analyzed.

    RESULTS: Significant gender disparities were noted in the transient component but not in the sustained component of speech-ABR. Statistically higher V/A amplitudes and less steep V/A slopes were found in females. These gender differences were partially affected after controlling for head size.

    CONCLUSIONS: Head size is not the main contributing factor for gender disparities in speech-ABR outcomes. Gender-specific normative data can be useful when recording speech-ABR for clinical purposes.

    Matched MeSH terms: Acoustic Stimulation/methods
  10. Quar TK, Ching TY, Newall P, Sharma M
    Int J Audiol, 2013 May;52(5):322-32.
    PMID: 23570290 DOI: 10.3109/14992027.2012.755740
    The study aims to compare the performance of hearing aids fitted according to the NAL-NL1 and DSL v5 prescriptive procedures for children.
    Matched MeSH terms: Acoustic Stimulation
  11. Anandan ES, Husain R, Seluakumaran K
    Atten Percept Psychophys, 2021 May;83(4):1737-1751.
    PMID: 33389676 DOI: 10.3758/s13414-020-02210-z
    Signals containing attended frequencies are facilitated while those with unexpected frequencies are suppressed by an auditory filtering process. The neurocognitive mechanism underlying the auditory attentional filter is, however, poorly understood. The olivocochlear bundle (OCB), a brainstem neural circuit that is part of the efferent system, has been suggested to be partly responsible for the filtering via its noise-dependent antimasking effect. The current study examined the role of the OCB in attentional filtering, particularly the validity of the antimasking hypothesis, by comparing attentional filters measured in quiet and in the presence of background noise in a group of normal-hearing listeners. Filters obtained in both conditions were comparable, suggesting that the presence of background noise is not crucial for attentional filter generation. In addition, comparison of frequency-specific changes of the cue-evoked enhancement component of filters in quiet and noise also did not reveal any major contribution of background noise to the cue effect. These findings argue against the involvement of an antimasking effect in the attentional process. Instead of the antimasking effect mediated via medial olivocochlear fibers, results from current and earlier studies can be explained by frequency-specific modulation of afferent spontaneous activity by lateral olivocochlear fibers. It is proposed that the activity of these lateral fibers could be driven by top-down cortical control via a noise-independent mechanism. SIGNIFICANCE: The neural basis for auditory attentional filter remains a fundamental but poorly understood area in auditory neuroscience. The efferent olivocochlear pathway that projects from the brainstem back to the cochlea has been suggested to mediate the attentional effect via its noise-dependent antimasking effect. 
The current study demonstrates that filter generation is mostly independent of the background noise, and is therefore unlikely to be mediated by the olivocochlear brainstem reflex. It is proposed that the entire cortico-olivocochlear system might instead be used to alter hearing sensitivity during focused attention via frequency-specific modulation of afferent spontaneous activity.
    Matched MeSH terms: Acoustic Stimulation
  12. Sundagumaran H, Seethapathy J
    Int J Pediatr Otorhinolaryngol, 2020 Nov;138:110393.
    PMID: 33152983 DOI: 10.1016/j.ijporl.2020.110393
    BACKGROUND: Distortion product otoacoustic emissions (DPOAE) in infants with Iron Deficiency Anemia (IDA) help in understanding the cochlear status, especially the functioning of the outer hair cells.

    OBJECTIVES: To analyze the presence of DPOAE across frequencies and DP amplitude in infants with and without IDA.

    METHOD: DPOAE were recorded on 40 infants with IDA and 40 infants without IDA in the age range of 6-24 months. Cubic DPOAEs (2f1-f2) were measured at six f2 frequencies (1500 Hz, 2000 Hz, 3000 Hz, 4500 Hz, 6000 Hz & 8000 Hz) with primary tone stimulus of intensity L1 equal to 65 dBSPL and L2 equal to 55 dBSPL. Immittance audiometry was performed using 226 Hz probe tone prior to DPOAE recording to ascertain normal middle ear functioning.

    RESULTS: DPOAEs were present in all infants with and without IDA across the frequencies tested. DP amplitude across the frequencies did not show any statistically significant difference between the two groups.

    Matched MeSH terms: Acoustic Stimulation
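The "cubic DPOAEs (2f1-f2)" above are distortion products the cochlea generates at the frequency 2f1 - f2 of the two primary tones. A sketch of where those components fall for the six f2 frequencies in the study, assuming the common clinical f2/f1 ratio of 1.22 (the abstract does not state the ratio used):

```python
# f2 primaries from the study (Hz); the f2/f1 ratio of 1.22 is an
# assumption, not a value reported in the abstract.
F2_FREQS = [1500, 2000, 3000, 4500, 6000, 8000]
RATIO = 1.22

def dp_frequency(f2, ratio=RATIO):
    """Cubic distortion product 2*f1 - f2, given f2 and the f2/f1 ratio."""
    f1 = f2 / ratio
    return 2 * f1 - f2

dp = {f2: round(dp_frequency(f2)) for f2 in F2_FREQS}
```

Because f1 < f2, the 2f1-f2 component always falls below both primaries, which is why it can be extracted from the ear-canal recording without being masked by the stimulus tones.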
  13. Reeves A, Seluakumaran K, Scharf B
    J Acoust Soc Am, 2021 May;149(5):3352.
    PMID: 34241123 DOI: 10.1121/10.0004786
    A contralateral "cue" tone presented in continuous broadband noise both lowers the threshold of a signal tone by guiding attention to it and raises its threshold by interference. Here, signal tones were fixed in duration (40 ms, 52 ms with ramps), frequency (1500 Hz), timing, and level, so attention did not need guidance. Interference by contralateral cues was studied in relation to cue-signal proximity, cue-signal temporal overlap, and cue-signal order (cue after: backward interference, BI; or cue first: forward interference, FI). Cues, also ramped, were 12 dB above the signal level. Long cues (300 or 600 ms) raised thresholds by 5.3 dB when the signal and cue overlapped and by 5.1 dB in FI and 3.2 dB in BI when cues and signals were separated by 40 ms. Short cues (40 ms) raised thresholds by 4.5 dB in FI and 4.0 dB in BI for separations of 7 to 40 ms, but by ∼13 dB when simultaneous and in phase. FI and BI are comparable in magnitude and hardly increase when the signal is close in time to abrupt cue transients. These results do not support the notion that masking of the signal is due to the contralateral cue onset/offset transient response. Instead, sluggish attention or temporal integration may explain contralateral proximal interference.
    Matched MeSH terms: Acoustic Stimulation
  14. Yuvaraj R, Murugappan M, Ibrahim NM, Omar MI, Sundaraj K, Mohamad K, et al.
    J Integr Neurosci, 2014 Mar;13(1):89-120.
    PMID: 24738541 DOI: 10.1142/S021963521450006X
    Deficits in the ability to process emotions characterize several neuropsychiatric disorders and are traits of Parkinson's disease (PD), and there is a need for a method of quantifying emotion, which is currently performed by clinical diagnosis. Electroencephalogram (EEG) signals, being an activity of the central nervous system (CNS), can reflect the underlying true emotional state of a person. This study applied machine-learning algorithms to categorize EEG emotional states in PD patients, classifying six basic emotions (happiness, sadness, fear, anger, surprise, and disgust) in comparison with healthy controls (HC). Emotional EEG data were recorded from 20 PD patients and 20 healthy age-, education level- and sex-matched controls using multimodal (audio-visual) stimuli. The use of nonlinear features motivated by the higher-order spectra (HOS) has been reported to be a promising approach to classifying emotional states. In this work, we compared the performance of k-nearest neighbor (kNN) and support vector machine (SVM) classifiers using features derived from HOS and from the power spectrum. Analysis of variance (ANOVA) showed that power spectrum and HOS-based features were statistically significant among the six emotional states (p < 0.0001). Classification results show that using the selected HOS-based features instead of power spectrum-based features provided comparatively better accuracy for all six classes, with overall accuracies of 70.10% ± 2.83% and 77.29% ± 1.73% for PD patients and HC, respectively, in the beta (13-30 Hz) band using the SVM classifier. In addition, PD patients achieved lower accuracy in the processing of negative emotions (sadness, fear, anger and disgust) than in the processing of positive emotions (happiness, surprise) compared with HC. These results demonstrate the effectiveness of applying machine learning techniques to the classification of emotional states in PD patients in a user-independent manner using EEG signals. The accuracy of the system can be improved by investigating other HOS-based features. This study might lead to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
    Matched MeSH terms: Acoustic Stimulation
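Of the two classifiers compared above, kNN is the simpler: a sample is assigned the majority label among its k nearest training samples in feature space. A toy pure-Python sketch; the 2-D "features" and labels below are invented for illustration, whereas the study used HOS and power-spectrum features from 14-channel EEG:

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify feature vector x by majority vote of its k nearest
    training vectors (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(t, x)), lab)
        for t, lab in zip(train, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors standing in for two emotion classes:
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["sadness", "sadness", "happiness", "happiness"]
print(knn_predict(train, labels, (0.15, 0.15)))  # → sadness
```

An SVM instead learns a maximum-margin decision boundary; the study's finding is that the choice of features (HOS vs. power spectrum) mattered more than which of the two classifiers was used.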
  15. Subha ST, Raman R
    Ear Nose Throat J, 2006 Oct;85(10):650, 652-3.
    PMID: 17124935
    We performed a study to determine if cerumen in the ear canal causes significant hearing loss and to ascertain if there is any correlation between the amount of cerumen and the degree of hearing loss. Our study was conducted on 109 ears in 80 patients. The results indicated that impacted cerumen does cause a significant degree of conductive hearing loss. We found no significant correlation between the length of the cerumen plug and the severity of hearing loss. Nor did we find any significant correlation between the presence of impacted cerumen and variables such as age, sex, ethnicity, or affected side.
    Matched MeSH terms: Acoustic Stimulation
  16. Dzulkarnain AAA, Abdullah SA, Ruzai MAM, Ibrahim SHMN, Anuar NFA, Rahim 'EA
    Am J Audiol, 2018 Sep 12;27(3):294-305.
    PMID: 30054628 DOI: 10.1044/2018_AJA-17-0087
    Purpose: The purpose of this study was to investigate the influence of 2 different electrode montages (ipsilateral and vertical) on the auditory brainstem response (ABR) findings elicited from narrow band (NB) level-specific (LS) CE-Chirp and tone-burst in subjects with normal hearing at several intensity levels and frequency combinations.

    Method: Quasi-experimental and repeated-measures study designs were used in this study. Twenty-six adults with normal hearing (17 females, 9 males) participated. ABRs were acquired from the study participants at 3 intensity levels (80, 60, and 40 dB nHL), 3 frequencies (500, 1000, and 2000 Hz), 2 electrode montages (ipsilateral and vertical), and 2 stimuli (NB LS CE-Chirp and tone-burst) using 2 stopping criteria (fixed averages at 4,000 sweeps and F test at multiple points = 3.1).

    Results: Wave V amplitudes were only 19%-26% larger for the vertical recordings than for the ipsilateral recordings in both the ABRs obtained from the NB LS CE-Chirp and tone-burst stimuli. The mean differences in the F test at multiple points values and the residual noise levels between the ABRs obtained from the vertical and ipsilateral montages were not statistically significant. In addition, the ABR elicited from the NB LS CE-Chirp was significantly larger (up to 69%) than that from the tone-burst, except at the lowest intensity level.

    Conclusion: Both the ipsilateral and vertical montages can be used to record ABR to the NB LS CE-Chirp because of the small enhancement in the wave V amplitude provided by the vertical montage.

    Matched MeSH terms: Acoustic Stimulation/methods*
  17. Amir Kassim A, Rehman R, Price JM
    Acta Psychol (Amst), 2018 Apr;185:72-80.
    PMID: 29407247 DOI: 10.1016/j.actpsy.2018.01.012
    Previous research has shown that auditory recognition memory is poorer than visual and cross-modal (visual and auditory) recognition memory. The effect of repetition on memory has been robust in showing improved performance. It is not clear, however, how auditory recognition memory compares to visual and cross-modal recognition memory following repetition. Participants performed a recognition memory task, making old/new discriminations to new stimuli, stimuli repeated for the first time after 4-7 intervening items (R1), or stimuli repeated for the second time after 36-39 intervening items (R2). Depending on the condition, participants were exposed to visual stimuli (2D line drawings), auditory stimuli (spoken words), or cross-modal stimuli (pairs of images and associated spoken words). Results showed that, unlike participants in the visual and cross-modal conditions, participants in the auditory condition did not show improvements in performance on R2 trials compared to R1 trials. These findings have implications for pedagogical techniques in education, as well as for interventions and exercises aimed at boosting memory performance.
    Matched MeSH terms: Acoustic Stimulation/methods*
  18. Yuvaraj R, Murugappan M, Ibrahim NM, Sundaraj K, Omar MI, Mohamad K, et al.
    Int J Psychophysiol, 2014 Dec;94(3):482-95.
    PMID: 25109433 DOI: 10.1016/j.ijpsycho.2014.07.014
    In addition to classic motor signs and symptoms, individuals with Parkinson's disease (PD) are characterized by emotional deficits. Ongoing brain activity can be recorded by electroencephalograph (EEG) to discover the links between emotional states and brain activity. This study utilized machine-learning algorithms to categorize emotional states in PD patients compared with healthy controls (HC) using EEG. Twenty non-demented PD patients and 20 healthy age-, gender-, and education level-matched controls viewed happiness, sadness, fear, anger, surprise, and disgust emotional stimuli while fourteen-channel EEG was being recorded. Multimodal stimulus (combination of audio and visual) was used to evoke the emotions. To classify the EEG-based emotional states and visualize the changes of emotional states over time, this paper compares four kinds of EEG features for emotional state classification and proposes an approach to track the trajectory of emotion changes with manifold learning. From the experimental results using our EEG data set, we found that (a) bispectrum feature is superior to other three kinds of features, namely power spectrum, wavelet packet and nonlinear dynamical analysis; (b) higher frequency bands (alpha, beta and gamma) play a more important role in emotion activities than lower frequency bands (delta and theta) in both groups and; (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning. This provides a promising way of implementing visualization of patient's emotional state in real time and leads to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
    Matched MeSH terms: Acoustic Stimulation/methods*
  19. Rahmat S, O'Beirne GA
    Hear Res, 2015 Dec;330(Pt A):125-33.
    PMID: 26209881 DOI: 10.1016/j.heares.2015.07.013
    Schroeder-phase masking complexes have been used in many psychophysical experiments to examine the phase curvature of cochlear filtering at characteristic frequencies, and other aspects of cochlear nonlinearity. In a normal nonlinear cochlea, changing the "scalar factor" of the Schroeder-phase masker from -1 through 0 to +1 results in a marked difference in the measured masked thresholds, whereas this difference is reduced in ears with damaged outer hair cells. Despite the valuable information it may give, one disadvantage of the Schroeder-phase masking procedure is the length of the test - using the conventional three-alternative forced-choice technique to measure a masking function takes around 45 min for one combination of probe frequency and intensity. As an alternative, we have developed a fast method of recording these functions which uses a Békésy tracking procedure. Testing at 500 Hz in normal hearing participants, we demonstrate that our fast method: i) shows good agreement with the conventional method; ii) shows high test-retest reliability; and iii) shortens the testing time to 8 min.
    Matched MeSH terms: Acoustic Stimulation/methods