  1. Nashrah Maamor, Sitti Ladyia Salleh, Nurul Ain Abdullah
    MyJurnal
    The objective of this study was to investigate the degree to which Auditory Steady State Response (ASSR) thresholds correlate with behavioral thresholds in two groups of adult subjects, one with normal hearing and the other with sensorineural hearing impairment. When the relationship between ASSR and behavioral thresholds was analyzed separately for each group of subjects, significant correlations were found only for the hearing-impaired group. For that group, the mean differences between the actual thresholds and the thresholds predicted by linear regression were 5 dB (SD = 4), 3 dB (SD = 3), 4 dB (SD = 3) and 4 dB (SD = 4), with correlation coefficients of 0.80, 0.88, 0.91 and 0.97 for the 500, 1000, 2000 and 4000 Hz carrier frequencies, respectively. When the relationship between ASSR and behavioral thresholds was analyzed using data from both groups of subjects, correlation coefficients were higher across the 500 to 4000 Hz carrier frequencies (r ≥ 0.96), with mean differences between the actual and the predicted thresholds of 6 dB (SD = 3), 4 dB (SD = 3), 4 dB (SD = 3) and 6 dB (SD = 3) for the hearing-impaired group and 11 dB (SD = 7), 8 dB (SD = 8), 8 dB (SD = 6) and 10 dB (SD = 7) for the normal hearing group. However, the range of differences between the actual and the predicted thresholds was quite large, reaching 34 dB for the 500 and 4000 Hz carrier frequencies. This suggests that, in a clinical setting, ASSR cannot accurately predict the presence or absence of a hearing loss. In general, it can be concluded that ASSR allows for an accurate prediction of behavioral thresholds within ± 10 dB in subjects with hearing impairment. However, ASSR cannot accurately predict hearing thresholds in normally hearing individuals.
    Key words: auditory steady-state response threshold, behavioral threshold, adult, normal hearing, hearing impairment
    Matched MeSH terms: Acoustic Stimulation
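    The threshold-prediction analysis described in the record above amounts to a per-frequency linear regression of behavioral thresholds on ASSR thresholds, followed by a comparison of actual and predicted values. A minimal sketch of that calculation in Python, using entirely synthetic thresholds rather than the study's data:
      # Illustrative only: synthetic ASSR and behavioral thresholds (dB HL),
      # not the study's measurements.
      import numpy as np
      from scipy.stats import linregress

      assr  = np.array([35, 45, 55, 65, 75, 85], dtype=float)   # ASSR thresholds at one carrier frequency
      behav = np.array([25, 38, 47, 60, 68, 80], dtype=float)   # matching behavioral thresholds

      fit = linregress(assr, behav)                 # least-squares line: behav ~ slope*assr + intercept
      predicted = fit.slope * assr + fit.intercept

      diff = np.abs(behav - predicted)              # actual-vs-predicted difference per subject
      print(f"r = {fit.rvalue:.2f}")
      print(f"mean difference = {diff.mean():.1f} dB (SD = {diff.std(ddof=1):.1f} dB)")
    In the study, a fit of this kind was run separately for each carrier frequency (500-4000 Hz) and for each subject group.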
  2. Hu S, Anschuetz L, Hall DA, Caversaccio M, Wimmer W
    Trends Hear, 2021 3 6;25:2331216520986303.
    PMID: 33663298 DOI: 10.1177/2331216520986303
    Residual inhibition, that is, the temporary suppression of tinnitus loudness after acoustic stimulation, is a frequently observed phenomenon that may have prognostic value for clinical applications. However, it is unclear in which subjects residual inhibition is more likely and how stable the effect of inhibition is over multiple repetitions. The primary aim of this work was to evaluate the effect of hearing loss and tinnitus chronicity on residual inhibition susceptibility. The secondary aim was to investigate the short-term repeatability of residual inhibition. Residual inhibition was assessed in 74 tinnitus subjects with 60-second narrow-band noise stimuli in 10 consecutive trials. The subjects were assigned to groups according to their depth of suppression (substantial residual inhibition vs. comparator group). In addition, a categorization in normal hearing and hearing loss groups, related to the degree of hearing loss at the frequency corresponding to the tinnitus pitch, was made. Logistic regression was used to identify factors associated with susceptibility to residual inhibition. Repeatability of residual inhibition was assessed using mixed-effects ordinal regression including poststimulus time and repetitions as factors. Tinnitus chronicity was not associated with residual inhibition for subjects with hearing loss, while a statistically significant negative association between tinnitus chronicity and residual inhibition susceptibility was observed in normal hearing subjects (odds ratio: 0.63; p = .0076). Moreover, repeated states of suppression can be stably induced, reinforcing the use of residual inhibition for within-subject comparison studies.
    Matched MeSH terms: Acoustic Stimulation
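    The susceptibility analysis in the record above is a logistic regression; the reported odds ratio is the exponentiated coefficient for tinnitus chronicity. A minimal sketch of how such an odds ratio is obtained, using synthetic data (the variable names and values below are illustrative assumptions, not the study's measurements):
      # Illustrative only: synthetic data, not the study's measurements.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 74                                           # same sample size as the study; values made up
      chronicity = rng.uniform(0.5, 20, n)             # tinnitus duration in years (synthetic)
      hearing_loss = rng.integers(0, 2, n)             # 1 = hearing loss at the tinnitus pitch (synthetic)

      # Simulate "substantial residual inhibition" with a negative chronicity effect.
      lin = 1.0 - 0.3 * chronicity + 0.5 * hearing_loss
      susceptible = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

      X = sm.add_constant(np.column_stack([chronicity, hearing_loss]))
      fit = sm.Logit(susceptible, X).fit(disp=False)
      odds_ratios = np.exp(fit.params)                 # OR < 1: lower odds of suppression per unit increase
      print(odds_ratios)                               # [const, chronicity, hearing_loss]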
  3. Zakaria MN, Salim R, Abdul Wahat NH, Md Daud MK, Wan Mohamad WN
    Sci Rep, 2023 Dec 21;13(1):22842.
    PMID: 38129442 DOI: 10.1038/s41598-023-48810-1
    There has been a growing interest in studying the usefulness of chirp stimuli in recording cervical vestibular evoked myogenic potential (cVEMP) waveforms. Nevertheless, the study outcomes are debatable and require verification. In view of this, the aim of the present study was to compare cVEMP results when elicited by 500 Hz tone burst and narrowband (NB) CE-Chirp stimuli in adults with sensorineural hearing loss (SNHL). Fifty adults with bilateral SNHL (aged 20-65 years) underwent the cVEMP testing based on the established protocol. The 500 Hz tone burst and NB CE-Chirp (centred at 500 Hz) stimuli were presented to each ear at an intensity level of 120.5 dB peSPL. P1 latency, N1 latency, and P1-N1 amplitude values were analysed accordingly. The NB CE-Chirp stimulus produced significantly shorter P1 and N1 latencies (p  0.80). In contrast, both stimuli elicited cVEMP responses with P1-N1 amplitude values that were not statistically different from one another (p = 0.157, d = 0.15). Additionally, age and hearing level were found to be significantly correlated (r = 0.56, p 
    Matched MeSH terms: Acoustic Stimulation/methods
  4. Hamid K, Yusoff A, Rahman M, Mohamad M, Hamid A
    Biomed Imaging Interv J, 2012 Apr;8(2):e13.
    PMID: 22970069 MyJurnal DOI: 10.2349/biij.8.2.e13
    This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS &
    Matched MeSH terms: Acoustic Stimulation
  5. Haider HF, Bojić T, Ribeiro SF, Paço J, Hall DA, Szczepek AJ
    Front Neurosci, 2018;12:866.
    PMID: 30538616 DOI: 10.3389/fnins.2018.00866
    Tinnitus is the conscious perception of a sound without a corresponding external acoustic stimulus, usually described as a phantom perception. One of the major challenges for tinnitus research is to understand the pathophysiological mechanisms triggering and maintaining the symptoms, especially for subjective chronic tinnitus. Our objective was to synthesize the published literature in order to provide a comprehensive update on theoretical and experimental advances and to identify further research and clinical directions. We performed literature searches in three electronic databases, complemented by scanning reference lists from relevant reviews in our included records, citation searching of the included articles using Web of Science, and manual searching of the last 6 months of principal otology journals. One hundred and thirty-two records were included in the review and the information related to peripheral and central mechanisms of tinnitus pathophysiology was collected in order to update on theories and models. A narrative synthesis examined the main themes arising from this information. Tinnitus pathophysiology is complex and multifactorial, involving the auditory and non-auditory systems. Recent theories assume the necessary involvement of extra-auditory brain regions for tinnitus to reach consciousness. Tinnitus engages multiple active dynamic and overlapping networks. We conclude that advancing knowledge concerning the origin and maintenance mechanisms of specific tinnitus subtypes is of paramount importance for identifying adequate treatment.
    Matched MeSH terms: Acoustic Stimulation
  6. Dewey RS, Hall DA, Plack CJ, Francis ST
    Magn Reson Med, 2021 11;86(5):2577-2588.
    PMID: 34196020 DOI: 10.1002/mrm.28902
    PURPOSE: Detecting sound-related activity using functional MRI requires the auditory stimulus to be more salient than the intense background scanner acoustic noise. Various strategies can reduce the impact of scanner acoustic noise, including "sparse" temporal sampling with single/clustered acquisitions providing intervals without any background scanner acoustic noise, or active noise cancelation (ANC) during "continuous" temporal sampling, which generates an acoustic signal that adds destructively to the scanner acoustic noise, substantially reducing the acoustic energy at the participant's eardrum. Furthermore, multiband functional MRI allows multiple slices to be collected simultaneously, thereby reducing scanner acoustic noise in a given sampling period.

    METHODS: Isotropic multiband functional MRI (1.5 mm) with sparse sampling (effective TR = 9000 ms, acquisition duration = 1962 ms) and continuous sampling (TR = 2000 ms) with ANC were compared in 15 normally hearing participants. A sustained broadband noise stimulus was presented to drive activation of both sustained and transient auditory responses within subcortical and cortical auditory regions.

    RESULTS: Robust broadband noise-related activity was detected throughout the auditory pathways. Continuous sampling with ANC was found to give a statistically significant advantage over sparse sampling for the detection of the transient (onset) stimulus responses, particularly in the auditory cortex (P < .001) and inferior colliculus (P < .001), whereas gains provided by sparse over continuous ANC for detecting offset and sustained responses were marginal (p ~ 0.05 in superior olivary complex, inferior colliculus, medial geniculate body, and auditory cortex).

    CONCLUSIONS: Sparse and continuous ANC multiband functional MRI protocols provide differing advantages for observing the transient (onset and offset) and sustained stimulus responses.

    Matched MeSH terms: Acoustic Stimulation
  7. Palaniappan R, Phon-Amnuaisuk S, Eswaran C
    Int J Cardiol, 2015;190:262-3.
    PMID: 25932800 DOI: 10.1016/j.ijcard.2015.04.175
    Matched MeSH terms: Acoustic Stimulation/methods
  8. Mao D, Wunderlich J, Savkovic B, Jeffreys E, Nicholls N, Lee OW, et al.
    Sci Rep, 2021 12 14;11(1):24006.
    PMID: 34907273 DOI: 10.1038/s41598-021-03595-z
    Speech detection and discrimination ability are important measures of hearing ability that may inform crucial audiological intervention decisions for individuals with a hearing impairment. However, behavioral assessment of speech discrimination can be difficult and inaccurate in infants, prompting the need for an objective measure of speech detection and discrimination ability. In this study, the authors used functional near-infrared spectroscopy (fNIRS) as the objective measure. Twenty-three infants, 2 to 10 months of age, participated, all of whom had passed newborn hearing screening or diagnostic audiology testing. They were presented with speech tokens at a comfortable listening level in a natural sleep state using a habituation/dishabituation paradigm. The authors hypothesized that fNIRS responses to speech token detection as well as speech token contrast discrimination could be measured in individual infants. The authors found significant fNIRS responses to speech detection in 87% of tested infants (false positive rate 0%), as well as to speech discrimination in 35% of tested infants (false positive rate 9%). The results show initial promise for the use of fNIRS as an objective clinical tool for measuring infant speech detection and discrimination ability; the authors highlight the further optimizations of test procedures and analysis techniques that would be required to improve accuracy and reliability to levels needed for clinical decision-making.
    Matched MeSH terms: Acoustic Stimulation*
  9. Hanafi SA, Zulkifli I, Ramiah SK, Chung ELT, Kamil R, Awad EA
    Poult Sci, 2023 Feb;102(2):102390.
    PMID: 36608455 DOI: 10.1016/j.psj.2022.102390
    Prenatal stress may have considerable physiological consequences for developing poultry embryos and neonates. The present study aimed to determine the effects of prenatal auditory stimulation on serum levels of ceruloplasmin (CPN), alpha-1-acid glycoprotein (AGP), corticosterone (CORT), and heat shock protein 70 (Hsp70) in developing chicken embryos and newly hatched chicks. Hatching eggs were subjected to the following auditory treatments: 1) control (no additional sound treatment other than the background sound of the incubator's compressors at 40 dB), 2) noise exposure (eggs were exposed to pre-recorded traffic noise at 90 dB) (NOISE), and 3) music exposure (eggs were exposed to Mozart's Sonata for Two Pianos in D Major, K. 448, at 90 dB) (MUSIC). The NOISE and MUSIC treatments were applied for 20 min/h throughout the 24-h day (a total of 8 h/d), from embryonic day (ED) 12 until hatching. Compared with the Control group (0.82 ± 0.04 ng/mL), the MUSIC (1.37 ± 0.1 ng/mL) and NOISE (1.49 ± 0.2 ng/mL) treatments significantly elevated CPN at ED 15 and on post-hatch day 1 (Control, 1.86 ± 0.2 ng/mL; MUSIC, 2.84 ± 0.4 ng/mL; NOISE, 3.04 ± 0.3 ng/mL), as well as AGP at ED 15 (Control, 39.1 ± 7.1 mg/mL; MUSIC, 85.5 ± 12.9 mg/mL; NOISE, 85.4 ± 15.1 mg/mL) and on post-hatch day 1 (Control, 20.4 ± 2.2 mg/mL; MUSIC, 30.5 ± 4.7 mg/mL; NOISE, 30.3 ± 1.4 mg/mL). CORT was significantly increased at ED 15 in both the MUSIC (9.024 ± 1.4 ng/mL) and NOISE (12.15 ± 1.6 ng/mL) groups compared to the Control (4.39 ± 0.7 ng/mL) group. On the other hand, MUSIC-exposed embryos had significantly higher Hsp70 expression than their Control and NOISE counterparts at ED 18 (Control, 12.9 ± 1.2 ng/mL; MUSIC, 129.6 ± 26.4 ng/mL; NOISE, 13.3 ± 2.3 ng/mL) and on post-hatch day 1 (Control, 15.2 ± 1.7 ng/mL; MUSIC, 195.5 ± 68.5 ng/mL; NOISE, 13.2 ± 2.7 ng/mL). In conclusion, developing chicken embryos respond to auditory stimulation by altering CPN, AGP, CORT, and Hsp70. The alterations of these analytes could help developing embryos and newly hatched chicks cope with stress attributed to auditory stimulation.
    Matched MeSH terms: Acoustic Stimulation/veterinary
  10. Chong FY, Jenstad LM
    Med J Malaysia, 2018 12;73(6):365-370.
    PMID: 30647205
    INTRODUCTION: Modulation-based noise reduction (MBNR) is one of the common noise reduction methods used in hearing aids. Gain reduction in high frequency bands may occur for some implementations of MBNR and fricatives might be susceptible to alteration, given the high frequency components in fricative noise. The main objective of this study is to quantify the acoustic effect of MBNR on /s, z/.

    METHODS: Speech-and-noise signals were presented to, and recorded from, six hearing aids mounted on a head and torso simulator. Test stimuli were nonsense words mixed with pink, cafeteria, or speech-modulated noise at 0 dB SNR. Fricatives /s, z/ were extracted from the recordings for analysis.

    RESULTS: Analysis of the noise confirmed that MBNR was activated in all hearing aids for the recordings. More than 1.0 dB of acoustic change occurred to /s, z/ when MBNR was turned on in four out of the six hearing aids in the pink and cafeteria noise conditions. The acoustics of /s, z/ produced by female talkers were affected more than those produced by male talkers. Significant relationships between the amount of noise reduction and the acoustic change of /s, z/ were found; the amount of noise reduction accounted for 42.8% and 16.8% of the variability in acoustic change for /s/ and /z/, respectively.

    CONCLUSION: Some clinically-available implementations of MBNR have measurable effects on the acoustics of fricatives. Possible implications for speech perception are discussed.

    Matched MeSH terms: Acoustic Stimulation
  11. Anandan ES, Husain R, Seluakumaran K
    Atten Percept Psychophys, 2021 May;83(4):1737-1751.
    PMID: 33389676 DOI: 10.3758/s13414-020-02210-z
    Signals containing attended frequencies are facilitated, while those with unexpected frequencies are suppressed by an auditory filtering process. The neurocognitive mechanism underlying the auditory attentional filter is, however, poorly understood. The olivocochlear bundle (OCB), a brainstem neural circuit that is part of the efferent system, has been suggested to be partly responsible for the filtering via its noise-dependent antimasking effect. The current study examined the role of the OCB in attentional filtering, particularly the validity of the antimasking hypothesis, by comparing attentional filters measured in quiet and in the presence of background noise in a group of normal-hearing listeners. Filters obtained in both conditions were comparable, suggesting that the presence of background noise is not crucial for attentional filter generation. In addition, comparison of frequency-specific changes of the cue-evoked enhancement component of filters in quiet and noise also did not reveal any major contribution of background noise to the cue effect. These findings argue against the involvement of an antimasking effect in the attentional process. Instead of the antimasking effect mediated via medial olivocochlear fibers, results from the current and earlier studies can be explained by frequency-specific modulation of afferent spontaneous activity by lateral olivocochlear fibers. It is proposed that the activity of these lateral fibers could be driven by top-down cortical control via a noise-independent mechanism. SIGNIFICANCE: The neural basis for the auditory attentional filter remains a fundamental but poorly understood area in auditory neuroscience. The efferent olivocochlear pathway that projects from the brainstem back to the cochlea has been suggested to mediate the attentional effect via its noise-dependent antimasking effect. The current study demonstrates that filter generation is mostly independent of background noise, and therefore is unlikely to be mediated by the olivocochlear brainstem reflex. It is proposed that the entire cortico-olivocochlear system might instead be used to alter hearing sensitivity during focused attention via frequency-specific modulation of afferent spontaneous activity.
    Matched MeSH terms: Acoustic Stimulation
  12. Reeves A, Seluakumaran K, Scharf B
    J Acoust Soc Am, 2021 05;149(5):3352.
    PMID: 34241123 DOI: 10.1121/10.0004786
    A contralateral "cue" tone presented in continuous broadband noise both lowers the threshold of a signal tone by guiding attention to it and raises its threshold by interference. Here, signal tones were fixed in duration (40 ms, 52 ms with ramps), frequency (1500 Hz), timing, and level, so attention did not need guidance. Interference by contralateral cues was studied in relation to cue-signal proximity, cue-signal temporal overlap, and cue-signal order (cue after: backward interference, BI; or cue first: forward interference, FI). Cues, also ramped, were 12 dB above the signal level. Long cues (300 or 600 ms) raised thresholds by 5.3 dB when the signal and cue overlapped and by 5.1 dB in FI and 3.2 dB in BI when cues and signals were separated by 40 ms. Short cues (40 ms) raised thresholds by 4.5 dB in FI and 4.0 dB in BI for separations of 7 to 40 ms, but by ∼13 dB when simultaneous and in phase. FI and BI are comparable in magnitude and hardly increase when the signal is close in time to abrupt cue transients. These results do not support the notion that masking of the signal is due to the contralateral cue onset/offset transient response. Instead, sluggish attention or temporal integration may explain contralateral proximal interference.
    Matched MeSH terms: Acoustic Stimulation
  13. Sundagumaran H, Seethapathy J
    Int J Pediatr Otorhinolaryngol, 2020 Nov;138:110393.
    PMID: 33152983 DOI: 10.1016/j.ijporl.2020.110393
    BACKGROUND: Distortion product otoacoustic emissions (DPOAE) in infants with Iron Deficiency Anemia (IDA) help in understanding the cochlear status, especially the functioning of the outer hair cells.

    OBJECTIVES: To analyze the presence of DPOAE across frequencies and DP amplitude in infants with and without IDA.

    METHOD: DPOAEs were recorded in 40 infants with IDA and 40 infants without IDA in the age range of 6-24 months. Cubic DPOAEs (2f1-f2) were measured at six f2 frequencies (1500 Hz, 2000 Hz, 3000 Hz, 4500 Hz, 6000 Hz and 8000 Hz) with primary tone levels of L1 = 65 dB SPL and L2 = 55 dB SPL. Immittance audiometry was performed using a 226 Hz probe tone prior to DPOAE recording to ascertain normal middle ear functioning.

    RESULTS: DPOAEs were present in all infants with and without IDA across frequencies tested. DP amplitude across the frequencies did not show any statistically significant difference (p 

    Matched MeSH terms: Acoustic Stimulation
  14. Zakaria MN, Abdul Wahab NA, Awang MA
    Noise Health, 2017 12 2;19(87):112-113.
    PMID: 29192621 DOI: 10.4103/nah.NAH_2_17
    Matched MeSH terms: Acoustic Stimulation
  15. Dzulkarnain AAA, Shahrudin FA, Jamal FN, Marzuki MN, Mazlan MNS
    Am J Audiol, 2020 Dec 09;29(4):838-850.
    PMID: 32966099 DOI: 10.1044/2020_AJA-20-00049
    Purpose The purpose of this study was to investigate the influence of stimulus repetition rate on the auditory brainstem response (ABR) to Level-Specific (LS) CE-Chirp and click stimuli at multiple intensity levels in normal-hearing adults. Method A repeated-measures study design was used with 13 normal-hearing adults. ABRs were acquired from the study participants using LS CE-Chirp and click stimuli at four stimulus repetition rates (19.1, 33.3, 61.1, and 81.1 Hz) and four intensity levels (80, 60, 40, and 20 dB nHL). The ABR test was stopped at a 40-nV residual noise level. Results Higher stimulus repetition rates prolonged ABR latencies and reduced ABR amplitudes for both the LS CE-Chirp and click stimuli. Wave I, III, and V amplitudes were larger for the LS CE-Chirp than for the click at almost all stimulus repetition rates. However, there were no differences in the number of averages required to reach the stopping criterion between the LS CE-Chirp and click stimuli, or between high and low stimulus repetition rates. Conclusion The LS CE-Chirp at standard low stimulus repetition rates can be used to elicit the ABR for both neurodiagnostic and threshold-seeking procedures.
    Matched MeSH terms: Acoustic Stimulation
  16. Conlon B, Hamilton C, Meade E, Leong SL, O Connor C, Langguth B, et al.
    Sci Rep, 2022 Jun 30;12(1):10845.
    PMID: 35773272 DOI: 10.1038/s41598-022-13875-x
    More than 10% of the population suffers from tinnitus, which is a phantom auditory condition that is coded within the brain. A new neuromodulation approach to treat tinnitus has emerged that combines sound with electrical stimulation of somatosensory pathways, supported by multiple animal studies demonstrating that bimodal stimulation can elicit extensive neural plasticity within the auditory brain. More recently, in a large-scale clinical trial, bimodal neuromodulation combining sound and tongue stimulation drove significant reductions in tinnitus symptom severity during the first 6 weeks of treatment, followed by diminishing improvements during the second 6 weeks of treatment. The primary objective of the large-scale randomized and double-blinded study presented in this paper was to determine if background wideband noise as used in the previous clinical trial was necessary for bimodal treatment efficacy. An additional objective was to determine if adjusting the parameter settings after 6 weeks of treatment could overcome treatment habituation effects observed in the previous study. The primary endpoint at 6 weeks involved within-arm and between-arm comparisons for two treatment arms with different bimodal neuromodulation settings based on two widely used and validated outcome instruments, the Tinnitus Handicap Inventory and the Tinnitus Functional Index. Both treatment arms exhibited a statistically significant reduction in tinnitus symptoms during the first 6 weeks, which was further reduced significantly during the second 6 weeks by changing the parameter settings (Cohen's d effect sizes for the full treatment period, per arm and outcome measure, ranged from -0.7 to -1.4). There were no significant differences between arms, indicating that tongue stimulation combined with only pure tones, without background wideband noise, was sufficient to reduce tinnitus symptoms. These therapeutic effects were sustained up to 12 months after the treatment ended. The study included two additional exploratory arms, including one arm that presented only sound stimuli during the first 6 weeks of treatment and bimodal stimulation in the second 6 weeks of treatment. This arm revealed the criticality of combining tongue stimulation with sound for treatment efficacy. Overall, there were no treatment-related serious adverse events and a high compliance rate (83.8%), with 70.3% of participants indicating benefit. The discovery that adjusting stimulation parameters overcomes previously observed treatment habituation can be used to drive greater therapeutic effects and opens up new opportunities for optimizing stimuli and enhancing clinical outcomes for tinnitus patients with bimodal neuromodulation.
    Matched MeSH terms: Acoustic Stimulation
  17. Zakaria MN, Jalaei B, Wahab NA
    Eur Arch Otorhinolaryngol, 2016 Feb;273(2):349-54.
    PMID: 25682179 DOI: 10.1007/s00405-015-3555-3
    For estimating behavioral hearing thresholds, auditory steady state response (ASSR) can be reliably evoked by stimuli at low and high modulation frequencies (MFs). In this regard, little is known regarding ASSR thresholds evoked by stimuli at different MFs in female and male participants. In fact, recent data suggest that 40-Hz ASSR is influenced by estrogen level in females. Hence, the aim of the present study was to determine the effect of gender and MF on ASSR thresholds in young adults. Twenty-eight normally hearing participants (14 males and 14 females) were enrolled in this study. For each subject, ASSR thresholds were recorded with narrow-band chirps at 500, 1,000, 2,000, and 4,000 Hz carrier frequencies (CFs) and at 40 and 90 Hz MFs. Two-way mixed ANOVA (with gender and MF as the factors) revealed no significant interaction effect between factors at all CFs (p > 0.05). The gender effect was only significant at 500 Hz CF (p < 0.05). At 500 and 1,000 Hz CFs, mean ASSR thresholds were significantly lower at 40 Hz MF than at 90 Hz MF (p < 0.05). Interestingly, at 2,000 and 4,000 Hz CFs, mean ASSR thresholds were significantly lower at 90 Hz MF than at 40 Hz MF (p < 0.05). The lower ASSR thresholds in females might be due to hormonal influence. When recording ASSR thresholds at low MF, we suggest the use of gender-specific normative data so that more valid comparisons can be made, particularly at 500 Hz CF.
    Matched MeSH terms: Acoustic Stimulation/methods*
  18. Yuvaraj R, Murugappan M, Ibrahim NM, Sundaraj K, Omar MI, Mohamad K, et al.
    Int J Psychophysiol, 2014 Dec;94(3):482-95.
    PMID: 25109433 DOI: 10.1016/j.ijpsycho.2014.07.014
    In addition to classic motor signs and symptoms, individuals with Parkinson's disease (PD) are characterized by emotional deficits. Ongoing brain activity can be recorded by electroencephalograph (EEG) to discover the links between emotional states and brain activity. This study utilized machine-learning algorithms to categorize emotional states in PD patients compared with healthy controls (HC) using EEG. Twenty non-demented PD patients and 20 healthy age-, gender-, and education level-matched controls viewed happiness, sadness, fear, anger, surprise, and disgust emotional stimuli while fourteen-channel EEG was being recorded. Multimodal stimuli (a combination of audio and visual) were used to evoke the emotions. To classify the EEG-based emotional states and visualize the changes of emotional states over time, this paper compares four kinds of EEG features for emotional state classification and proposes an approach to track the trajectory of emotion changes with manifold learning. From the experimental results using our EEG data set, we found that (a) the bispectrum feature is superior to the other three kinds of features, namely power spectrum, wavelet packet and nonlinear dynamical analysis; (b) higher frequency bands (alpha, beta and gamma) play a more important role in emotion activities than lower frequency bands (delta and theta) in both groups; and (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning. This provides a promising way of implementing visualization of a patient's emotional state in real time and leads to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
    Matched MeSH terms: Acoustic Stimulation/methods*
  19. Khairi MD, Din S, Shahid H, Normastura AR
    J Laryngol Otol, 2005 Sep;119(9):678-83.
    PMID: 16156907
    The objective of this prospective study was to report on the prevalence of hearing impairment in the neonatal unit population. From 15 February 2000 to 15 March 2000 and from 15 February 2001 to 15 May 2001, 401 neonates were screened using transient evoked otoacoustic emissions (TEOAE) followed by second-stage screening of those infants who failed the initial test. Eight (2 per cent) infants failed one ear and 23 (5.74 per cent) infants failed both ears, adding up to 7.74 per cent planned for second-stage screening. Five out of 22 infants who came for the follow-up failed the screening, resulting in a prevalence of hearing impairment of 1 per cent (95 per cent confidence interval [95% CI]: 0.0-2.0). Craniofacial malformations, very low birth weight, ototoxic medication, stigmata/syndromes associated with hearing loss and hyperbilirubinaemia at the level of exchange transfusion were identified as independent significant risk factors for hearing impairment, while poor Apgar scores and mechanical ventilation of more than five days were not. In conclusion, hearing screening in high-risk neonates revealed a total of 1 per cent with hearing loss. The changes in the risk profile indicate improved perinatal handling in a neonatal population at risk for hearing disorders.
    Matched MeSH terms: Acoustic Stimulation/methods