Displaying all 5 publications

  1. Phoon HS, Abdullah AC, Lee LW, Murugaiah P
    Clin Linguist Phon, 2014 May;28(5):329-45.
    PMID: 24446796 DOI: 10.3109/02699206.2013.868517
    To date, there has been little research on phonological acquisition in the Malay language of typically developing Malay-speaking children. This study fills that gap by providing a systematic description of Malay consonant acquisition in a large cohort of preschool children between 4 and 6 years old. In the study, 326 Malay-dominant-speaking children were assessed using a picture-naming task that elicited 53 single words containing all the primary consonants in Malay. Two main analyses were conducted: (1) age of customary and mastery production of consonants; and (2) consonant accuracy. Results revealed that Malay children acquired all the syllable-initial and syllable-final consonants before age 4;06 (years;months), with the exception of syllable-final /s/, /h/ and /l/, which were acquired after 5;06. Malay consonant development increased gradually from 4 to 6 years old, with female children performing better than male children. Accuracy by manner of articulation showed that glides, affricates, nasals, and stops were produced more accurately than fricatives and liquids. In general, syllable-initial consonants were more accurate than syllable-final consonants, and consonants in monosyllabic and disyllabic words were more accurate than those in polysyllabic words. These findings provide significant information for speech-language pathologists assessing Malay-speaking children and designing treatment objectives that reflect the course of phonological development in Malay.
    Matched MeSH terms: Speech Production Measurement/methods*
  2. Lutfi SL, Fernández-Martínez F, Lorenzo-Trueba J, Barra-Chicote R, Montero JM
    Sensors (Basel), 2013;13(8):10519-38.
    PMID: 23945740 DOI: 10.3390/s130810519
    We describe work on infusing emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired task-independent emotion model (NEMO). To demonstrate the generation of affect through the model, we describe integrating it with a natural-language mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need to modify the Dialog Manager, as most existing adaptive dialog systems require. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes work on automatic affect prediction, namely frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study in which both versions of the agent (non-adaptive and emotionally adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustration and, ultimately, improving their satisfaction.
    Matched MeSH terms: Speech Production Measurement/methods*
  3. Vong E, Wilson L, Lincoln M
    J Fluency Disord, 2016 Sep;49:29-39.
    PMID: 27638190 DOI: 10.1016/j.jfludis.2016.07.003
    PURPOSE: This study investigated the outcomes of implementing the Lidcombe Program, an evidence-based early intervention for stuttering, with four preschool children in Malaysia. Early stuttering intervention is currently underdeveloped in Malaysia, where stuttering treatment is often more assertion-based than evidence-based. Therefore, introducing an evidence-based early stuttering intervention is an important milestone for Malaysian preschoolers who stutter.

    METHOD: The participants ranged from 3 years 3 months to 4 years 9 months at the start of the study. Beyond-clinic speech samples were obtained at 1 month and 1 week pretreatment and immediately post-Stage 1, and at 1 month, 3 months, 6 months and 12 months post-Stage 1.

    RESULTS: Two participants, who were bilingual, achieved near-zero levels of stuttering at 12 months posttreatment. Near-zero levels of stuttering were also present in their untreated languages. One participant withdrew for reasons unconnected with the research or treatment. The remaining participant, who presented with severe stuttering, completed Stage 1 but had some relapse in Stage 2 and demonstrated mild stuttering 12 months post-Stage 1.

    CONCLUSIONS: The outcomes were achieved without the need to significantly adapt Lidcombe Program procedures to Malaysian culture. Further research to continue evaluating the Lidcombe Program with Malaysian families and to estimate the proportion of children who will respond is warranted.

    Matched MeSH terms: Speech Production Measurement/methods
  4. Ali Z, Alsulaiman M, Muhammad G, Elamvazuthi I, Al-Nasheri A, Mesallam TA, et al.
    J Voice, 2017 May;31(3):386.e1-386.e8.
    PMID: 27745756 DOI: 10.1016/j.jvoice.2016.09.009
    A large population around the world has voice complications. Various approaches to subjective and objective evaluation have been suggested in the literature. The subjective approach depends strongly on the experience and area of expertise of the clinician, and human error cannot be neglected; the objective, or automatic, approach is noninvasive. Automatic systems can provide complementary information that may help a clinician in the early screening of a voice disorder. They can also be deployed in remote areas, where a general practitioner can use them and refer the patient to a specialist to avoid complications that may be life threatening. Many automatic disorder-detection systems have been developed using conventional speech features such as linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCCs was developed and evaluated on three different voice disorder databases. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database: the intra-database detection rate ranges from 72% to 95%, and the inter-database rate from 47% to 82%. The results indicate that conventional speech features are not correlated with voice quality and hence are not reliable for pathology detection.
    Matched MeSH terms: Speech Production Measurement/methods*
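    The MFCC features this abstract refers to are computed with a standard pipeline: framing and windowing, power spectrum, mel-scale triangular filterbank, log compression, and a DCT. A minimal NumPy sketch follows; the frame length, hop, filter count, and coefficient count are illustrative defaults, not the settings used in the paper.

    ```python
    import numpy as np

    def hz_to_mel(hz):
        # Standard mel-scale mapping
        return 2595.0 * np.log10(1.0 + hz / 700.0)

    def mel_to_hz(mel):
        return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

    def mel_filterbank(n_filters, n_fft, sr):
        # Triangular filters spaced evenly on the mel scale
        mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
        bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
        fb = np.zeros((n_filters, n_fft // 2 + 1))
        for i in range(1, n_filters + 1):
            l, c, r = bins[i - 1], bins[i], bins[i + 1]
            for k in range(l, c):
                fb[i - 1, k] = (k - l) / max(c - l, 1)
            for k in range(c, r):
                fb[i - 1, k] = (r - k) / max(r - c, 1)
        return fb

    def mfcc(signal, sr=16000, frame_len=400, hop=160,
             n_fft=512, n_filters=26, n_ceps=13):
        # Frame and window the signal (25 ms frames, 10 ms hop at 16 kHz)
        n_frames = 1 + (len(signal) - frame_len) // hop
        frames = np.stack([signal[i * hop:i * hop + frame_len]
                           for i in range(n_frames)])
        frames = frames * np.hamming(frame_len)
        # Power spectrum of each frame
        power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
        # Mel filterbank energies, floored to avoid log(0)
        mel_energy = np.maximum(power @ mel_filterbank(n_filters, n_fft, sr).T, 1e-10)
        log_mel = np.log(mel_energy)
        # DCT-II decorrelates the log energies into cepstral coefficients
        n = np.arange(n_filters)
        dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
        return log_mel @ dct.T
    ```

    In a detection system such as the one described above, these per-frame coefficient vectors would then be fed to a classifier (the paper's choice of back end is not stated in the abstract).
    
    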
  5. Ali Z, Elamvazuthi I, Alsulaiman M, Muhammad G
    J Voice, 2016 Nov;30(6):757.e7-757.e19.
    PMID: 26522263 DOI: 10.1016/j.jvoice.2015.08.010
    BACKGROUND AND OBJECTIVE: Automatic voice pathology detection using sustained vowels has been widely explored. Because of the stationary nature of the speech waveform, pathology detection with a sustained vowel is a comparatively easier task than detection using running speech. Some disorder-detection systems with running speech have also been developed, although most of them rely on voice activity detection (VAD), which is itself a challenging task. Pathology detection with running speech needs more investigation, and systems with good accuracy (ACC) are required. Furthermore, pathology classification systems with running speech have not received any attention from the research community. In this article, automatic pathology detection and classification systems are developed using text-dependent running speech without adding a VAD module.

    METHOD: A set of three psychophysics conditions of hearing (critical band spectral estimation, the equal-loudness hearing curve, and the intensity-loudness power law of hearing) is used to estimate the auditory spectrum. The auditory spectrum and all-pole models of the auditory spectrum are computed, analyzed, and used in a Gaussian mixture model for the automatic decision.

    RESULTS: In the experiments using the Massachusetts Eye & Ear Infirmary database, an ACC of 99.56% is obtained for pathology detection, and an ACC of 93.33% is obtained for the pathology classification system. The results of the proposed systems outperform the existing running-speech-based systems.

    DISCUSSION: The developed system can effectively be used in voice pathology detection and classification systems, and the proposed features can visually differentiate between normal and pathological samples.

    Matched MeSH terms: Speech Production Measurement/methods*
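    The three psychophysics steps named in the METHOD section of this entry (critical-band integration, equal-loudness weighting, and the intensity-loudness power law) correspond to the classic perceptual front end of PLP-style analysis. A simplified NumPy sketch of those three steps applied to one power-spectrum frame is shown below; the Bark warping and equal-loudness approximation are standard textbook forms assumed here, not necessarily the exact curves used in the paper.

    ```python
    import numpy as np

    def bark(f):
        # Bark-scale frequency warping (Schroeder approximation)
        return 6.0 * np.arcsinh(f / 600.0)

    def equal_loudness(f):
        # A common equal-loudness pre-emphasis approximation; the
        # paper's exact curve is not specified in the abstract
        w2 = (2.0 * np.pi * f) ** 2
        return ((w2 + 56.8e6) * w2 ** 2) / ((w2 + 6.3e6) ** 2 * (w2 + 0.38e9))

    def auditory_spectrum(power_spec, sr, n_bands=20):
        # 1. Critical-band integration: sum FFT-bin power within
        #    equal-width bands on the Bark scale
        freqs = np.linspace(0.0, sr / 2.0, len(power_spec))
        edges = np.linspace(bark(freqs[1]), bark(freqs[-1]), n_bands + 1)
        idx = np.clip(np.searchsorted(edges, bark(freqs)) - 1, 0, n_bands - 1)
        band_energy = np.zeros(n_bands)
        centers = np.full(n_bands, freqs[-1])
        for b in range(n_bands):
            mask = idx == b
            if mask.any():
                band_energy[b] = power_spec[mask].sum()
                centers[b] = freqs[mask].mean()
        # 2. Equal-loudness pre-emphasis at each band centre frequency
        band_energy = band_energy * equal_loudness(centers)
        # 3. Intensity-to-loudness power law of hearing (cube root)
        return np.cbrt(band_energy)
    ```

    In the paper's pipeline, all-pole (linear prediction) models fitted to this auditory spectrum supply the features that the Gaussian mixture model scores for the detection and classification decisions.
    
    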