  1. Sovey S, Osman K, Matore MEEM
    Front Psychiatry, 2022;13:1022304.
    PMID: 36506434 DOI: 10.3389/fpsyt.2022.1022304
    Computational thinking refers to the cognitive processes underpinning the application of computer science concepts and methodologies to methodically approaching and creating a solution to a problem. The study aims to determine how students' cognitive, affective, and conative dispositions in using computational thinking are influenced by gender. This study used a survey research design with a quantitative approach. Five hundred thirty-five secondary school students were sampled using probability sampling with the Computational Thinking Disposition Instrument (CTDI). WINSTEPS version 3.71.0 was then employed to assess gender differential item functioning (GDIF), reliability, and validity, while descriptive statistics were employed to assess students' disposition toward practicing computational thinking. In addition to providing implications for the theory, the data provide verifiable evidence that the CT disposition profile consists of three constructs. Furthermore, the validated CTDI shows good GDIF properties and may be employed to evaluate the efficacy of the application of CT in the Malaysian curriculum by measuring the level of CT in terms of students' disposition profiles.
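The GDIF analysis described in this abstract asks whether an item is systematically harder for one gender group than the other. A minimal sketch of that idea, using hypothetical dichotomous responses rather than the actual CTDI data, and a crude log-odds difficulty estimate in place of WINSTEPS's joint Rasch estimation:

```python
import math

# Hypothetical responses (1 = endorses the disposition item); rows are
# students, grouped by gender. These numbers are illustrative only.
responses = {
    "male":   [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 1, 0]],
    "female": [[1, 0, 1], [1, 1, 1], [0, 1, 1], [1, 0, 1]],
}

def item_difficulty(scores):
    """Crude Rasch-style difficulty: negative log-odds of endorsement.

    Higher values mean the item is harder to endorse. A 0.5 correction
    avoids infinite estimates for all-0 or all-1 items.
    """
    p = (sum(scores) + 0.5) / (len(scores) + 1.0)
    return -math.log(p / (1 - p))

def gdif_contrast(data, n_items):
    """Per-item difficulty contrast (in logits) between the two groups."""
    contrasts = []
    for i in range(n_items):
        diffs = {g: item_difficulty([row[i] for row in rows])
                 for g, rows in data.items()}
        contrasts.append(diffs["male"] - diffs["female"])
    return contrasts

contrasts = gdif_contrast(responses, n_items=3)
# Items whose absolute contrast exceeds ~0.5 logits are commonly
# flagged for DIF review; the threshold here is an assumption.
flagged = [i for i, c in enumerate(contrasts) if abs(c) > 0.5]
print(contrasts, flagged)
```

Full Rasch DIF estimation anchors person abilities while re-estimating item difficulties per group; this descriptive contrast only conveys the shape of the comparison.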
  2. Baharudin H, Maskor ZM, Matore MEEM
    Front Psychol, 2022;13:988272.
    PMID: 36591072 DOI: 10.3389/fpsyg.2022.988272
    Writing assessment relies heavily on scoring the quality of a subject's thoughts. This creates a faceted measurement structure involving rubrics, tasks, and raters. Nevertheless, most studies have not considered the differences among raters systematically. This study examines rater differences in association with the reliability and validity of writing rubrics, using the Many-Facet Rasch measurement model (MFRM) to model these differences. A set of standards for evaluating the quality of rating in writing assessment was examined. Rating quality was tested within four writing domains from an analytic rubric using a scale of one to three. The writing domains explored were vocabulary, grammar, language use, and organization; the data were obtained from 15 Arabic essays gathered from religious secondary school students under the supervision of the Malaysia Ministry of Education. Five practicing raters in the field were selected to evaluate all the essays. The results showed that (a) raters varied considerably on the leniency-severity dimension, so rater variation ought to be modeled; (b) combining findings across raters reduces score uncertainty, thereby reducing the measurement error that could lower criterion validity with an external variable; and (c) MFRM adjustments effectively increased the correlations of the scores obtained from partial and full data. The predominant finding was that rating quality varies across analytic rubric domains. This also shows that MFRM is an effective way to model rater differences and evaluate the validity and reliability of writing rubrics.
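The adjustment the abstract describes corrects essay scores for how lenient or severe each rater happens to be. A minimal descriptive sketch of that first step, with hypothetical ratings on the 1-3 rubric scale rather than the study's data (full MFRM estimates rater severity on a logit scale jointly with essay ability and domain difficulty):

```python
# Hypothetical ratings (scale 1-3, as in the rubric) from five raters on
# four essays; these numbers are illustrative, not the study's data.
ratings = {
    "R1": [3, 2, 3, 2],
    "R2": [2, 1, 2, 1],   # a severe rater
    "R3": [3, 3, 3, 2],   # a lenient rater
    "R4": [2, 2, 3, 2],
    "R5": [3, 2, 2, 2],
}

def rater_severity(ratings):
    """Severity = grand mean minus each rater's mean score.

    A positive value marks a severe rater (one who scores below the
    grand mean). This is only the descriptive analogue of the MFRM
    severity facet, not the joint logit-scale estimate.
    """
    all_scores = [s for scores in ratings.values() for s in scores]
    grand_mean = sum(all_scores) / len(all_scores)
    return {r: grand_mean - sum(s) / len(s) for r, s in ratings.items()}

def adjusted_scores(ratings):
    """Add each rater's severity back to their raw scores, so an essay
    is not penalized (or rewarded) for the luck of the rater draw."""
    sev = rater_severity(ratings)
    return {r: [score + sev[r] for score in scores]
            for r, scores in ratings.items()}

sev = rater_severity(ratings)
adj = adjusted_scores(ratings)
print(sev)
```

After adjustment, every rater's mean equals the grand mean, which is the descriptive counterpart of MFRM's "raters ought to be modeled" conclusion.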