Displaying all 10 publications

  1. Shahid Hassan
    Education in Medicine Journal, 2012;4(1):115-128.
    MyJurnal
    The impact of good assessment in medical education depends on how appropriately the tools measure clinical performance and on how reliable, valid and feasible they are for reaching a sound decision. The traditional methods of clinical examination using long and short cases and orals are often criticised for their subjectivity, low reliability and inadequate context specificity. The oral test, though comparatively more valid because of its face-to-face questioning, is considered less reliable owing to unstandardised questions, inconsistent marking and insufficient testing time. The objective structured clinical examination (OSCE) was developed as a solution to these problems, but the fragmented representation of the context across a number of stations makes it less authentic for an integrated judgment of performance. Yet another method, workplace-based assessment (WPBA), captures only a snapshot of a predefined attribute rather than a more complex integrated assessment such as the long case. Moreover, because of feasibility problems, high-stakes summative examinations are unlikely ever to rely on workplace-based tools such as the Mini-CEX and DOPS. A TOACS (task oriented assessment of clinical skills) format, currently used in a high-stakes fellowship examination at one centre and claimed to give examiners a more active role, was analysed and compared with the OSCE. The author, however, found no difference between the two formats beyond their acronyms: both have multiple, fragmented static or interactive stations of 5–10 minutes' duration, with or without examiners, patients or exhibits, and a marking scheme comprising a checklist or global rating. Against this backdrop, a new assessment format named the "task integrated objective structured clinical examination" (TIOSCE), modified from the OSCE, has been developed in the School of Medical Sciences (SMS) at USM.
It is a different version of the OSCE: although the underlying concept is the same, the continuum of the clinical work-up of the same patient is followed through to test multiple short attributes of clinical competence. While retaining most of the favourable features of the OSCE, TIOSCE also addresses some of its shortcomings.
  2. Shahid Hassan
    Background: Competency-based curricula have become a necessity in medical education to meet the objectives of institutions aiming to produce skilled physicians. To achieve optimal competence and performance of graduates, a number of traditional evaluation exercises have been practised. Some of these, e.g. the OSCE, although meeting acceptable standards of reliability and validity, are assessments done in a controlled environment. This leaves room for performance-based assessment in real clinical situations, such as the mini clinical evaluation exercise (Mini-CEX). To practise Mini-CEX and meet its challenges, it is vital to undertake a faculty development programme with a comprehensively drawn-up Mini-CEX protocol and objectives so that the intended outcome is achieved. Objective: To undertake faculty development on Mini-CEX and establish its feasibility and acceptability as a method of formative assessment of the clinical competence of trainees in the postgraduate programme of Otolaryngology and Head-Neck Surgery. Method: 25 trainees from the four classes of the 2009 Master of Surgery programme in Otolaryngology and Head-Neck Surgery (ORL-HNS) undertook Mini-CEX encounters and were assessed by 9 supervisors over a 12-week study period. The faculty development programme was carried out through prior lectures on the background, concept and procedure of Mini-CEX, followed by demonstrations using a video clip of a Mini-CEX encounter recorded in our own clinical environment. Students were also exposed to similar settings so that they could take up the Mini-CEX encounter without hesitation. Trainees were assessed in the outpatient clinical setting. The programme was evaluated for feasibility and acceptability with respect to patient factors, clinical attributes, supervisor and trainee performance, and their reported levels of satisfaction.
    Result: Faculty development and trainee orientation in Mini-CEX proved feasible and acceptable. High ratings of satisfaction were reported by the majority of assessors and trainees, who found Mini-CEX acceptable for formative assessment. Among clinical skills, the highest rating was received in physical examination and the lowest in therapeutic skills. Conclusion: A motivated faculty and an organised approach to comprehensive knowledge of Mini-CEX, covering its background, communication of its purpose, demonstration of the procedure and the method of completing the rating forms, are a useful guide to adopting Mini-CEX. The faculty and trainees in the department of ORL-HNS found Mini-CEX a feasible and acceptable assessment tool for monitoring the educational activity of the postgraduate programme through performance-based evaluation in a real clinical situation.
  3. Shahid Hassan
    Background: In order to achieve the desired performance of graduates, a number of traditional evaluation exercises have been practised to assess their competence as medical students. Many of these assessments are done in a controlled environment and mostly test competence rather than performance. Mini-CEX and direct observation of procedural skills (DOPS) are genuinely performance-based assessments of clinical skills. Increased opportunity for observation and just-in-time feedback from role-model seniors produce a positive educational impact on students' learning. They also provide trainees with formative assessment to monitor their learning objectives. However, implementing assessment strategies with Mini-CEX or DOPS requires a clear institutional policy for the different teaching and learning culture of workplace-based assessment. It also requires user-friendly rating forms and checklists, elaboration of clinical competence and its attributes, and procedural guidelines for practice. A precise role for these tools in the assessment of the postgraduate programme must be established before they are used to evaluate and monitor trainees' progress.
    Objective: To determine the acceptability and feasibility of DOPS as a method of formative assessment of clinical skills in the postgraduate programme of Otolaryngology and Head-Neck Surgery.
    Method: A total of 25 trainees were assessed with DOPS by 8 supervisors in this 12-week pilot study. A faculty development programme on DOPS was run for faculty members and trainees. Trainees were advised to undertake at least one DOPS encounter from a shortlist of 42 procedures. Assessors were asked to mark trainees by completing a rating form using a checklist developed for each procedure. Trainees and assessors were asked to record their opinions on the feasibility and acceptability of DOPS for future formative assessment practice. Data were analysed to determine the feasibility and acceptability of DOPS in the assessment programme. Result: Faculty development and trainee orientation in DOPS were found satisfactory in terms of acceptance and feasible in practice. Trainees were mostly assessed in the outpatient clinical setting. The majority of assessors and trainees reported high ratings of satisfaction. Among clinical skills, higher ratings were received for procedural skills performed by the senior trainees. Conclusion: DOPS was found feasible for the practice of formative assessment of trainees in the postgraduate programme of Otolaryngology and Head-Neck Surgery in the School of Medical Sciences (SMS) at Universiti Sains Malaysia (USM). It was well accepted by the trainees as a way to monitor the quality of their procedural skills through self-directed learning.
  4. Shahid Hassan
    Context: Community-based medical education (CBME) has become widely accepted as an important innovation in undergraduate medical education. In curricula featuring CBME, students are acquainted with the community early in their studies; however, the impact of this training is best judged by seeing them practise the required aspects of CBME. Malaysia is a multiracial country with a strongly community-dependent lifestyle. The main national health problems have called for a change in health-profession education from traditional hospital-based health care to a community-based delivery system. Three major universities' medical schools that practise either community-oriented or community-based medical education in the undergraduate medical curriculum were evaluated. Universiti Sains Malaysia (USM) has a community-based medical education (CBME) curriculum in the form of the Community and Family Case Study (CFCS), compared with the community-oriented education (COE) curriculum adopted by Universiti Malaya (UM) and Universiti Kebangsaan Malaysia (UKM). At the time this study was undertaken in 2005, UM was practising COE, although it too later adopted CBME as CFCS.

    Objective: To determine whether medical graduates from USM, whose curriculum has included community-based medical education for more than 25 years, show a stronger commitment towards community health in their on-the-job practice of medicine than graduates from UM and UKM, which adopted community-oriented medical education programmes.

    Method: A questionnaire-based pilot study with 12 items (variables) was designed to obtain supervisors' opinions on the commitment of interns towards the health of the community they serve. The questionnaire was administered to a randomised group of 85 specialists supervising the internship training programme in five major disciplines: internal medicine, surgery, orthopaedics, gynaecology and obstetrics, and paediatric medicine. The data received from 62 respondents across the five disciplines were analysed using SPSS version 12.0.1.

    Result: Responses were received from 62 supervisors on an inventory in which 9 of the 12 variables related directly to the community commitments of interns. USM graduates, taught through a CBME curriculum, performed better than the graduates from UM and UKM, who followed a COE curriculum. The p-value (< 0.001) was highly significant and consistent with the higher mean scores on those variables.

    Conclusion: Graduates taught through a CBME curriculum performed better in their community commitments towards patient care than graduates from a COE curriculum.
  5. Shahid Hassan, Rafidah Hod
    Background: Single best answer (SBA) multiple-choice items are often advantageous for their reliability and validity. However, SBA requires a good number of plausible distractors to achieve reliability. Apart from psychometric evaluation of the assessment, it is important to perform item analysis to improve item quality by analysing the difficulty index (DIF I), discrimination index (DI) and distractor efficiency (DE), the last based on the number of non-functional distractors (NFD). Objective: To evaluate the quality of SBA items administered in a professional examination, and to apply corrective measures determined by DIF I, DI and DE, using students' assessment scores. Method: An evaluation of a summative assessment (professional examination) of SBA items was performed as part of psychometric assessment after 86 weeks of teaching in the preclinical phase of the MD programme. Forty SBA items and 160 options inclusive of the keys were assessed using item analysis. One hundred and thirty-six students' SBA scores were analysed for mean and standard deviation, DIF I, DI and DE using MS Excel 2007. An unpaired t-test was applied to determine DE in relation to DIF I and DI with the level of significance. Item-total correlation (r) and internal consistency by Cronbach's alpha and the parallel-form method were also computed. Result: Fifteen items had DIF I = 0.31–0.61 and 25 items had DIF I ≤ 0.30 or ≥ 0.61. Twenty-six items had DI = 0.15 to ≥ 0.25, compared with 14 items with DI ≤ 0.15. There were 26 (65%) items with 1–3 NFD and 14 (35%) items without any NFD. Thirty-nine (32.50%) distractors had a choice frequency of 0. Overall mean DE was 65.8% and NFD totalled 49 (40.5%). DE in relation to DIF I and DI was statistically significant, with p = 0.010 and 0.020 respectively. Item-total correlation for most items was < 0.3. Internal consistency by Cronbach's alpha in SBA Tests 1 and 2 was 0.51 and 0.41 respectively, and constancy by the parallel-form method was 0.57 between SBA Tests 1 and 2. Conclusion: The high frequency of difficult or easy items and the moderate to poor discrimination suggest the need for corrective measures on the items. The increased number of NFD and low DE in this study indicate the difficulty teaching faculty have in developing plausible distractors for SBA questions, which is reflected in the poor reliability established by alpha. The item analysis results emphasise the need for evaluation to provide feedback and improve the quality of SBA items in assessment.
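    The indices named in this abstract have standard textbook definitions, which can be sketched as follows. The thresholds used here (upper/lower 27% groups for DI, < 5% choice frequency for a non-functional distractor) and the sample data are illustrative assumptions, not values taken from the study.

    ```python
    # Standard item-analysis indices for single-best-answer (SBA) items:
    # DIF I = proportion correct; DI = difference in correct answers between
    # the top and bottom scorer groups; a distractor chosen by < 5% of
    # examinees counts as non-functional (NFD).

    def difficulty_index(correct_flags):
        """DIF I: proportion of examinees answering the item correctly."""
        return sum(correct_flags) / len(correct_flags)

    def discrimination_index(correct_flags, total_scores, frac=0.27):
        """DI: (upper-group correct - lower-group correct) / group size."""
        order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
        n = max(1, int(len(order) * frac))
        lower, upper = order[:n], order[-n:]
        return (sum(correct_flags[i] for i in upper)
                - sum(correct_flags[i] for i in lower)) / n

    def nonfunctional_distractors(choice_counts, key, threshold=0.05):
        """Distractors (non-key options) selected by < threshold of examinees."""
        total = sum(choice_counts.values())
        return [opt for opt, c in choice_counts.items()
                if opt != key and c / total < threshold]

    # Illustrative data: 10 examinees, option counts for one 4-option item.
    flags = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]            # correct (1) per examinee
    totals = [34, 30, 12, 28, 10, 25, 33, 15, 29, 31]  # overall test scores
    counts = {"A": 7, "B": 2, "C": 1, "D": 0}          # "A" is the key

    print(difficulty_index(flags))                 # 0.7
    print(discrimination_index(flags, totals))     # 1.0
    print(nonfunctional_distractors(counts, "A"))  # ['D']
    ```

    Distractor efficiency then follows directly: an item with 0, 1, 2 or 3 NFDs out of 3 distractors has a DE of 100%, 66.6%, 33.3% or 0% respectively.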
  6. Shahid Hassan, Mohamad Najib Mat Pa, Muhamad Saiful Bahri Yusoff
    Background: Summative assessment in postgraduate examinations globally employs multiple measures. A standard-setting method decides on pass or fail based on an arbitrarily defined cut-off point on a test score, which is often the content experts' subjective judgment. A standard-setting strategy, by contrast, primarily follows one of two approaches: a compensatory approach, which decides on overall performance as the sum of all test scores, and a conjunctive approach, which requires a passing performance on each instrument. The challenge in using multiple measures lies not in the number of measurement tools but in the logic by which the measures are combined to draw a pass or fail inference in summative assessment. The Conjoint University Board of Examination of the Masters of Otolaryngology and Head-Neck Surgery (ORL-HNS) in Malaysia also uses multiple measures to reach a passing or failing decision in summative assessment. However, the standard-setting strategy is loosely and variably applied in making the ultimate pass or fail decision. To collect evidence, the summative assessment programme of the Masters of ORL-HNS in the School of Medical Sciences at Universiti Sains Malaysia was analysed for validity, to evaluate the appropriateness of decisions in postgraduate medical education in Malaysia. Methodology: A retrospective study was undertaken to evaluate the validity of the conjoint summative assessment results of the Part II examination of USM candidates during May 2000 to May 2011. Pearson correlation and multiple linear regression were used to determine the discriminant and convergent validity of the assessment tools. Pearson's correlation coefficient analysed the association between assessment tools, and multiple linear regression compared the dominant roles of factor variables in predicting outcomes. Based on the outcome of the study, reforms of the standard-setting strategy are also recommended for programming assessment in a surgical-based discipline.
Results: The correlation coefficient between MCQ and essay questions was not significant (0.16). Long and short cases showed good correlation (0.53). The oral test stood out as a component showing fair correlation with the written (0.39-0.42) as well as the clinical component (0.50-0.66). The predictive values in the written tests suggested MCQ was predicted by the oral test (B=0.34, P
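The compensatory versus conjunctive distinction described in this abstract reduces to a simple decision rule, sketched below. The component names and cut-off scores are illustrative assumptions, not the board's actual standards.

```python
# Two standard-setting approaches for combining multiple measures.
# Component names and cut-offs are illustrative only.

CUTOFFS = {"mcq": 50, "essay": 50, "long_case": 50, "oral": 50}

def compensatory_pass(scores, overall_cutoff=200):
    """Pass if the SUM of all component scores meets the overall cut-off;
    a strong score on one tool can compensate for a weak one."""
    return sum(scores.values()) >= overall_cutoff

def conjunctive_pass(scores, cutoffs=CUTOFFS):
    """Pass only if EVERY component meets its own cut-off."""
    return all(scores[c] >= cutoffs[c] for c in cutoffs)

candidate = {"mcq": 70, "essay": 45, "long_case": 55, "oral": 60}
print(compensatory_pass(candidate))  # True  (total 230 >= 200)
print(conjunctive_pass(candidate))   # False (essay 45 < 50)
```

The same candidate can therefore pass under one strategy and fail under the other, which is why the abstract stresses the logic of combination rather than the number of tools.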
  7. Shahid Hassan, Zafar Ahmed, Ahmad Fuad Abdul Rahim
    Background: Faculty members' role as educators is overlooked in clinical education, even though teaching is directly reflected in the clinical competence and professional development of graduating doctors. Two major problems of clinical education are the lack of uniform teaching and learning strategies in postgraduate and later-year undergraduate clinical teaching, and the professional development of faculty as teachers in medical institutions. Objective: The survey had two major objectives. The first was to gauge the faculty response to a survey on teaching while creating awareness of teaching and of research in teaching. The second was to assess faculty members' understanding of the principles of learning and teaching, together with the strengths and weaknesses of respondents' performance in clinical teaching, on completing The Educator's Self-Reflective Inventory (ESRI). Method: The ESRI was administered to 214 faculty members in the SMS at USM. Self-reflective appraisal as medical teachers and personal development with respect to the challenges, opportunities, innovations and needs assessment of teaching were explored through 35 items grouped in 5 clusters in a questionnaire-based survey utilising the ESRI. Result: Statistical analysis of the respondents' data indicated a mixed response rate: 54.54% in lab-based disciplines, followed by 50% in surgical-based disciplines and 30% in medical-based disciplines. The best individual-discipline responses were received from Plastic Surgery and ORL-HNS (100%) and Haematology (77.77%). The individual item responses in each cluster were also analysed. Conclusion: The survey evaluated the faculty's response to the ESRI and the concern shown for developing their abilities as teachers and researchers in clinical teaching. However, the initial response suggested the need for further surveys to continue creating awareness of faculty development and research in teaching.
The conclusion drawn from the analysis of each item in the inventory is encouraging for teaching in medical education.
  8. Shahid Hassan, Mohd Salami Ibrahim, Nabiha Gul Hassan
    Delivery and implementation strategies are key to curriculum success. There is growing evidence that team-based learning (TBL) is an effective form of interactive teaching. TBL is a method that uses learning teams to enhance student engagement and the quality of learning. Individual accountability for out-of-class reading is followed by individual and group assessment, and the in-class application exercises that are the hallmark of team-based learning promote both learning and team development. TBL applies educational principles that transform traditional content into the application of knowledge and problem-solving skills in an interactive learning environment. The study aimed to experience the structural framework of TBL and to determine students' perception of TBL in the clinical setting of the MBBS programme in a Malaysian medical school. A total of 120 students, assigned to 22 small subgroups of 5–6 per group, underwent a number of TBL sessions delivered in three phases. In Phase I, students were assigned reading material. In Phase II, students were assessed through One Best Answer (OBA) items as an individual readiness assessment test (IRAT) and a group readiness assurance test (GRAT) respectively, followed by a mini-lecture. In Phase III, the in-class application learning activity was performed. Finally, peer assessment evaluated each peer's contribution to TBL. A TBL Classroom Evaluation Inventory (TBLCEI), developed to probe students' perception of TBL, comprised a 40-item composite scale with a Cronbach's alpha of 0.881. In addition, students were asked to provide their estimated grade in the end-of-posting assessment. Grades were categorised into excellent pass (> 85%), high pass (70%–84%), average to good pass (50%–69%) and fail
  9. Shahid Hassan, Ahmad Fuad Abdul Rahim, Mohamad Najib Mat Pa, Mohd Nor Gohar Rahman, Muhamad Saiful Bahri Yusoff
    Introduction: A clear concept and understanding of the measure and the measuring tools are essential for good assessment practice. Assessors need information about the full range of assessment tools, including their psychometric validity and the purpose of their use. Subjective inferences drawn from readily available data, such as summative scores over the years, together with statistical evidence of the reliability and validity of the assessment tools used to measure students' performance, are good sources of feedback for a competent assessment programme. They also provide meaningful evaluation of learning and teaching in medical education. Method: A retrospective study of 119 candidates was carried out to analyse the summative assessment scores of their certifying examination for the Masters of Surgery in the School of Medical Sciences (SMS) at Universiti Sains Malaysia. Subjective judgment of the raw data was followed by analysis of internal consistency as reliability, and of convergent and discriminant validity, as constructs of each individual assessment tool. Finally, each assessment tool, as a measure of the written or the clinical construct, was evaluated against the six aspects of Messick's criteria for quality control. Result: The correlation coefficient for validity and Cronbach's alpha for reliability were evaluated for the clinical measures. However, a test of internal reliability was not possible for the essay, it being the only measure in the written construct of summative assessment in surgery. All measures of the clinical construct were found highly reliable, with Cronbach's alpha between 0.962 and 0.979. The long case and the short cases showed excellent correlation (r=0.959 at p
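    The internal-consistency figures quoted here rest on Cronbach's alpha, whose standard formula is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch, with made-up rating data rather than the study's scores:

    ```python
    # Cronbach's alpha from a candidates-by-measures score table.
    # The ratings below are illustrative, not data from the study.

    def variance(xs):
        """Population variance of a list of numbers."""
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    def cronbach_alpha(item_scores):
        """item_scores: one inner list per measure, aligned by candidate.
        alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
        k = len(item_scores)
        totals = [sum(col) for col in zip(*item_scores)]
        item_var = sum(variance(item) for item in item_scores)
        return k / (k - 1) * (1 - item_var / variance(totals))

    # Three clinical measures rated for five candidates (illustrative).
    scores = [
        [7, 8, 6, 9, 5],
        [6, 8, 7, 9, 4],
        [7, 9, 6, 8, 5],
    ]
    print(round(cronbach_alpha(scores), 3))  # 0.949
    ```

    When the measures move together across candidates, as here, alpha approaches 1, which is the pattern the abstract reports for the clinical construct.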
  10. Uday Younis Hussein Abdullah, Haitham Muhammed Jassim, Nor Iza Abdul Rahman, Tg Fatimah Murniwati Tengku Muda, Nordin Simbak, Shahid Hassan
    Introduction: Metacognition is the awareness of how one learns, in addition to what one learns, and of how a task will be performed. Metacognitive skill as self-assessment is recognised as an important contributor to the development of critical capacity, a reflective attitude and autonomous life-long learning. Accurate self-assessment of knowledge and skills is essential for students to maintain and improve through self-directed learning. Objective: The objective of this study was to explore how well students evaluate their own level of understanding of lectures, as a reflection of metacognitive skill that can be used in an educational strategy to promote students' personal and professional growth. Methods: To assess the students' metacognition, a questionnaire based on three items was designed. All 60 (17 male and 43 female) preclinical, first-year medical students were included in this study. Metacognition as planning, monitoring and evaluating the lecture was judged through students' responses on 33 lectures in the fields of haematology and parasitology, in terms of understanding of knowledge, clearing of misconceptions and presentation of well-prepared material respectively. Metacognition, as reflected in the lecture understanding level (LUL) score, lecture preparation level (LPL) score and student question level (SQL) score, was estimated for its correlation with students' achievement scores in the preclinical phase of the MBBS programme. Results: The data were analysed for correlation between metacognition and overall student achievement scores. Statistically significant correlations were found between LUL and multiple true false (MTF) of .268 (p = .039), between LPL and MTF of .282 (p = .029) and between SQL and MTF of .360 (p = .005), compared with poor correlations between LUL, LPL and SQL and the other three assessment tools (short essay questions [SEQ], problem-based questions [PBQ] and objectively structured practical examination [OSPE]). Conclusion: The significant correlation of students' metacognition with their achievement scores on MTF in the classroom setting, and the poor correlation with SEQ, PBQ and OSPE, are attributed to multiple factors discussed in this study, which are imperative to students' personal and professional growth.