  1. Harel D, Wu Y, Levis B, Fan S, Sun Y, Xu M, et al.
    J Affect Disord, 2024 Sep 15;361:674-683.
    PMID: 38908554 DOI: 10.1016/j.jad.2024.06.033
    Administration mode of patient-reported outcome measures (PROMs) may influence responses. We assessed whether Patient Health Questionnaire-9 (PHQ-9), Edinburgh Postnatal Depression Scale (EPDS), and Hospital Anxiety and Depression Scale - Depression subscale (HADS-D) item responses and scores were associated with administration mode. We compared (1) self-administration versus interview-administration; within self-administration, (2) research or medical setting versus private, and (3) pen-and-paper versus electronic; and within interview-administration, (4) in-person versus phone. We analysed individual participant data meta-analysis datasets with item-level data for the PHQ-9 (N = 34,529), EPDS (N = 16,813), and HADS-D (N = 16,768). We used multiple indicator multiple cause (MIMIC) models to assess differential item functioning (DIF) by administration mode. We found statistically significant DIF for most items on all measures, owing to the large samples, but its influence on total scores was negligible. In 10 comparisons conducted across the PHQ-9, EPDS, and HADS-D, Pearson correlations and intraclass correlation coefficients between latent depression symptom scores from models that did or did not account for DIF were between 0.995 and 1.000. Total PHQ-9, EPDS, and HADS-D scores did not differ materially across administration modes. Researchers and clinicians who evaluate depression symptoms with these questionnaires can select administration methods based on patient preferences, feasibility, or cost.
    (A minimal sketch of the score-agreement check described here appears after the publication list.)
  2. Wu Y, Levis B, Daray FM, Ioannidis JPA, Patten SB, Cuijpers P, et al.
    Psychol Assess, 2023 Feb;35(2):95-114.
    PMID: 36689386 DOI: 10.1037/pas0001181
    The seven-item Hospital Anxiety and Depression Scale Depression subscale (HADS-D) and the total score of the 14-item HADS (HADS-T) are both used for major depression screening. Compared to the HADS-D, the HADS-T includes anxiety items and requires more time to complete. We compared the screening accuracy of the HADS-D and HADS-T for major depression detection. We conducted an individual participant data meta-analysis and fit bivariate random-effects models to assess diagnostic accuracy among participants with both HADS-D and HADS-T scores. We identified optimal cutoffs, estimated sensitivity and specificity with 95% confidence intervals, and compared screening accuracy across paired cutoffs via two-stage and individual-level models. We used a 0.05 equivalence margin to assess equivalency in sensitivity and specificity. 20,700 participants (2,285 major depression cases) from 98 studies were included. Cutoffs of ≥7 for the HADS-D (sensitivity 0.79 [0.75, 0.83], specificity 0.78 [0.75, 0.80]) and ≥15 for the HADS-T (sensitivity 0.79 [0.76, 0.82], specificity 0.81 [0.78, 0.83]) minimized the distance to the top-left corner of the receiver operating characteristic curve. Across all sets of paired cutoffs evaluated, differences in sensitivity between the HADS-T and HADS-D ranged from -0.05 to 0.01 (0.00 at the paired optimal cutoffs), and differences in specificity were within 0.03 for all cutoffs (0.02-0.03). The pattern was similar among outpatients, although the HADS-T was slightly, but not non-equivalently, more specific among inpatients. The accuracy of the HADS-T was equivalent to that of the HADS-D for detecting major depression. In most settings, the shorter HADS-D would be preferred.
    (A minimal sketch of the distance-to-corner cutoff criterion appears after the publication list.)
  3. Levis B, Bhandari PM, Neupane D, Fan S, Sun Y, He C, et al.
    JAMA Netw Open, 2024 Nov 04;7(11):e2429630.
    PMID: 39576645 DOI: 10.1001/jamanetworkopen.2024.29630
    IMPORTANCE: Test accuracy studies often use small datasets to simultaneously select an optimal cutoff score that maximizes test accuracy and generate accuracy estimates.

    OBJECTIVE: To evaluate the degree to which using data-driven methods to simultaneously select an optimal Patient Health Questionnaire-9 (PHQ-9) cutoff score and estimate accuracy yields (1) optimal cutoff scores that differ from the population-level optimal cutoff score and (2) biased accuracy estimates.

    DESIGN, SETTING, AND PARTICIPANTS: This study used cross-sectional data from an existing individual participant data meta-analysis (IPDMA) database on PHQ-9 screening accuracy to represent a hypothetical population. Studies in the IPDMA database compared participant PHQ-9 scores with a major depression classification. From the IPDMA population, 1000 studies of 100, 200, 500, and 1000 participants each were resampled.

    MAIN OUTCOMES AND MEASURES: For the full IPDMA population and each simulated study, an optimal cutoff score was selected by maximizing the Youden index. Accuracy estimates for optimal cutoff scores in simulated studies were compared with accuracy in the full population.

    RESULTS: The IPDMA database included 100 primary studies with 44 503 participants (4541 [10%] cases of major depression). The population-level optimal cutoff score was 8 or higher. Optimal cutoff scores in simulated studies ranged from 2 or higher to 21 or higher in samples of 100 participants and 5 or higher to 11 or higher in samples of 1000 participants. The percentage of simulated studies that identified the true optimal cutoff score of 8 or higher was 17% for samples of 100 participants and 33% for samples of 1000 participants. Compared with estimates for a cutoff score of 8 or higher in the population, sensitivity was overestimated by 6.4 (95% CI, 5.7-7.1) percentage points in samples of 100 participants, 4.9 (95% CI, 4.3-5.5) percentage points in samples of 200 participants, 2.2 (95% CI, 1.8-2.6) percentage points in samples of 500 participants, and 1.8 (95% CI, 1.5-2.1) percentage points in samples of 1000 participants. Specificity was within 1 percentage point across sample sizes.

    CONCLUSIONS AND RELEVANCE: This study of cross-sectional data found that optimal cutoff scores and accuracy estimates differed substantially from population values when data-driven methods were used to simultaneously identify an optimal cutoff score and estimate accuracy. Users of diagnostic accuracy evidence should evaluate studies of accuracy with caution and ensure that cutoff score recommendations are based on adequately powered research or well-conducted meta-analyses.
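
The first publication's central check is that latent depression symptom scores from MIMIC models with and without DIF adjustment agree almost perfectly (Pearson correlations and ICCs of 0.995-1.000). The sketch below shows only that agreement step, on synthetic score vectors; the model fitting is omitted, and the variable names and the ICC(2,1) variant are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical latent scores from a model that ignores administration-mode DIF and from a
# DIF-adjusted model; the near-identical synthetic vectors stand in for the fitted scores.
theta_unadjusted = rng.normal(0.0, 1.0, 5_000)
theta_adjusted = theta_unadjusted + rng.normal(0.0, 0.05, 5_000)
scores = np.column_stack([theta_unadjusted, theta_adjusted])

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement ICC(2,1) for an n x k score matrix."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-model
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows, ms_cols = ss_rows / (n - 1), ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

print("Pearson r:", np.corrcoef(scores, rowvar=False)[0, 1])
print("ICC(2,1): ", icc_2_1(scores))
```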
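
The second publication selects optimal HADS-D and HADS-T cutoffs by minimizing the distance to the top-left corner of the ROC curve. Below is a minimal sketch of that criterion on synthetic HADS-D-like scores; the score distributions, prevalence, and variable names are assumptions for illustration, not the study data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic HADS-D-like total scores (range 0-21) with roughly 10% depression prevalence.
cases = np.clip(np.round(rng.normal(11, 4, 300)), 0, 21)
noncases = np.clip(np.round(rng.normal(5, 3, 2700)), 0, 21)
scores = np.concatenate([cases, noncases])
depressed = np.concatenate([np.ones(cases.size, bool), np.zeros(noncases.size, bool)])

# Sensitivity and specificity at every candidate cutoff ("score >= cutoff" is positive).
results = []
for cutoff in range(22):
    positive = scores >= cutoff
    sens = (positive & depressed).sum() / depressed.sum()
    spec = (~positive & ~depressed).sum() / (~depressed).sum()
    results.append((cutoff, sens, spec))

# Criterion from the abstract: minimize the distance to the ROC top-left corner (0, 1).
cutoff, sens, spec = min(results, key=lambda r: np.hypot(1 - r[1], 1 - r[2]))
print(f"optimal cutoff >= {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

The paper then pairs HADS-D and HADS-T cutoffs chosen this way and tests whether the differences in sensitivity and specificity stay within a 0.05 equivalence margin.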
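
The third publication resamples small studies from an IPDMA "population", re-selects the Youden-optimal PHQ-9 cutoff in each sample, and compares the in-sample accuracy at that cutoff with the population value. The sketch below reproduces that mechanism on synthetic PHQ-9-like data; the sample counts, score distributions, and function names are assumptions, and the resulting numbers will not match the published estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "population" of PHQ-9-like scores (0-27) with ~10% major depression prevalence.
N = 44_503
depressed = rng.random(N) < 0.10
scores = np.clip(np.round(np.where(depressed,
                                   rng.normal(13, 5, N),
                                   rng.normal(5, 4, N))), 0, 27)

def sens_spec(s, y, cutoff):
    positive = s >= cutoff
    return ((positive & y).sum() / y.sum(),
            (~positive & ~y).sum() / (~y).sum())

def youden_optimal(s, y):
    # Cutoff maximizing the Youden index (sensitivity + specificity - 1).
    return max(range(28), key=lambda c: sum(sens_spec(s, y, c)) - 1)

pop_cutoff = youden_optimal(scores, depressed)
pop_sens = sens_spec(scores, depressed, pop_cutoff)[0]

# Resample small studies, re-select the cutoff in each, and compare the in-sample
# sensitivity at the sample-selected cutoff with the population value.
sens_diff = []
for _ in range(200):                       # the paper uses 1000 resamples per sample size
    idx = rng.choice(N, size=100, replace=False)
    s, y = scores[idx], depressed[idx]
    if 0 < y.sum() < y.size:               # need at least one case and one non-case
        c = youden_optimal(s, y)
        sens_diff.append(sens_spec(s, y, c)[0] - pop_sens)

print(f"population optimal cutoff: >= {pop_cutoff}")
print(f"mean sensitivity difference vs population (n=100 samples): {np.mean(sens_diff):+.3f}")
```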
