Displaying all 4 publications

  1. Craig L, Hoo ZL, Yan TZ, Wardlaw J, Quinn TJ
    J Neurol Neurosurg Psychiatry, 2022 Feb;93(2):180-187.
    PMID: 34782389 DOI: 10.1136/jnnp-2020-325796
    An understanding of the epidemiology of poststroke dementia (PSD) is necessary to inform research, practice and policy. With increasing primary studies, a contemporary review of PSD could allow for analyses of incidence and prevalence trends.

    Databases were searched using a prespecified search strategy. Eligible studies described an ischaemic or mixed stroke cohort with prospective clinical assessment for dementia. Pooled prevalence of dementia was calculated using random-effects models at any time after stroke (primary outcome) and at 1 year (range: 6-18 months), stratified for inclusion of prestroke dementia. Meta-regression explored the effect of year of study. Sensitivity analyses removed low-quality or outlier studies.

    Of 12 505 titles assessed, 44 studies were included in the quantitative analyses. At any time point after stroke, the prevalence of PSD was 16.5% (95% CI 10.4% to 25.1%) excluding prestroke dementia and 22.3% (95% CI 18.8% to 26.2%) including it. At 1 year, the prevalence of PSD was 18.4% (95% CI 7.4% to 38.7%) excluding prestroke dementia and 20.4% (95% CI 14.2% to 28.2%) including it. In studies including prestroke dementia, there was a negative association between dementia prevalence and year of study (slope coefficient = -0.05 (SD: 0.01), p<0.0001). Estimates were robust to sensitivity analyses.

    Dementia is common following stroke: at any point after stroke, more than one in five people will have dementia, although a proportion of this dementia predates the stroke. Declining prevalence of prestroke dementia may explain the apparent reduction in PSD over time. The risk of dementia following stroke remains substantial and front-loaded, with high prevalence at 1 year post event.
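    The random-effects pooling described in this abstract can be sketched as follows. The review does not state which estimator was used, so this sketch assumes the standard DerSimonian-Laird method on logit-transformed prevalences; the three studies are synthetic, for illustration only.

    ```python
    import math

    def dersimonian_laird(estimates, variances):
        """Pool study-level estimates with a DerSimonian-Laird random-effects model."""
        w = [1.0 / v for v in variances]              # inverse-variance (fixed-effect) weights
        fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
        # Cochran's Q and the method-of-moments between-study variance tau^2
        q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
        df = len(estimates) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)
        # Random-effects weights incorporate the between-study variance
        w_star = [1.0 / (v + tau2) for v in variances]
        pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
        se = math.sqrt(1.0 / sum(w_star))
        return pooled, se

    def logit(p):
        return math.log(p / (1 - p))

    def inv_logit(x):
        return 1 / (1 + math.exp(-x))

    # Synthetic (prevalence, sample size) pairs standing in for primary studies
    studies = [(0.15, 200), (0.22, 150), (0.18, 300)]
    ests = [logit(p) for p, n in studies]
    varis = [1.0 / (n * p * (1 - p)) for p, n in studies]  # delta-method variance of logit(p)
    pooled, se = dersimonian_laird(ests, varis)
    print(round(inv_logit(pooled), 3))  # pooled prevalence, back-transformed
    ```

    The pooled estimate falls between the smallest and largest study prevalences, with a wider standard error than a fixed-effect model would give whenever between-study heterogeneity (tau^2 > 0) is detected.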
  2. Wu Y, Levis B, Daray FM, Ioannidis JPA, Patten SB, Cuijpers P, et al.
    Psychol Assess, 2023 Feb;35(2):95-114.
    PMID: 36689386 DOI: 10.1037/pas0001181
    The seven-item depression subscale of the Hospital Anxiety and Depression Scale (HADS-D) and the total score of the 14-item HADS (HADS-T) are both used for major depression screening. Compared to the HADS-D, the HADS-T includes anxiety items and requires more time to complete. We compared the screening accuracy of the HADS-D and HADS-T for major depression detection.

    We conducted an individual participant data meta-analysis and fit bivariate random-effects models to assess diagnostic accuracy among participants with both HADS-D and HADS-T scores. We identified optimal cutoffs, estimated sensitivity and specificity with 95% confidence intervals, and compared screening accuracy across paired cutoffs via two-stage and individual-level models. We used a 0.05 equivalence margin to assess equivalency in sensitivity and specificity.

    20,700 participants (2,285 major depression cases) from 98 studies were included. Cutoffs of ≥7 for the HADS-D (sensitivity 0.79 [0.75, 0.83], specificity 0.78 [0.75, 0.80]) and ≥15 for the HADS-T (sensitivity 0.79 [0.76, 0.82], specificity 0.81 [0.78, 0.83]) minimized the distance to the top-left corner of the receiver operating characteristic curve. Across all sets of paired cutoffs evaluated, differences in sensitivity between the HADS-T and HADS-D ranged from -0.05 to 0.01 (0.00 at the paired optimal cutoffs), and differences in specificity were within 0.03 for all cutoffs (0.02-0.03). The pattern was similar among outpatients, although the HADS-T was slightly, but not nonequivalently, more specific among inpatients.

    The accuracy of the HADS-T was equivalent to that of the HADS-D for detecting major depression. In most settings, the shorter HADS-D would be preferred.
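    The cutoff-selection rule this abstract describes, choosing the cutoff that minimizes the distance to the top-left corner of the ROC curve, can be illustrated with a small sketch. The scores and labels below are synthetic, not IPDMA data.

    ```python
    import math

    def optimal_cutoff(scores, labels, cutoffs):
        """Return (cutoff, sensitivity, specificity) minimizing distance to (0, 1) in ROC space."""
        best = None
        for c in cutoffs:
            # Score >= cutoff counts as screen-positive
            tp = sum(1 for s, y in zip(scores, labels) if s >= c and y)
            fn = sum(1 for s, y in zip(scores, labels) if s < c and y)
            tn = sum(1 for s, y in zip(scores, labels) if s < c and not y)
            fp = sum(1 for s, y in zip(scores, labels) if s >= c and not y)
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            d = math.hypot(1 - sens, 1 - spec)  # distance to the ROC top-left corner
            if best is None or d < best[0]:
                best = (d, c, sens, spec)
        return best[1:]

    # Synthetic data: cases (label 1) tend to score higher than non-cases (label 0)
    scores = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
    labels = [0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
    print(optimal_cutoff(scores, labels, range(0, 15)))
    ```

    On this toy data the rule selects a cutoff of ≥8 (sensitivity 1.0, specificity 6/7), balancing the two error rates rather than maximizing either alone.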
  3. Wu Y, Levis B, Sun Y, Krishnan A, He C, Riehm KE, et al.
    J Psychosom Res, 2020 Feb;129:109892.
    PMID: 31911325 DOI: 10.1016/j.jpsychores.2019.109892
    OBJECTIVE: Two previous individual participant data meta-analyses (IPDMAs) found that different diagnostic interviews classify different proportions of people as having major depression overall or by symptom levels. We compared the odds of major depression classification across diagnostic interviews among studies that administered the Depression subscale of the Hospital Anxiety and Depression Scale (HADS-D).

    METHODS: Data accrued for an IPDMA on HADS-D diagnostic accuracy were analysed. We fit binomial generalized linear mixed models to compare odds of major depression classification for the Structured Clinical Interview for DSM (SCID), Composite International Diagnostic Interview (CIDI), and Mini International Neuropsychiatric Interview (MINI), controlling for HADS-D scores and participant characteristics with and without an interaction term between interview and HADS-D scores.

    RESULTS: There were 15,856 participants (1942 [12%] with major depression) from 73 studies, including 15,335 (97%) non-psychiatric medical patients, 164 (1%) partners of medical patients, and 357 (2%) healthy adults. The MINI (27 studies, 7345 participants, 1066 major depression cases) classified participants as having major depression more often than the CIDI (10 studies, 3023 participants, 269 cases) (adjusted odds ratio [aOR] = 1.70 (0.84, 3.43)) and the semi-structured SCID (36 studies, 5488 participants, 607 cases) (aOR = 1.52 (1.01, 2.30)). The odds of major depression classification with the CIDI increased less with increasing HADS-D scores than with the SCID (interaction aOR = 0.92 (0.88, 0.96)).

    CONCLUSION: Compared to the SCID, the MINI may diagnose more participants as having major depression, and the CIDI may be less responsive to symptom severity.

  4. Levis B, Bhandari PM, Neupane D, Fan S, Sun Y, He C, et al.
    JAMA Netw Open, 2024 Nov 04;7(11):e2429630.
    PMID: 39576645 DOI: 10.1001/jamanetworkopen.2024.29630
    IMPORTANCE: Test accuracy studies often use small datasets to simultaneously select an optimal cutoff score that maximizes test accuracy and generate accuracy estimates.

    OBJECTIVE: To evaluate the degree to which using data-driven methods to simultaneously select an optimal Patient Health Questionnaire-9 (PHQ-9) cutoff score and estimate accuracy yields (1) optimal cutoff scores that differ from the population-level optimal cutoff score and (2) biased accuracy estimates.

    DESIGN, SETTING, AND PARTICIPANTS: This study used cross-sectional data from an existing individual participant data meta-analysis (IPDMA) database on PHQ-9 screening accuracy to represent a hypothetical population. Studies in the IPDMA database compared participant PHQ-9 scores with a major depression classification. From the IPDMA population, 1000 studies of 100, 200, 500, and 1000 participants each were resampled.

    MAIN OUTCOMES AND MEASURES: For the full IPDMA population and each simulated study, an optimal cutoff score was selected by maximizing the Youden index. Accuracy estimates for optimal cutoff scores in simulated studies were compared with accuracy in the full population.

    RESULTS: The IPDMA database included 100 primary studies with 44 503 participants (4541 [10%] cases of major depression). The population-level optimal cutoff score was 8 or higher. Optimal cutoff scores in simulated studies ranged from 2 or higher to 21 or higher in samples of 100 participants and 5 or higher to 11 or higher in samples of 1000 participants. The percentage of simulated studies that identified the true optimal cutoff score of 8 or higher was 17% for samples of 100 participants and 33% for samples of 1000 participants. Compared with estimates for a cutoff score of 8 or higher in the population, sensitivity was overestimated by 6.4 (95% CI, 5.7-7.1) percentage points in samples of 100 participants, 4.9 (95% CI, 4.3-5.5) percentage points in samples of 200 participants, 2.2 (95% CI, 1.8-2.6) percentage points in samples of 500 participants, and 1.8 (95% CI, 1.5-2.1) percentage points in samples of 1000 participants. Specificity was within 1 percentage point across sample sizes.

    CONCLUSIONS AND RELEVANCE: This study of cross-sectional data found that optimal cutoff scores and accuracy estimates differed substantially from population values when data-driven methods were used to simultaneously identify an optimal cutoff score and estimate accuracy. Users of diagnostic accuracy evidence should evaluate studies of accuracy with caution and ensure that cutoff score recommendations are based on adequately powered research or well-conducted meta-analyses.
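    The resampling design this study describes can be sketched as follows: draw small studies from a population, pick each study's cutoff by maximizing the Youden index, and compare the in-sample sensitivity at that cutoff with the population value. The prevalence, score distributions, and population here are synthetic stand-ins, not the PHQ-9 IPDMA data.

    ```python
    import random

    random.seed(1)

    def simulate_person():
        case = random.random() < 0.10   # assume 10% depression prevalence, as in the IPDMA
        mu = 12 if case else 5          # assume cases score higher on average (synthetic)
        score = max(0, min(27, round(random.gauss(mu, 4))))  # clip to the PHQ-9 range 0-27
        return score, case

    def youden_cutoff(data):
        """Cutoff c (score >= c is positive) maximizing sensitivity + specificity - 1."""
        cases = [s for s, y in data if y]
        noncases = [s for s, y in data if not y]
        best_c, best_j = 0, -1.0
        for c in range(0, 28):
            sens = sum(s >= c for s in cases) / len(cases)
            spec = sum(s < c for s in noncases) / len(noncases)
            if sens + spec - 1 > best_j:
                best_c, best_j = c, sens + spec - 1
        return best_c

    def sensitivity(data, c):
        cases = [s for s, y in data if y]
        return sum(s >= c for s in cases) / len(cases)

    population = [simulate_person() for _ in range(50_000)]
    pop_cut = youden_cutoff(population)

    # In-sample sensitivity at each small study's own "optimal" cutoff, minus the
    # population sensitivity at the population's optimal cutoff
    optimism = []
    for _ in range(200):
        study = random.sample(population, 100)
        if not any(y for _, y in study):  # skip the rare case-free sample
            continue
        c = youden_cutoff(study)
        optimism.append(sensitivity(study, c) - sensitivity(population, pop_cut))

    # Mean of in-sample minus population sensitivity; positive values indicate optimism
    print(round(sum(optimism) / len(optimism), 3))
    ```

    Because the same small sample both selects the cutoff and estimates accuracy at it, the selected cutoff chases sampling noise, which is the mechanism behind the overestimated sensitivity the study reports at small sample sizes.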
