METHODS: Based on a preregistered protocol (CRD42022377671), we searched PubMed, Medline, Ovid Embase, APA PsycINFO, and Web of Science on 15 August 2022, with no restrictions on language or document type. We included studies reporting accuracy measures (e.g. sensitivity, specificity, or area under the receiver operating characteristic curve [AUC]) for the QbTest in discriminating between people with and without a DSM/ICD diagnosis of ADHD. Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2). A generic inverse variance meta-analysis was conducted on AUC scores, and pooled sensitivity and specificity were calculated using a random-effects bivariate model in R.
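The two pooling steps can be illustrated with a minimal R sketch. The metafor and mada packages, the data frame qb_studies, and all of its values are assumptions for illustration only; the abstract does not name the packages actually used.

```r
# Hedged sketch of the pooling described above (hypothetical data and packages).
library(metafor)  # generic inverse-variance random-effects meta-analysis
library(mada)     # bivariate random-effects (Reitsma) model for sensitivity/specificity

# Hypothetical per-study data: AUC with its standard error plus 2x2 counts
qb_studies <- data.frame(
  study = paste0("S", 1:5),
  auc   = c(0.74, 0.70, 0.68, 0.77, 0.71),  # study-level AUC
  se    = c(0.05, 0.06, 0.04, 0.07, 0.05),  # standard error of the AUC
  TP    = c(40, 35, 50, 28, 33),            # true positives
  FN    = c(12, 10, 14,  9, 11),            # false negatives
  FP    = c(15, 12, 20, 10, 13),            # false positives
  TN    = c(30, 28, 36, 25, 27)             # true negatives
)

# Generic inverse-variance random-effects pooling of AUC scores
auc_pooled <- rma(yi = auc, sei = se, data = qb_studies, method = "REML")
summary(auc_pooled)

# Bivariate random-effects model jointly pooling sensitivity and specificity
bivariate_fit <- reitsma(qb_studies)  # expects TP, FN, FP, TN columns
summary(bivariate_fit)                # reports pooled sensitivity and false-positive rate
```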
RESULTS: We included 15 studies (2,058 participants; 48.6% with ADHD). QbTest Total scores showed acceptable, rather than good, sensitivity (0.78 [95% confidence interval: 0.69; 0.85]) and specificity (0.70 [0.57; 0.81]), while the subscales showed low-to-moderate sensitivity (ranging from 0.48 [0.35; 0.61] to 0.65 [0.52; 0.75]) and moderate-to-good specificity (from 0.65 [0.48; 0.78] to 0.83 [0.60; 0.94]). Pooled AUC scores suggested moderate-to-acceptable discriminative ability (Q-Total: 0.72 [0.57; 0.87]; Q-Activity: 0.67 [0.58; 0.77]; Q-Inattention: 0.66 [0.59; 0.72]; Q-Impulsivity: 0.59 [0.53; 0.64]).
CONCLUSIONS: When used on their own, the QbTest scores available to clinicians are not sufficiently accurate to discriminate between ADHD and non-ADHD clinical cases. The QbTest should therefore not be used as a stand-alone screening or diagnostic tool, or as a triage system for accepting individuals onto the waiting list for clinical services. However, when used as an adjunct to a full clinical assessment, the QbTest can produce efficiencies in the assessment pathway and reduce the time to diagnosis.
METHODS: A general population sample of children and parents was recruited. The dimensionality of the PedsPCF was assessed using confirmatory factor analyses and exploratory bifactor analyses. Item response theory (IRT) modeling was used to evaluate the model fit of the PedsPCF, to identify differential item functioning (DIF), and to select items for the short form; the neuropsychological content of the items was also considered when selecting short-form items.
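A minimal R sketch of these modeling steps (one-factor CFA, graded response model, DIF testing) is given below. The lavaan and mirt packages and the simulated item responses are assumptions for illustration; the abstract does not specify the software or exact procedures used.

```r
# Hedged sketch of the analyses described above, on simulated (not study) data.
library(lavaan)  # confirmatory factor analysis
library(mirt)    # graded response model and DIF testing

# Simulate ordinal responses to 10 items from a unidimensional graded model
set.seed(2022)
a <- matrix(rlnorm(10, 0.2, 0.3))             # item discriminations
d <- matrix(runif(10 * 3, -2, 2), 10)
d <- t(apply(d, 1, sort, decreasing = TRUE))  # ordered category thresholds
items <- as.data.frame(simdata(a, d, N = 600, itemtype = "graded"))

# One-factor CFA treating items as ordered categorical
cfa_model <- paste("cognition =~", paste(colnames(items), collapse = " + "))
cfa_fit <- cfa(cfa_model, data = items, ordered = colnames(items))
fitMeasures(cfa_fit, c("cfi", "tli", "rmsea"))

# Unidimensional graded response model (IRT)
grm_fit <- mirt(items, model = 1, itemtype = "graded")
coef(grm_fit, simplify = TRUE)  # discrimination and threshold estimates

# DIF across two groups (e.g., hypothetical child vs. parent report)
group <- rep(c("child", "parent"), each = 300)
mg_fit <- multipleGroup(items, model = 1, group = group, itemtype = "graded")
DIF(mg_fit, which.par = c("a1", "d1"), items2test = 1:10)
```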
RESULTS: A parent and/or child participated in 1,441 families (66% response rate at the family level). The assessed psychometric properties were satisfactory, and the predominantly unidimensional factor structure of the PedsPCF allowed for IRT modeling using the graded response model. One item showed meaningful DIF. Ten items were selected for the short form.
CONCLUSIONS: In this first study of the PedsPCF outside the United States, the psychometric properties of the translated PedsPCF were satisfactory and allowed for IRT modeling. Based on the IRT analyses and the content of the items, we proposed a new 10-item short form. Further research should determine how PedsPCF outcomes relate to neurocognitive measures and whether the instrument can facilitate neuropsychological screening in clinical practice.