STUDY DESIGN AND SETTING: The Online Randomized Controlled Trials of Health Information Database was used as the sampling frame to identify a subset of self-recruited online trials of self-management interventions. We cataloged what these online trials assessed, appraised study quality, extracted information on how the trials were run, and assessed the potential for bias. We examined how public and patient participation was integrated into online trial design and how this was reported. We recorded patterns of use for registration, reporting, settings, informed consent, public involvement, supplementary materials, and dissemination planning.
RESULTS: The sample included 41 online trials published from 2002 to 2015. Barriers to replicability and risks of bias in online trials included inadequate reporting of blinding in 28/41 (68%) studies and high attrition rates with incomplete or unreported data in 30/41 (73%) trials; 26/41 (63%) studies were at high risk of selection bias because trial registrations were unreported. The methods of 23/41 (56%) trials contained insufficient information to replicate the trial, and 19/41 (46%) did not report piloting the intervention. Only 2/41 studies were cross-platform compatible. Public involvement was most common in advisory roles (n = 9, 22%) and in the design, usability testing, and piloting of user materials (n = 9, 22%).
CONCLUSION: This study catalogs the state of online trials of self-management in the early 21st century and provides insights for developing online trials from the protocol planning stage onward. Reporting of trials was generally poor; in addition to recommending that authors report their trials in accordance with the CONSORT guidelines, we make recommendations for researchers writing protocols and for those reporting on and evaluating online trials. The research highlights considerable room for improvement in trial registration, reporting of methods, data management plans, and public and patient involvement in self-recruited online trials of self-management interventions.
STUDY DESIGN AND SETTING: Forty-seven rehabilitation clinicians from 5 professions and 7 teams (Belgium, Italy, Malaysia, Pakistan, Poland, Puerto Rico, and the USA) reviewed 76 RCTs published in leading rehabilitation journals, assessing 14 domains chosen through consensus and piloting.
RESULTS: The response rate was 99%. Inter-rater agreement was moderate to good. All clinicians unanimously considered 12 (16%) RCTs clinically replicable, and none was unanimously considered not replicable. At least one item of "absent" information was found by all participants in 60 RCTs (79%) and by at least 85% of participants in the remaining 16 (21%). The least well described information (8-19% "perfect" information) comprised two provider items (skills, experience) and two delivery items (cautions, relationships). The best described (50-79% "perfect") were the classic methodological items included in CONSORT (in descending order: participants, materials, procedures, setting, and intervention).
CONCLUSION: Clinical replicability must be considered in RCT reporting, particularly for complex interventions. Classic methodological checklists such as CONSORT are not enough, and even the Template for Intervention Description and Replication (TIDieR) does not cover all the requirements. This study supports the need for field-specific checklists.
STUDY DESIGN AND SETTING: This guidance was developed through a series of methodological meetings and a review of internationally renowned guidance, such as the Cochrane Handbook and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for equity-focused systematic reviews (PRISMA-Equity) guideline. We identified exemplar rapid reviews by searching COVID-19 databases and by requesting examples from our team.
RESULTS: We proposed the following key steps: (1) involve relevant stakeholders with lived experience in the conduct and design of the review; (2) reflect on equity, inclusion, and privilege in team values and composition; (3) develop a research question to assess health inequities; (4) conduct searches in relevant disciplinary databases; (5) collect data and critically appraise recruitment, retention, and attrition for populations experiencing inequities; (6) analyse evidence on equity; (7) evaluate the applicability of findings to populations experiencing inequities; and (8) adhere to reporting guidelines for communicating review findings. We illustrated these methods through rapid review examples.
CONCLUSION: Implementing this guidance could contribute to improving equity considerations in rapid reviews produced in public health emergencies, and help policymakers better understand the distributional impact of diseases on the population.