STUDY DESIGN AND SETTINGS: The Online Randomized Controlled Trials of Health Information Database was used as the sampling frame to identify a subset of self-recruited online trials of self-management interventions. We cataloged what these online trials were assessing, appraised study quality, extracted information on how the trials were run, and assessed the potential for bias. We examined how public and patient involvement was integrated into online trial design and how this was reported. We recorded patterns of use for registration, reporting, settings, informed consent, public involvement, supplementary materials, and dissemination planning.
RESULTS: The sample included 41 online trials published from 2002 to 2015. Barriers to replicability and risks of bias in these online trials included inadequate reporting of blinding in 28/41 (68%) studies and high attrition rates with incomplete or unreported data in 30/41 (73%) trials; 26/41 (63%) studies were at high risk of selection bias because trial registrations were unreported. The methods of 23/41 (56%) trials contained insufficient information to replicate the trial, and 19/41 (46%) did not report piloting the intervention. Only 2/41 studies were cross-platform compatible. Public involvement was most common in advisory roles (n = 9, 22%) and in the design, usability testing, and piloting of user materials (n = 9, 22%).
CONCLUSION: This study catalogs the state of online trials of self-management in the early 21st century and provides insights for the development of online trials from the protocol planning stage onward. Reporting of trials was generally poor; in addition to recommending that authors report their trials in accordance with CONSORT guidelines, we make recommendations for researchers writing protocols for, reporting on, and evaluating online trials. The research highlights considerable room for improvement in trial registration, reporting of methods, data management plans, and public and patient involvement in self-recruited online trials of self-management interventions.
METHODS: Eleven databases were searched, without date or language restrictions, for systematic reviews of public and patient involvement (PPI) in clinical trial design. This systematic overview of PPI included 27 reviews, from which areas of good and bad practice were identified. The strengths, weaknesses, opportunities, and threats of PPI were explored through meta-narrative analysis.
RESULTS: Twenty-seven reviews met the inclusion criteria, ranging in quality from high (n = 7) to medium (n = 14) and low (n = 6). Reviews were assessed using CERQual, NICE, CASP for qualitative research, and CASP for systematic reviews. Four reviews reported risk of bias. Public involvement roles were primarily in agenda setting, steering committees, ethical review, protocol development, and piloting. PPI was also present in research summaries, follow-up, and dissemination, with less involvement in data collection, analysis, and manuscript authoring. Trialists reported difficulty in finding, retaining, and reimbursing volunteers. Respectful inclusion, recognition of roles, mutual flexibility, advance planning, and sound methods were reported to facilitate public involvement in research. Public involvement was reported to have increased the quantity and quality of patient-relevant priorities and outcomes, as well as enrollment, funding, design, implementation, and dissemination. Challenges identified included a lack of clarity in common language, roles, and research boundaries, while logistical needs included extra time, training, and funding. Researchers reported struggling to report involvement and to avoid tokenism.
CONCLUSIONS: Involving patients and the public in clinical trial design can be beneficial but requires resources, preparation, training, flexibility, and time. Issues to address include reporting deficits regarding risk of bias, study quality, and conflicts of interest. We need to address these tensions and improve dissemination strategies to increase PPI and health literacy.