METHODS: A web-based survey was sent to neonatologists from 16 provinces, representing 59.6% (824.2 million) of the total population of China, in October 2015 and December 2017.
RESULTS: A total of 117 and 219 responses were received in 2015 and 2017, respectively. Compared with 2015, respondents in 2017 were more likely to resuscitate infants <25 weeks of gestation (86% vs. 72%; p < 0.05), but few would resuscitate infants ≤23 weeks of gestation in either epoch (10% vs. 6%). In both epochs, parents were responsible for >50% of the costs of intensive care, but in 2017, significantly fewer clinicians would cease intensive care (75% vs. 88%; p < 0.05) and more would request economic aid (40% vs. 20%; p < 0.05) if parents could not afford to pay. Resource availability (e.g. ventilators) was not an important factor in either the initiation or continuation of intensive care (~60% in both epochs).
CONCLUSION: Cost is an important factor in the initiation and continuation of neonatal intensive care in a developing country like China. Such factors need to be taken into consideration when interpreting outcome data from these regions.
STUDY DESIGN: Prospective observational cohort study.
SETTING & PARTICIPANTS: 552 children and adolescents from 27 countries on maintenance HD followed up prospectively by the International Pediatric HD Network (IPHN) Registry between 2012 and 2017.
PREDICTOR: Type of vascular access: AVF, central venous catheter (CVC), or arteriovenous graft.
OUTCOME: Infectious and noninfectious vascular access complication rates, dialysis performance, biochemical and hematologic parameters, and clinical outcomes.
ANALYTICAL APPROACH: Univariate and multivariable linear mixed models, generalized linear mixed models, and proportional hazards models; cumulative incidence functions.
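The sketch below illustrates two of the approaches named above, a linear mixed model for a repeated dialysis-performance measure and a proportional hazards model for time to an access complication; the variable names and toy data are hypothetical, not the registry's data or code.

```python
# Illustrative sketch only: hypothetical variable names and toy data.
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

# Repeated Kt/V measurements, two per patient (hypothetical values).
long = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "access":     ["CVC"] * 6 + ["AVF"] * 6,
    "kt_v":       [1.2, 1.3, 1.1, 1.2, 1.3, 1.2, 1.6, 1.7, 1.5, 1.6, 1.7, 1.8],
})

# Linear mixed model: Kt/V by access type, with a random intercept per patient
# to account for repeated measurements.
mixed = smf.mixedlm("kt_v ~ access", data=long, groups=long["patient_id"]).fit()
print(mixed.summary())

# Proportional hazards model: time to first access complication, one row per patient.
surv = pd.DataFrame({
    "time_days": [120, 90, 200, 300, 400, 365],
    "event":     [1, 1, 0, 1, 0, 1],   # 1 = complication observed, 0 = censored
    "cvc":       [1, 1, 1, 0, 0, 0],   # 1 = CVC, 0 = AVF
})
cph = CoxPHFitter().fit(surv, duration_col="time_days", event_col="event")
cph.print_summary()
```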
RESULTS: During 314 cumulative patient-years, 628 CVCs, 225 AVFs, and 17 arteriovenous grafts were placed. One-third of the children with an AVF required a temporary CVC until fistula maturation. Vascular access choice was associated with age and with expectations of early transplantation. Compared with AVFs, CVCs were associated with a 3-fold higher living related transplantation rate and a shorter median time to transplantation (14 [IQR, 6-23] vs 20 [IQR, 14-36] months). Higher blood flow rates and Kt/Vurea were achieved with AVFs than with CVCs. Infectious complications were reported only with CVCs (1.3/1,000 catheter-days) and required vascular access replacement in 47% of cases. CVC dysfunction rates were 2.5/1,000 catheter-days compared with 1.2/1,000 fistula-days. CVCs required 82% more revisions and almost 3-fold more vascular access replacements to a different site than AVFs (P < 0.001).
LIMITATIONS: Clinical rather than population-based data.
CONCLUSIONS: CVCs are the predominant vascular access choice in children receiving HD within the IPHN. Age-related anatomical limitations and expected early living related transplantation were associated with CVC use. CVCs were associated with poorer dialysis efficacy, higher complication rates, and more frequent need for vascular access replacement. Such findings call for a re-evaluation of pediatric CVC use and practices.
METHODS: This was a cross-sectional study involving PCP with ≥1 year of working experience in Malaysian primary care settings. An adapted and validated 25-item FH-KAP questionnaire was disseminated during primary care courses. The total score for each domain was calculated by summing the correct responses and converting it to a percentage score. Normality of the distributions was examined, and mean/median percentage scores were compared between the two groups of PCP.
RESULTS: A total of 372 PCP completed the questionnaire. Regarding knowledge, 77.7% correctly defined FH. However, only 8.3% correctly identified coronary artery disease risk in untreated FH. The mean percentage knowledge score was significantly higher in PCP-PG-Qual compared to PCP-noPG-Qual (48.9, SD ± 13.92 vs. 35.2, SD ± 14.13), t(370) = 8.66, p
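The scoring and between-group comparison described above can be illustrated with a short sketch; the item count, group sizes, and raw scores below are entirely hypothetical, and the independent-samples t-test shown is only assumed to match the study's analysis.

```python
# Illustrative sketch only: hypothetical item counts and scores.
import numpy as np
from scipy import stats

N_KNOWLEDGE_ITEMS = 12   # assumed number of knowledge items, for illustration

def percentage_score(correct_responses: int, n_items: int = N_KNOWLEDGE_ITEMS) -> float:
    """Convert a raw count of correct responses into a percentage score."""
    return 100.0 * correct_responses / n_items

pcp_pg   = np.array([percentage_score(c) for c in [6, 7, 5, 8, 6, 7]])  # postgraduate-qualified
pcp_nopg = np.array([percentage_score(c) for c in [4, 5, 3, 5, 4, 4]])  # no postgraduate qualification

t_stat, p_value = stats.ttest_ind(pcp_pg, pcp_nopg)   # independent-samples t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```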
DESIGN: A prospective study.
SETTING: A tertiary hospital in Malaysia.
POPULATION: A cohort of 193 term nulliparous women with intact membranes.
METHODS: Prior to labour induction, cervical fluid was obtained via a vaginal speculum and tested for IGFBP-1, followed by TVUS and finally Bishop score. After each assessment the procedure-related pain was scored from 0 to 10. Cut-off values for Bishop score and cervical length were obtained from the receiver operating characteristic (ROC) curve. Multivariable logistic regression analysis was performed.
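As an illustration of the analytic steps named above (a ROC-derived cut-off followed by multivariable logistic regression), the sketch below uses simulated data; the variable names, coefficients, and cut-offs are hypothetical and are not the study's data or code.

```python
# Illustrative sketch only: simulated data, hypothetical variable names.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 193
cervical_length = rng.normal(30, 6, n)          # mm
bishop = rng.integers(0, 10, n)
igfbp1_pos = rng.integers(0, 2, n)
lin = -0.05 * cervical_length + 0.3 * bishop + 1.2 * igfbp1_pos
vaginal_delivery = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# ROC-derived cut-off for cervical length (maximising Youden's J = sensitivity + specificity - 1).
fpr, tpr, thresholds = roc_curve(vaginal_delivery, -cervical_length)  # shorter cervix -> success
best = np.argmax(tpr - fpr)
print("optimal cervical length cut-off (mm):", -thresholds[best])

# Multivariable logistic regression for vaginal delivery.
X = sm.add_constant(np.column_stack([igfbp1_pos, bishop, cervical_length]))
model = sm.Logit(vaginal_delivery, X).fit(disp=False)
print(np.exp(model.params))   # adjusted odds ratios (first value is the intercept)
```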
MAIN OUTCOMES MEASURES: Vaginal delivery and vaginal delivery within 24 hours of starting induction.
RESULTS: Bedside IGFBP-1 testing was better tolerated than the Bishop score, but less well tolerated than TVUS [median (interquartile range) pain scores: 5 (4-5) versus 6 (5-7) versus 3 (2-3), respectively; P < 0.001]. IGFBP-1 independently predicted vaginal delivery (adjusted odds ratio, AOR 5.5; 95% confidence interval, 95% CI 2.3-12.9) and vaginal delivery within 24 hours of induction (AOR 4.9; 95% CI 2.1-11.6) after controlling for Bishop score (≥4 or ≥5), cervical length (≤29 or ≤27 mm), and other significant characteristics; after adjustment, the Bishop score and TVUS were not predictive of vaginal delivery. For vaginal delivery, IGFBP-1 had 81% sensitivity, 59% specificity, positive and negative predictive values of 82% and 58%, and positive and negative likelihood ratios of 2.0 and 0.3, respectively.
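The reported likelihood ratios follow directly from the reported sensitivity and specificity; a quick worked check (not the study's code) is shown below.

```python
# Worked check of the likelihood ratios implied by the reported sensitivity/specificity.
sensitivity = 0.81
specificity = 0.59

lr_positive = sensitivity / (1 - specificity)     # ≈ 2.0
lr_negative = (1 - sensitivity) / specificity     # ≈ 0.3
print(f"LR+ = {lr_positive:.1f}, LR- = {lr_negative:.1f}")
```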
CONCLUSION: IGFBP-1 better predicted vaginal delivery than the Bishop score or TVUS, and may help guide decision making regarding labour induction in nulliparous women.
TWEETABLE ABSTRACT: IGFBP-1: a stronger independent predictor of labour induction success than Bishop score or cervical sonography.
METHOD: Two categories of participants, medical doctors (n = 11) and final-year medical students (Group 1, n = 5; Group 2, n = 10), took part in four separate focus group discussions. Nielsen's five dimensions of usability (learnability, effectiveness, memorability, errors, and satisfaction) and Pentland's narrative network were adapted as the frameworks for studying the usability of the checklist and its implementation in a real clinical setting, respectively.
RESULTS: Both categories (medical doctors and medical students) of participants found that the TWED checklist was easy to learn and effective in promoting metacognition. For medical student participants, items "T" and "W" were believed to be the two most useful aspects of the checklist, whereas for the doctor participants, it was item "D". Regarding its implementation, item "T" was applied iteratively, items "W" and "E" were applied when the outcomes did not turn out as expected, and item "D" was applied infrequently. The one checkpoint where all four items were applied was after the initial history taking and physical examination had been performed to generate the initial clinical impression.
CONCLUSION: A metacognitive checklist aimed at checking cognitive errors may be a useful tool that can be implemented in a real clinical setting.
METHODS: A total of 88 final-year medical students were assigned to either an educational intervention group or a control group in a non-equivalent-groups, post-test-only design. Participants in the intervention group received a tutorial on the use of a mnemonic checklist aimed at minimizing cognitive errors in clinical decision-making. Two weeks later, participants in both groups were given a script concordance test (SCT) consisting of 10 cases, with 3 items per case, to assess their clinical decisions as additional data were introduced in the case scenarios.
RESULTS: The Mann-Whitney U-test on the total scores from both groups showed no statistically significant difference (U = 792, z = -1.408, p = 0.159). When the first and second halves of the SCT were compared separately, participants in the intervention group performed significantly better than those in the control group in the first half of the test, with median scores of 9.15 (IQR 8.00-10.28) vs. 8.18 (IQR 7.16-9.24), respectively (U = 642.5, z = -2.661, p = 0.008). No significant difference was found in the second half of the test, with median scores of 9.58 (IQR 8.90-10.56) vs. 9.81 (IQR 8.83-11.12) for the intervention and control groups, respectively (U = 897.5, z = -0.524, p = 0.60).
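A minimal sketch of the group comparison reported above is shown below; the scores are invented for illustration and the call assumes a standard two-sided Mann-Whitney U-test.

```python
# Illustrative sketch only: invented SCT scores for two groups.
import numpy as np
from scipy.stats import mannwhitneyu

intervention = np.array([9.2, 8.5, 10.1, 9.0, 8.8, 9.6, 10.3, 8.1])
control      = np.array([8.1, 7.4, 9.0, 8.3, 7.9, 8.6, 9.2, 7.2])

u_stat, p_value = mannwhitneyu(intervention, control, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```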
CONCLUSION: Checklist use in considering differential diagnoses did show some benefit. However, this benefit appears to be offset by the time and effort required to use the checklist. More research is needed to determine whether this benefit can be translated into clinical practice after repeated use.
METHODS: Cross-sectional questionnaire survey conducted among a convenience sample of physicians likely to comprise code team members in their countries (Indonesia, Israel and Mexico). The questionnaire covered respondent demographics and training, personal value judgments and preferences, as well as professional experience regarding CPR and the forgoing of resuscitation.
RESULTS: Of the 675 questionnaires distributed, 617 (91.4%) were completed and returned. Country of practice and level of knowledge about resuscitation were strongly associated with avoiding CPR performance. Mexican physicians were almost twice as likely to forgo CPR as their Israeli and Indonesian/Malaysian counterparts [OR 1.84 (95% CI 1.03, 3.26), p = 0.038]. Mexican responders also placed greater emphasis on personal and patient quality of life (p
EXPERIMENTAL DESIGN: Tumor tissue EGFRm status was determined at screening using the central cobas tissue test or a local tissue test. Baseline circulating tumor (ct)DNA EGFRm status was retrospectively determined with the central cobas plasma test.
RESULTS: Of 994 patients screened, 556 were randomized (289 and 267 on the basis of central and local EGFR test results, respectively) and 438 failed screening. Of those randomized on the basis of local EGFR test results, 217 had central test results available; 211/217 (97%) were retrospectively confirmed EGFRm positive by the central cobas tissue test. Using the central cobas tissue test results as reference, positive percent agreement of the cobas plasma test for Ex19del and L858R detection was 79% [95% confidence interval (CI), 74-84] and 68% (95% CI, 61-75), respectively. Progression-free survival (PFS) superiority with osimertinib over the comparator EGFR-TKI remained consistent irrespective of randomization route (central or local EGFRm-positive tissue test). In both treatment arms, PFS was prolonged in plasma ctDNA EGFRm-negative patients (23.5 and 15.0 months) versus EGFRm-positive patients (15.2 and 9.7 months).
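For illustration, positive percent agreement against the reference tissue test is simply the proportion of reference-positive samples that the plasma test also calls positive; the sketch below uses hypothetical counts and a Wilson interval, not the trial's actual data or CI method.

```python
# Illustrative sketch only: hypothetical counts, Wilson confidence interval.
from statsmodels.stats.proportion import proportion_confint

plasma_positive = 158          # plasma-positive among tissue-positive (hypothetical)
tissue_positive = 200          # reference-positive by central cobas tissue test (hypothetical)

ppa = plasma_positive / tissue_positive
low, high = proportion_confint(plasma_positive, tissue_positive, method="wilson")
print(f"PPA = {ppa:.0%} (95% CI, {low:.0%}-{high:.0%})")
```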
CONCLUSIONS: Our results support the utility of cobas tissue and plasma testing to aid selection of patients with EGFRm advanced NSCLC for first-line osimertinib treatment. Lack of EGFRm detection in plasma was associated with prolonged PFS compared with plasma EGFRm-positive patients, potentially reflecting lower tumor burden.
PATIENTS AND METHODS: Sixty-two patients with AML excluding acute promyelocytic leukemia were retrospectively analyzed. Patients in the earlier cohort (n = 36) were treated on the Medical Research Council (MRC) AML12 protocol, whereas those in the recent cohort (n = 26) were treated on the Malaysia-Singapore AML protocol (MASPORE 2006), which differed in terms of risk group stratification, cumulative anthracycline dose, and timing of hematopoietic stem-cell transplantation for high-risk patients.
RESULTS: Significant improvements in 10-year overall survival and event-free survival were observed in patients treated with the recent MASPORE 2006 protocol compared to the earlier MRC AML12 protocol (overall survival: 88.0% ± 6.5% vs 50.1% ± 8.6%, P = .002; event-free survival: 72.1% ± 9.0% vs 50.1% ± 8.6%, P = .045). In univariate analysis, patients in the recent cohort had a significantly lower intensive care unit admission rate (11.5% vs 47.2%, P = .005) and a numerically lower relapse rate (26.9% vs 50.0%, P = .068) compared to the earlier cohort. Multivariate analysis showed that treatment protocol was the only independent predictive factor for overall survival (hazard ratio = 0.21; 95% confidence interval, 0.06-0.73, P = .014).
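A minimal sketch of the kind of survival comparison reported above (Kaplan-Meier estimates by protocol and a log-rank test) is given below; the follow-up data are invented for illustration and this is not the study's analysis code.

```python
# Illustrative sketch only: invented follow-up data.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "years":    [10, 2.5, 8, 1.2, 10, 4.0, 9.5, 3.1],
    "died":     [0, 1, 0, 1, 0, 1, 0, 1],
    "protocol": ["MASPORE", "MRC", "MASPORE", "MRC", "MASPORE", "MRC", "MASPORE", "MRC"],
})

# Kaplan-Meier estimate of 10-year overall survival per protocol.
kmf = KaplanMeierFitter()
for name, grp in df.groupby("protocol"):
    kmf.fit(grp["years"], grp["died"], label=name)
    print(name, "10-year OS:", round(kmf.survival_function_at_times(10).iloc[0], 2))

# Log-rank test comparing the two protocols.
res = logrank_test(
    df.loc[df.protocol == "MASPORE", "years"], df.loc[df.protocol == "MRC", "years"],
    event_observed_A=df.loc[df.protocol == "MASPORE", "died"],
    event_observed_B=df.loc[df.protocol == "MRC", "died"],
)
print("log-rank p =", round(res.p_value, 3))
```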
CONCLUSION: Outcomes of pediatric AML patients have improved over time. The more recent MASPORE 2006 protocol led to significant improvement in long-term survival rates and reduction in intensive care unit admission rate.
MATERIALS AND METHODS: Snakebite patients were prospectively recruited between 2017 and 2019. All patients were examined with POCUS to locate edema and to directly visualize and measure arterial flow in the compressed artery. The presence of DRAF in the compressed artery suggests ACS development because, when the compartment space is restricted, increased retrograde arterial flow is observed in the artery.
RESULTS: Twenty-seven snakebite patients were analyzed. Seventeen patients (63%) were bitten by Crotalinae snakes, seven (26%) by Colubridae, one (4%) by Elapidae, and two (7%) had unidentified snakebites. All patients bitten by Crotalinae received antivenom, had subcutaneous edema, and lacked DRAF on serial POCUS examinations.
DISCUSSION: POCUS facilitates clinical decisions for snakebite envenomation. We also highlighted that the anatomic site of the snakebite is an important factor affecting the prognosis of the wounds. There were limitations of this study, including a small number of patients and no comparison with the generally accepted invasive evaluation for ACS.
CONCLUSIONS: We are unable to state that POCUS is a valid surrogate measurement of ACS from this study but see this as a starting point to develop further research in this area. Further study will be needed to better define the utility of POCUS in patients envenomated by snakes throughout the world.
OBJECTIVE: To develop a decision-making program and analyze multi-institutional outcomes of RAC-IVCT versus RAT-IVCT.
DESIGN, SETTING, AND PARTICIPANTS: Ninety patients with renal cell carcinoma (RCC) with level II IVCT were included from eight Chinese urological centers, and underwent RAC-IVCT (30 patients) or RAT-IVCT (60 patients) from June 2013 to January 2019.
SURGICAL PROCEDURE: The surgical strategy was based on IVCT imaging characteristics. RAT-IVCT was performed with standardized cavotomy, thrombectomy, and IVC reconstruction. RAC-IVCT was mainly performed in patients with extensive IVC wall invasion when the collateral blood vessels were well-established. For right-sided RCC, the IVC from the infrarenal vein to the infrahepatic veins was stapled. For left-sided RCC, the IVC from the suprarenal vein to the infrahepatic veins was removed and caudal IVC reconstruction was performed to ensure the right renal vein returned through the IVC collaterals.
MEASUREMENTS: Clinicopathological, operative, and survival outcomes were collected and analyzed.
RESULTS AND LIMITATIONS: All procedures were successfully performed without open conversion. Median operation time (268 vs 190 min) and estimated blood loss (1500 vs 400 ml) were significantly greater for RAC-IVCT than for RAT-IVCT (both p < 0.001). IVC invasion was a risk factor for worse progression-free and overall survival at midterm follow-up. Large-volume studies with long-term follow-up are needed.
CONCLUSIONS: RAC-IVCT or RAT-IVCT represents an alternative minimally invasive approach for selected RCC patients with level II IVCT. Selection of RAC-IVCT or RAT-IVCT is mainly based on preoperative IVCT imaging characteristics, including the presence of IVC wall invasion, the affected kidney, and establishment of the collateral circulation.
PATIENT SUMMARY: In this study we found that robotic surgeries for level II inferior vena cava thrombus were feasible and safe. Preoperative imaging played an important role in establishing an appropriate surgical plan.
METHODS: 28 experts from 11 countries reviewed the evidence and modified the statements using the Delphi method, with consensus level predefined as ≥80% of agreement on each statement. The Grading of Recommendation Assessment, Development and Evaluation (GRADE) approach was followed.
RESULTS: Consensus was reached on 26 statements. At an individual level, eradication of H. pylori reduces the risk of GC in asymptomatic subjects and is recommended unless there are competing considerations. In cohorts of vulnerable subjects (eg, first-degree relatives of patients with GC), a screen-and-treat strategy is also beneficial. H. pylori eradication in patients with early GC after curative endoscopic resection reduces the risk of metachronous cancer and calls for a re-examination of the hypothesis of 'the point of no return'. At the general population level, the strategy of screen-and-treat for H. pylori infection is most cost-effective in young adults in regions with a high incidence of GC and is recommended preferably before the development of atrophic gastritis and intestinal metaplasia. However, such a strategy may still be effective in people aged over 50, and may be integrated into national healthcare priorities, such as colorectal cancer screening programmes, to optimise resources. Reliable, locally effective regimens based on the principles of antibiotic stewardship are recommended. Subjects at higher risk of GC, such as those with advanced gastric atrophy or intestinal metaplasia, should receive surveillance endoscopy after eradication of H. pylori.
CONCLUSION: Evidence supports the proposal that eradication therapy should be offered to all individuals infected with H. pylori. Vulnerable subjects should be tested, and treated if the test is positive. Mass screening and eradication of H. pylori should be considered in populations at higher risk of GC.
AIM: To understand whether critical care nurses' critical thinking disposition affects their clinical decision-making skills.
METHOD: This was a cross-sectional study in which Malay and English translations of the Short Form-Critical Thinking Disposition Inventory-Chinese Version (SF-CTDI-CV) and the Clinical Decision-making Nursing Scale (CDMNS) were used to collect data from 113 nurses working in seven critical care units of a tertiary hospital on the east coast of Malaysia. Participants were recruited through purposive sampling in October 2015.
RESULTS: Critical care nurses perceived both their critical thinking disposition and decision-making skills to be high, with a total score of 71.5 and a mean of 48.55 for the SF-CTDI-CV, and a total score of 161 and a mean of 119.77 for the CDMNS. One-way ANOVA results showed that while age, gender, ethnicity, education level and working experience significantly impacted critical thinking (p<0.05), only age and working experience significantly impacted clinical decision-making (p<0.05). Pearson's correlation analysis showed a strong, positive relationship between critical care nurses' critical thinking and clinical decision-making (r=0.637, p=0.001).
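The tests reported above can be sketched as follows; the group labels and scores are hypothetical and are not the study data.

```python
# Illustrative sketch only: hypothetical groups and scores.
import numpy as np
from scipy.stats import f_oneway, pearsonr

# Critical thinking scores grouped by, e.g., education level (hypothetical groups).
diploma = np.array([65, 70, 68, 72, 66])
degree  = np.array([74, 78, 73, 80, 76])
masters = np.array([82, 85, 79, 84, 83])
f_stat, p_anova = f_oneway(diploma, degree, masters)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Correlation between critical thinking and clinical decision-making scores.
critical_thinking = np.array([65, 70, 74, 78, 82, 85, 72, 80, 79, 84])
decision_making   = np.array([100, 110, 115, 118, 125, 130, 112, 122, 120, 128])
r, p_corr = pearsonr(critical_thinking, decision_making)
print(f"Pearson r = {r:.3f}, p = {p_corr:.4f}")
```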
CONCLUSION: While this small-scale study has shown a relationship exists between critical care nurses' critical thinking disposition and clinical decision-making in one hospital, further investigation using the same measurement tools is needed into this relationship in diverse clinical contexts and with greater numbers of participants. Critical care nurses' perceived high level of critical thinking and decision-making also needs further investigation.
METHODS: Anonymised data consisting of 44 independent predictor variables from 355 adults diagnosed with COVID-19 at a UK hospital were manually extracted from electronic patient records for retrospective, case-control analysis. Primary outcomes included inpatient mortality, required ventilatory support, and duration of inpatient treatment. Pulmonary embolism sequela was the only secondary outcome. After balancing the data, key variables were feature selected for each outcome using random forests. Predictive models were then learned and constructed using Bayesian networks.
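A minimal sketch of the feature-selection step is shown below, assuming scikit-learn's random forest importances as one reasonable implementation; the column names and simulated data are hypothetical, and in the study the selected features were then used to learn Bayesian networks (not shown here).

```python
# Illustrative sketch only: simulated predictors and outcome, hypothetical column names.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 355
X = pd.DataFrame({
    "age":     rng.normal(65, 15, n),
    "crp":     rng.normal(80, 40, n),
    "d_dimer": rng.normal(1.5, 1.0, n),
    "spo2":    rng.normal(93, 4, n),
})
# Simulated binary outcome (e.g., inpatient mortality) driven by a few predictors.
y = (0.04 * X["age"] + 0.01 * X["crp"] - 0.3 * X["spo2"] + rng.normal(0, 2, n)) > -24

# Rank candidate predictors by random forest feature importance.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances)
top_features = importances.head(3).index.tolist()   # features carried into the next modelling step
print("selected features:", top_features)
```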
RESULTS: The proposed probabilistic models were able to predict the probability of each of these outcomes using the feature-selected risk factors. Overall, our findings demonstrate reliable, multivariable, quantitative predictive models for four outcomes that utilise readily available clinical information for adult COVID-19 inpatients. Further research is required to externally validate our models and demonstrate their utility as risk stratification and clinical decision-making tools.
OBJECTIVE: This paper aimed to describe the development process of the COVID-19 Symptom Monitoring System (CoSMoS), which consists of a self-monitoring, algorithm-based Telegram bot and a teleconsultation system. We describe all the essential steps from the clinical perspective and our technical approach in designing, developing, and integrating the system into clinical practice during the COVID-19 pandemic as well as lessons learned from this development process.
METHODS: CoSMoS was developed in three phases: (1) requirement formation to identify clinical problems and to draft the clinical algorithm, (2) development testing iteration using the agile software development method, and (3) integration into clinical practice to design an effective clinical workflow using repeated simulations and role-playing.
RESULTS: We completed the development of CoSMoS in 19 days. In Phase 1 (ie, requirement formation), we identified three main functions: a daily automated reminder system for patients to self-check their symptoms, a safe patient risk assessment to guide patients in clinical decision making, and an active telemonitoring system with real-time phone consultations. The system architecture of CoSMoS involved five components: Telegram instant messaging, a clinician dashboard, system administration (ie, back end), a database, and development and operations infrastructure. The integration of CoSMoS into clinical practice involved the consideration of COVID-19 infectivity and patient safety.
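For illustration, the rule-based risk assessment that such a bot applies to a daily symptom check-in might look like the sketch below; the symptoms, thresholds, and risk categories are hypothetical and are not the actual CoSMoS clinical algorithm.

```python
# Illustrative sketch only: hypothetical symptoms, thresholds, and risk categories.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SymptomCheckIn:
    breathless_at_rest: bool
    chest_pain: bool
    fever_days: int
    spo2: Optional[float] = None   # home pulse-oximetry reading, if available

def triage(check_in: SymptomCheckIn) -> str:
    """Map a daily symptom check-in to a risk category that drives clinician follow-up."""
    if check_in.breathless_at_rest or check_in.chest_pain or (
        check_in.spo2 is not None and check_in.spo2 < 95
    ):
        return "red: immediate teleconsultation / advise emergency review"
    if check_in.fever_days >= 3:
        return "amber: clinician phone call the same day"
    return "green: continue daily self-monitoring reminders"

print(triage(SymptomCheckIn(breathless_at_rest=False, chest_pain=False, fever_days=4)))
```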
CONCLUSIONS: This study demonstrated that developing a COVID-19 symptom monitoring system within a short time during a pandemic is feasible using the agile development method. Time factors and communication between the technical and clinical teams were the main challenges in the development process. The development process and lessons learned from this study can guide the future development of digital monitoring systems during the next pandemic, especially in developing countries.
BACKGROUND: The relationship between critical care nurses' decision-making and leadership styles in hospitals has been widely studied, but the influence of cognitive bias on decision-making and leadership styles in critical care environments remains poorly understood, particularly in Jordan.
DESIGN: Two-phase mixed methods sequential explanatory design and grounded theory.
SETTING: Critical care unit, Prince Hamza Hospital, Jordan. Participant sampling: convenience sampling in Phase 1 (quantitative, n = 96) and purposive sampling in Phase 2 (qualitative, n = 20).
METHODS: Pilot-tested quantitative survey of 96 critical care nurses in 2012. Qualitative in-depth interviews, informed by the quantitative results, with 20 critical care nurses in 2013. Descriptive statistics and simple linear regression for quantitative data analysis; thematic (constant comparative) analysis for qualitative data.
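A minimal sketch of the simple linear regression step, with invented scores and hypothetical variable names, is shown below; it is not the study's analysis code.

```python
# Illustrative sketch only: invented scores, hypothetical variable names.
import numpy as np
import statsmodels.api as sm

cognitive_bias = np.array([10, 14, 18, 22, 25, 30, 34, 38])   # hypothetical bias scores
task_oriented  = np.array([20, 24, 27, 33, 35, 41, 44, 49])   # hypothetical leadership-style scores

# Simple linear regression of leadership-style score on cognitive bias score.
X = sm.add_constant(cognitive_bias)
fit = sm.OLS(task_oriented, X).fit()
print(fit.params)       # intercept and slope
print(fit.rsquared)
```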
RESULTS: Quantitative - correlations found between rationality and cognitive bias, rationality and task-oriented leadership styles, cognitive bias and democratic communication styles and cognitive bias and task-oriented leadership styles. Qualitative - 'being competent', 'organizational structures', 'feeling self-confident' and 'being supported' in the work environment identified as key factors influencing critical care nurses' cognitive bias in decision-making and leadership styles. Two-way impact (strengthening and weakening) of cognitive bias in decision-making and leadership styles on critical care nurses' practice performance.
CONCLUSION: There is a need to heighten critical care nurses' consciousness of cognitive bias in decision-making and leadership styles and its impact and to develop organization-level strategies to increase non-biased decision-making.