METHODS: This retrospective study included all KTRs ≥18 years of age at our center from January 1, 2006, to December 31, 2015, who were prescribed diltiazem as a tacrolimus-sparing agent. Blood tacrolimus trough levels (TacC0) and other relevant clinical data for the 70 eligible KTRs were reviewed.
RESULTS: A 1-mg dose of tacrolimus resulted in a median TacC0 of 0.83 ± 0.52 ng/mL. With the introduction of diltiazem at 90 mg/d, there was a significant increase in TacC0 to 1.39 ± 1.31 ng/mL/mg tacrolimus (P < .01). A further 90-mg increase in diltiazem, to 180 mg/d, raised TacC0 further to 1.66 ± 2.58 ng/mL/mg tacrolimus (P = .01). Beyond this, despite progressive 90-mg/d increments of diltiazem to 270 mg/d and 360 mg/d, there was no further increase in TacC0 (1.44 ± 1.15 ng/mL/mg tacrolimus and 1.24 ± 0.94 ng/mL/mg tacrolimus, respectively; P < .01). Addition of 180 mg/d of diltiazem reduced the required tacrolimus dose to 4 mg/d, yielding cost savings of USD 2045.92 per patient per year at our center. Adverse effects reported within 3 months of diltiazem introduction were bradycardia (1.4%) and postural hypotension (1.4%), both of which resolved after diltiazem dose reduction.
CONCLUSION: Coadministration of tacrolimus and diltiazem in KTRs appeared to be safe; dose-normalized TacC0 increased with diltiazem up to a total dose of 180 mg/d, beyond which it began to decrease. This approach will result in marked savings in immunosuppression costs among KTRs in Malaysia.
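The dose-normalized exposure reported above (TacC0 in ng/mL per mg of tacrolimus) is simply the ratio of trough concentration to daily dose; a minimal Python sketch on hypothetical values (not the study data):

```python
def dose_normalized_trough(trough_ng_ml, tac_dose_mg):
    """Tacrolimus trough concentration normalized per mg of daily dose (ng/mL/mg)."""
    if tac_dose_mg <= 0:
        raise ValueError("tacrolimus dose must be positive")
    return trough_ng_ml / tac_dose_mg

# Hypothetical paired records: (trough in ng/mL, daily tacrolimus dose in mg)
records = [(5.0, 6.0), (6.5, 5.0), (7.2, 4.0)]
normalized = [dose_normalized_trough(c, d) for c, d in records]
```

Comparing this ratio across diltiazem dose levels, as the study does, controls for the tacrolimus dose reductions made along the way.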
METHODS: This is a retrospective analysis of United Network for Organ Sharing registry data on LT recipients from January 1, 2000, to December 31, 2021. Outcomes were analyzed using Cox proportional hazards models for all-cause mortality and graft failure. Confounding was reduced by coarsened exact matching causal inference analysis.
RESULTS: Of 66 960 donors identified, 7178 (10.7%) had diabetes. Trend analysis revealed a longitudinal increase in the prevalence of donor diabetes ( P
Methods: We collected 3794 corneal images from 542 eyes of 280 subjects and developed seven deep learning models based on anterior and posterior eccentricity, anterior and posterior elevation, anterior and posterior sagittal curvature, and corneal thickness maps to extract deep corneal features. An independent subset of 1050 images collected from 150 eyes of 85 subjects at a separate center was used to validate the models. We developed a hybrid deep learning model to detect KCN. We visualized deep features of corneal parameters to assess the quality of learning subjectively, and computed the area under the receiver operating characteristic curve (AUC), confusion matrices, accuracy, and F1 score to evaluate the models objectively.
Results: In the development dataset, 204 eyes were normal, 123 eyes had suspected KCN, and 215 eyes had KCN. In the independent validation dataset, 50 eyes were normal, 50 eyes had suspected KCN, and 50 eyes had KCN. Images were annotated by three corneal specialists. The AUCs of the models for the two-class and three-class problems on the development set were 0.99 and 0.93, respectively.
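The objective metrics named in the Methods (AUC, confusion matrix, accuracy, F1 score) can be computed directly from labels and model outputs; a minimal pure-Python sketch for the two-class case, on toy labels rather than the study data:

```python
def auc_binary(labels, scores):
    """AUC as the fraction of (positive, negative) pairs in which the positive
    example receives the higher score (assumes no tied scores)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1 for p in pos for n in neg if p > n)
    return wins / (len(pos) * len(neg))

def binary_metrics(labels, preds):
    """Confusion-matrix counts, accuracy, and F1 for hard binary predictions."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    tn = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn,
            "accuracy": (tp + tn) / len(labels), "f1": f1}
```

The three-class problem (normal vs. suspected KCN vs. KCN) would extend this with per-class one-vs-rest AUCs and a 3×3 confusion matrix.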
Conclusions: The hybrid deep learning model achieved high accuracy in identifying KCN based on corneal maps and provided a time-efficient framework with low computational complexity.
Translational Relevance: Deep learning can detect KCN from non-invasive corneal images with high accuracy, suggesting potential application in research and clinical practice to identify KCN.
METHODS: VKA control was assessed retrospectively by time in therapeutic range (TTR; Rosendaal method) and percentage of INR values in range (PINRR) in 991 White, Afro-Caribbean, and South-Asian AF patients [overall mean (SD) age 71.6 (9.4) years; 55% male; mean (SD) CHA2DS2-VASc score 3.4 (1.6)] over a median (IQR) follow-up of 5.2 (3.2-7.0) years.
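As a rough illustration of the two control metrics: the Rosendaal method linearly interpolates a daily INR between consecutive measurements, whereas PINRR simply counts measured INRs in range. A minimal Python sketch, assuming a target range of 2.0-3.0 and whole-day measurement intervals:

```python
def ttr_rosendaal(days, inrs, low=2.0, high=3.0):
    """Time in therapeutic range by the Rosendaal linear-interpolation method.

    days: ascending measurement days; inrs: INR values on those days.
    INR is assumed to change linearly between consecutive measurements;
    returns the fraction of interpolated daily INR values within [low, high].
    """
    in_range = total = 0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        for step in range(span):
            daily_inr = i0 + (i1 - i0) * step / span  # interpolated daily INR
            total += 1
            in_range += low <= daily_inr <= high
    return in_range / total if total else 0.0

def pinrr(inrs, low=2.0, high=3.0):
    """Percentage of measured INR values in range (no interpolation)."""
    return sum(low <= i <= high for i in inrs) / len(inrs)
```

Because interpolation credits partial time between readings, TTR and PINRR typically differ for the same patient, as the results below show.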
RESULTS: Compared with Whites, mean TTR and PINRR were significantly lower in South-Asians (TTR 67.9% vs. 60.5%; PINRR 58.8% vs. 51.6%) and in Afro-Caribbeans (TTR 67.9% vs. 61.3%; PINRR 58.8% vs. 53.1%), despite similar INR monitoring intensity. Logistic regression identified non-White ethnicity [OR 2.62, 95% confidence interval (CI) 1.67-4.10, and OR 3.47, 95% CI 1.44-8.34] and anaemia (OR 1.65, 95% CI 1.00-2.70, and OR 6.27, 95% CI 1.89-20.94) as independent predictors of both TTR and PINRR.
METHODS: We conducted an international, retrospective cohort study using 2019 and 2020 data from 11 national clinical quality registries covering 15 countries. Non-COVID-19 admissions in 2020 were compared with all admissions in 2019, prepandemic. The primary outcome was intensive care unit (ICU) mortality. Secondary outcomes included in-hospital mortality and standardised mortality ratio (SMR). Analyses were stratified by the country income level(s) of each registry.
FINDINGS: Among 1 642 632 non-COVID-19 admissions, ICU mortality increased from 9.3% in 2019 to 10.4% in 2020 (OR 1.15, 95% CI 1.14 to 1.17, p<0.001). Mortality increased in middle-income countries (OR 1.25, 95% CI 1.23 to 1.26), while it decreased in high-income countries (OR 0.96, 95% CI 0.94 to 0.98). Hospital mortality and SMR trends for each registry were consistent with the observed ICU mortality findings. The burden of COVID-19 was highly variable, with COVID-19 ICU patient-days per bed ranging from 0.4 to 81.6 across registries. This alone did not explain the observed changes in non-COVID-19 mortality.
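For readers unfamiliar with the secondary outcome, the standardised mortality ratio compares observed deaths with the number expected from a severity-of-illness model; a minimal Python sketch on a hypothetical cohort (not the registry data):

```python
def standardised_mortality_ratio(observed_deaths, predicted_risks):
    """SMR = observed deaths / expected deaths, where expected deaths are the
    sum of per-patient predicted mortality probabilities (e.g. from a
    severity-of-illness score). SMR > 1 means more deaths than expected."""
    expected_deaths = sum(predicted_risks)
    return observed_deaths / expected_deaths

# Hypothetical five-patient cohort: 3 observed deaths, model-predicted risks
smr = standardised_mortality_ratio(3, [0.1, 0.5, 0.8, 0.3, 0.9])
```

Because expected deaths adjust for case mix, SMR trends can be compared across registries with different patient populations.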
INTERPRETATION: ICU mortality among non-COVID-19 patients increased during the pandemic, driven by increased mortality in middle-income countries, while mortality decreased in high-income countries. The causes of this inequity are likely multifactorial, but healthcare spending, pandemic policy responses, and ICU strain may play significant roles.
METHODS: A retrospective audit of heart transplant recipients (n = 87) treated with tacrolimus was performed. Relevant data were collected from the time of transplant to discharge. The concordance of tacrolimus dosing and monitoring according to hospital guidelines was assessed. The observed and software-predicted tacrolimus concentrations (n = 931) were compared for the first 3 weeks of oral immediate-release tacrolimus (Prograf) therapy, and the predictive performance (bias and imprecision) of the software was evaluated.
RESULTS: The majority (96%) of initial oral tacrolimus doses were guideline concordant. Most initial intravenous doses (93%) were lower than the guideline recommendations. Overall, 36% of initial tacrolimus doses were administered to transplant recipients with an estimated glomerular filtration rate of <60 mL/min/1.73 m², despite recommendations to delay the commencement of therapy. Of the tacrolimus concentrations collected during oral therapy (n = 1498), 25% were trough concentrations obtained at steady state. The software displayed acceptable predictions of tacrolimus concentration from day 12 of therapy (bias: -6%; 95% confidence interval, -11.8 to 2.5; imprecision: 16%; 95% confidence interval, 8.7-24.3).
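Bias and imprecision for dose-forecasting software are commonly defined as the median and median absolute percentage prediction error, respectively; a minimal Python sketch under that assumption (the paper's exact definitions may differ), on toy values rather than the study concentrations:

```python
import statistics

def percentage_prediction_errors(observed, predicted):
    """Percentage prediction errors: 100 * (predicted - observed) / observed."""
    return [100.0 * (p - o) / o for o, p in zip(observed, predicted)]

def bias_and_imprecision(observed, predicted):
    """Bias as the median percentage prediction error and imprecision as the
    median absolute percentage prediction error (a common convention for
    evaluating Bayesian forecasting software)."""
    errors = percentage_prediction_errors(observed, predicted)
    bias = statistics.median(errors)
    imprecision = statistics.median(abs(e) for e in errors)
    return bias, imprecision
```

Under these definitions, a bias near zero indicates no systematic over- or under-prediction, while imprecision captures the typical magnitude of error regardless of direction.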
CONCLUSIONS: Tacrolimus dosing and monitoring were discordant with the guidelines. The Bayesian forecasting software was suitable for guiding tacrolimus dosing after 11 days of therapy in heart transplant recipients. Understanding the factors contributing to the variability in tacrolimus pharmacokinetics immediately after transplant may help improve software predictions.