METHODS: A cross-sectional study was conducted between January and December 2012. A total of 350 adult patients in a teaching hospital were screened for risk of malnutrition using 3-Minute Nutrition Screening (3-MinNS) and Subjective Global Assessment (SGA). To assess interrater reliability, each patient was screened for risk of malnutrition with 3-MinNS by 2 different nurses on 2 different occasions within 24 hours after admission. To assess the validity of 3-MinNS, the level of risk of malnutrition identified by the nurses using 3-MinNS was compared with the risk of malnutrition assessed by a dietitian using SGA within 48 hours after the patients' enrolment into the study. Sensitivity, specificity, and predictive values for detecting patients at risk of malnutrition were calculated. Interrater reliability was determined using κ statistics.
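As an illustration of the analyses described above, the sketch below computes sensitivity, specificity, predictive values, and Cohen's κ from 2×2 tables. All counts are hypothetical, chosen only to be of the same order as the reported results; they are not the study data and need not reproduce the published κ exactly.

```python
# Minimal sketch of the validity and reliability analyses.
# All counts are hypothetical and merely of the same order as the results.
tp, fp, fn, tn = 80, 10, 45, 215          # 3-MinNS vs. SGA (reference standard)

sensitivity = tp / (tp + fn)              # at-risk patients correctly flagged
specificity = tn / (tn + fp)              # not-at-risk patients correctly cleared
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value

# Cohen's kappa for interrater reliability between the two nurses,
# from an illustrative 2x2 agreement table (nurse 1 x nurse 2).
a, b, c, d = 110, 10, 11, 219             # both-pos, pos/neg, neg/pos, both-neg
n = a + b + c + d
po = (a + d) / n                                         # observed agreement
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2    # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, kappa {kappa:.2f}")
```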
RESULTS: Using SGA, the estimated prevalence of moderate to severe malnutrition was 36.3% (127/350). There was 94% proportional agreement between 2 nurses using 3-MinNS, and interrater reliability was substantial (κ = 0.79, P < .001). The analysis showed that 3-MinNS had moderate sensitivity (61.4%-68.5%) but high specificity (95.1%).
CONCLUSIONS: The 3-MinNS is a reliable and valid screening tool for healthcare professionals to identify newly admitted medical and surgical patients at risk of malnutrition.
METHODS: We performed a cross-sectional study of kidney transplant recipients (KTRs) with a functioning renal allograft at least 3 months post-transplant. Dietary protein, salt, and dietary acid load were estimated from 24-hour urine collections. Demographic characteristics, concomitant medications, medical history, and laboratory results were obtained from electronic medical records.
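The abstract does not state which formulas were applied to the 24-hour urine collections. The sketch below uses two commonly cited estimates, the Maroni equation for protein intake and the standard sodium-to-salt conversion, purely as an illustration; the study's actual methods may differ, and all input values are placeholders.

```python
# Hedged sketch: common formulas for estimating intake from 24-h urine.
# The study's exact methods may differ; the values below are illustrative.

def protein_intake_maroni(uun_g_per_day: float, weight_kg: float) -> float:
    """Maroni formula: estimated protein intake (g/day) from 24-h urinary
    urea nitrogen (UUN, g/day) plus non-urea nitrogen (~0.031 g N/kg/day)."""
    return 6.25 * (uun_g_per_day + 0.031 * weight_kg)

def salt_intake(urinary_na_mmol_per_day: float) -> float:
    """Estimated salt (NaCl) intake in g/day; 1 g NaCl contains ~17.1 mmol Na."""
    return urinary_na_mmol_per_day / 17.1

print(protein_intake_maroni(9.0, 70.0))  # ~69.8 g/day
print(salt_intake(150.0))                # ~8.8 g/day
```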
RESULTS: A total of 204 KTRs were recruited, with a median age of 48 years (interquartile range [IQR], 18 years) and a male-to-female ratio of 61:39. A total of 79.9% (n = 163) were living-related kidney transplant recipients. The median time after transplant was 71 months (IQR, 131 months), and the median estimated glomerular filtration rate (eGFR) was 65 mL/min/1.73 m² (IQR, 25 mL/min/1.73 m²). The prevalence rates of proteinuria (defined as ≥0.5 g/d) and metabolic acidosis (defined as at least 2 readings of serum bicarbonate ≤22 mmol/L in the past 6 months) were 17.7% and 6.2%, respectively. High dietary protein intake of >1.2 g/kg ideal body weight (adjusted odds ratio, 3.13; 95% CI, 1.35-7.28; P = .008) was significantly associated with proteinuria. Dietary protein, salt, and acid load did not correlate with chronic metabolic acidosis.
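For readers unfamiliar with how an adjusted odds ratio of this kind is obtained, a hedged logistic regression sketch is shown below. The covariates and data are synthetic placeholders, not the study's actual model or dataset.

```python
# Hedged sketch of an adjusted odds ratio via logistic regression.
# Covariates and data are synthetic placeholders, not the study's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 204
df = pd.DataFrame({
    "proteinuria": rng.integers(0, 2, n),    # 1 = proteinuria >= 0.5 g/d
    "high_protein": rng.integers(0, 2, n),   # 1 = intake > 1.2 g/kg IBW
    "age": rng.normal(48, 12, n),
    "egfr": rng.normal(65, 18, n),
})

fit = smf.logit("proteinuria ~ high_protein + age + egfr", data=df).fit(disp=0)
adj_or = np.exp(fit.params["high_protein"])            # adjusted odds ratio
ci_low, ci_high = np.exp(fit.conf_int().loc["high_protein"])
print(f"aOR {adj_or:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```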
CONCLUSIONS: The prevalence rate of proteinuria is consistent with the published literature, but the rate of metabolic acidosis was extremely low in our cohort. High protein intake (>1.2 g/kg ideal body weight) is a risk factor for proteinuria and may have a negative impact on KTR outcomes.
METHODS: A decision tree model was developed based on literature and expert input. An epidemiological projection model was then added to the decision tree to calculate the target population size. The budget impact of adopting the different enteral nutrition (EN) formulas was calculated by multiplying the population size by the costs of the formula and of intensive care unit (ICU) length of stay (LOS). A one-way sensitivity analysis (OWSA) was conducted to examine the effect of each input parameter on the calculated output.
RESULTS: Replacing standard polymeric formula (SPF) with semi-elemental formula (SEF) would lower ICU cost by MYR 1059 (USD 216) per patient. The additional cost of increased LOS due to enteral feeding intolerance (EFI) was MYR 5460 (USD 1114) per patient. If the Ministry of Health (MOH) replaced SPF with SEF for ICU patients at high risk of EFI (an estimated 7981 patients in 2022), an annual net cost reduction of MYR 8.4 million (USD 1.7 million) could potentially be realized in the MOH system. The cost-reduction finding of replacing SPF with SEF remained unchanged despite the input uncertainties assessed via OWSA.
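A back-of-envelope check of the headline figure, together with a toy one-way sensitivity analysis, is sketched below. Only the per-patient net saving (MYR 1059) and the eligible population (7981 patients) come from the abstract; the ±20% OWSA ranges are illustrative assumptions, and the small gap against the reported MYR 8.4 million reflects rounding of the per-patient figure.

```python
# Back-of-envelope check of the reported budget impact, plus a toy OWSA.
# Only the two base-case inputs come from the abstract; ranges are assumed.
net_saving_per_patient = 1059   # MYR: SEF vs SPF, net of EFI-related LOS cost
eligible_patients = 7981        # projected high-EFI-risk ICU patients, 2022

annual_saving = net_saving_per_patient * eligible_patients
print(f"MYR {annual_saving / 1e6:.2f} million")  # ~8.45M; abstract reports 8.4M

# OWSA: vary one input at a time over an assumed +/-20% range, holding the
# other at base case, and record the resulting range of the output.
for name, base, other in [
    ("saving/patient", net_saving_per_patient, eligible_patients),
    ("patients", eligible_patients, net_saving_per_patient),
]:
    lo, hi = 0.8 * base * other, 1.2 * base * other
    print(f"{name}: MYR {lo / 1e6:.2f}M to MYR {hi / 1e6:.2f}M")
```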
CONCLUSION: Early use of SEF in ICU patients with high EFI risk could potentially lower the cost of ICU care for the MOH system in Malaysia.
DESIGN: This was a single-center prospective observational study that compared resting energy expenditure estimated by 15 commonly used predictive equations against resting energy expenditure measured by indirect calorimetry at different phases. The degree of agreement between resting energy expenditure calculated by predictive equations and resting energy expenditure measured by indirect calorimetry was analyzed using the intraclass correlation coefficient and Bland-Altman analyses. Accuracy was defined as a predictive equation value within ±10% of the resting energy expenditure measured by indirect calorimetry. A score ranking method was developed to determine the best predictive equations.
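For readers unfamiliar with these analyses, the sketch below shows a minimal Bland-Altman computation (bias and 95% limits of agreement) and the ±10% accuracy criterion on synthetic data; it is not the study's code, and all values are placeholders.

```python
# Minimal sketch of the agreement/accuracy analyses on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
ree_ic = rng.normal(1800, 450, 100)           # measured (indirect calorimetry)
ree_pe = ree_ic + rng.normal(50, 200, 100)    # a hypothetical predictive equation

# Bland-Altman: bias and 95% limits of agreement
diff = ree_pe - ree_ic
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Accuracy: proportion of predictions within +/-10% of measured REE
accuracy = np.mean(np.abs(diff) / ree_ic <= 0.10)
print(f"bias {bias:.0f} kcal, LoA {loa[0]:.0f} to {loa[1]:.0f}, "
      f"accuracy {accuracy:.0%}")
```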
SETTING: General Intensive Care Unit, University of Malaya Medical Centre.
PATIENTS: Mechanically ventilated critically ill patients.
INTERVENTIONS: None.
MEASUREMENTS AND MAIN RESULTS: Indirect calorimetry was performed during the acute, late, and chronic phases in 305, 180, and 91 ICU patients, respectively. There were significant differences (F = 3.447; p = 0.034) in mean resting energy expenditure measured by indirect calorimetry among the three phases. Pairwise comparison showed that mean resting energy expenditure measured by indirect calorimetry in the late phase (1,878 ± 517 kcal) was significantly higher than in the acute phase (1,765 ± 456 kcal) (p = 0.037). The predictive equation with the best agreement and accuracy was Swinamer (1990) for the acute phase, Brandi (1999) and Swinamer (1990) for the late phase, and Swinamer (1990) for the chronic phase. None of the resting energy expenditure values calculated from predictive equations showed very good agreement or accuracy.
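The abstract does not name the exact test or post-hoc procedure; the sketch below reconstructs the general shape of the comparison (one-way ANOVA with an illustrative pairwise follow-up) on synthetic samples drawn from the reported group sizes, means, and standard deviations. The chronic-phase mean and SD are assumptions, as they are not reported above.

```python
# Sketch of the between-phase comparison on synthetic samples built from
# the reported group sizes and (where available) means/SDs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
acute = rng.normal(1765, 456, 305)
late = rng.normal(1878, 517, 180)
chronic = rng.normal(1800, 500, 91)   # chronic-phase mean/SD assumed

f_stat, p = stats.f_oneway(acute, late, chronic)            # one-way ANOVA
t, p_pair = stats.ttest_ind(acute, late, equal_var=False)   # pairwise check
print(f"F = {f_stat:.3f}, p = {p:.3f}; acute vs late p = {p_pair:.3f}")
```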
CONCLUSIONS: Predictive equations tend to either over- or underestimate resting energy expenditure at different phases. Predictive equations with "dynamic" variables and respiratory data agreed better with resting energy expenditure measured by indirect calorimetry than predictive equations developed for healthy adults or based on "static" variables. Although none of the resting energy expenditure values calculated from predictive equations showed very good agreement, Swinamer (1990) appears to provide relatively good agreement across all three phases and could be used to predict resting energy expenditure when indirect calorimetry is not available.
METHODS: Thirty-six patients with head injury admitted to the neurosurgical ICU of the University Malaya Medical Centre were recruited for this study over a 6-month period from July 2014 to January 2015. Patients were randomized to receive either an immunonutrition (Group A) or a standard (Group B) enteral feed. Biomarker levels were measured on days 1, 5, and 7 of enteral feeding.
RESULTS: Patients in Group A showed a significant reduction in IL-6 at day 5 (p
METHODS: Using indirect calorimetry, resting energy expenditure (REE) was measured during the acute (≤5 days; n = 294) and late (≥6 days; n = 180) phases of intensive care unit admission. Predictive equations (PEs) were developed by multiple linear regression. A multi-fold cross-validation approach was used to validate the PEs. The best PEs were selected based on the highest coefficient of determination (R²), the lowest root mean square error (RMSE), and the lowest standard error of the estimate (SEE). Two PEs developed from the paired data of 168 patients were compared with measured REE using the mean absolute percentage difference.
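A minimal sketch of this workflow, multiple linear regression with k-fold cross-validation scored by R² and RMSE, is shown below on synthetic data. The predictor names follow the final equation reported in the RESULTS; none of the numbers are the study's, and the noise level is an arbitrary assumption.

```python
# Sketch of PE development with multi-fold cross-validation on synthetic
# data; predictors mirror the final reported equation, values are made up.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(3)
n = 294
X = np.column_stack([
    rng.normal(1.65, 0.10, n),    # height (m)
    rng.normal(65, 15, n),        # weight (kg)
    rng.normal(9, 2, n),          # minute ventilation (L/min)
    rng.normal(58, 16, n),        # age (years)
])
y = (891.6 * X[:, 0] + 9.0 * X[:, 1] + 39.7 * X[:, 2] - 5.6 * X[:, 3] - 354
     + rng.normal(0, 330, n))     # arbitrary noise level

model = LinearRegression().fit(X, y)
r2_cv = cross_val_score(model, X, y,
                        cv=KFold(5, shuffle=True, random_state=0),
                        scoring="r2")
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"cross-validated R2 {r2_cv.mean():.2f}, in-sample RMSE {rmse:.0f}")
```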
RESULTS: The mean absolute percentage difference between predicted and measured REE was <20%, which is not clinically significant. Thus, a single PE was developed and validated from the data of the larger sample measured in the acute phase. The best PE for REE (kcal/day) was 891.6(Height) + 9.0(Weight) + 39.7(Minute Ventilation) − 5.6(Age) − 354, with R² = 0.442, RMSE = 348.3, and SEE = 325.6; the mean absolute percentage difference from measured REE was 15.1 ± 14.2% (acute) and 15.0 ± 13.1% (late).
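Assuming height in metres, weight in kilograms, minute ventilation in L/min, and age in years (units the abstract does not state, but which the coefficient magnitudes suggest), the final PE can be applied as follows; the patient values are hypothetical.

```python
# Hypothetical worked example of the final PE. Units are assumed:
# height (m), weight (kg), minute ventilation (L/min), age (years).
def predicted_ree(height_m: float, weight_kg: float,
                  minute_vent_l_min: float, age_yr: float) -> float:
    return (891.6 * height_m + 9.0 * weight_kg
            + 39.7 * minute_vent_l_min - 5.6 * age_yr - 354)

print(predicted_ree(1.70, 70.0, 8.0, 50.0))  # ~1829 kcal/day
```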
CONCLUSIONS: Separate PEs for acute and late phases may not be necessary. Thus, we have developed and validated a PE from acute phase data and demonstrated that it can provide optimal estimates of REE for patients in both acute and late phases.
TRIAL REGISTRATION: ClinicalTrials.gov NCT03319329.