METHODS: A convolutional auto-encoder (CAE) based nonlinear compression structure is implemented to reduce the signal size of arrhythmic beats. Long short-term memory (LSTM) classifiers are employed to automatically recognize arrhythmias from the ECG features deeply coded by the CAE network.
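The abstract does not specify the network configuration; the following is a minimal sketch of the described pipeline, assuming a 1-D convolutional auto-encoder that compresses fixed-length ECG beats and an LSTM classifier operating on the coded representation. The beat length (360 samples), layer widths, and five arrhythmia classes are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (PyTorch): CAE compression of ECG beats + LSTM classification.
# All hyperparameters below are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class CAE(nn.Module):
    """1-D convolutional auto-encoder; the encoder output is the coded beat."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 8, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(8, 16, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        code = self.encoder(x)              # compressed representation
        return self.decoder(code), code

class LSTMClassifier(nn.Module):
    """LSTM over the coded sequence, followed by a linear class head."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, code):
        out, _ = self.lstm(code.permute(0, 2, 1))   # (batch, length, channels)
        return self.head(out[:, -1, :])             # last step -> class logits

beats = torch.randn(4, 1, 360)          # 4 beats of 360 samples (assumed)
recon, code = CAE()(beats)              # code: (4, 8, 90), a 4x reduction
logits = LSTMClassifier()(code)
# PRD (%) measures reconstruction distortion, the metric cited in the results:
prd = 100 * torch.sqrt(((beats - recon) ** 2).sum() / (beats ** 2).sum())
```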
RESULTS: Based upon the coded ECG signals, both the storage requirement and the classification time were considerably reduced. In experimental studies conducted with the MIT-BIH arrhythmia database, ECG signals were compressed with an average percentage root-mean-square difference (PRD) of 0.70%, and an accuracy of over 99.0% was observed.
CONCLUSIONS: One of the significant contributions of this study is that the proposed approach can significantly reduce the processing time when LSTM networks are used for data analysis. Thus, a novel and effective approach is proposed for both ECG signal compression and high-performance automatic recognition of the compressed signals, at very low computational cost.
OBJECTIVE: This paper presents a rescue framework for the transfusion of the best convalescent plasma (CP) to the most critical patients with COVID-19 on the basis of biological requirements, using machine learning and novel multi-criteria decision-making (MCDM) methods.
METHOD: The proposed framework comprises two distinct and consecutive phases (i.e. testing and development). In the testing phase, ABO compatibility is assessed after classifying donors into the four blood types (A, B, AB and O) to indicate the suitability and safety of plasma for administration, thereby refining the tested CP list repository. The development phase includes a patient side and a donor side. On the patient side, prioritisation is performed using a contracted patient decision matrix constructed between the criteria (serological/protein biomarkers and the ratio of the partial pressure of oxygen in arterial blood to fractional inspired oxygen) and the patient list, using a novel MCDM method known as the subjective and objective decision by opinion score method. The patients with the most urgent need are then classified into the four blood types and matched, on the donor side, with the tested CP list from the testing phase. Thereafter, prioritisation of the tested CP list is performed using the contracted CP decision matrix.
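The opinion score method itself is not specified in the abstract; as a stand-in illustration of prioritising a decision matrix, the sketch below uses a generic weighted-sum (SAW) scoring. The criteria, weights, and scores are hypothetical.

```python
# Generic weighted-sum prioritisation over a patient decision matrix.
# Stand-in only: the paper's "subjective and objective decision by opinion
# score" method is not specified in the abstract, and all values below
# (criteria, weights, scores) are hypothetical.
import numpy as np

# Rows: patients; columns: criteria (e.g. serological/protein biomarkers,
# PaO2/FiO2 ratio), scaled so that higher = more urgent.
matrix = np.array([
    [0.9, 0.4, 0.7],   # patient 1
    [0.5, 0.8, 0.6],   # patient 2
    [0.7, 0.9, 0.9],   # patient 3
])
weights = np.array([0.5, 0.3, 0.2])      # hypothetical criterion weights

norm = matrix / matrix.max(axis=0)       # column-wise max normalisation
urgency = norm @ weights                 # weighted-sum urgency score
priority = np.argsort(-urgency)          # most urgent patient first
print(priority)                          # -> [2 0 1]
```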
RESULT: An intelligence-integrated concept is proposed to identify the most appropriate CP for the corresponding prioritised patients with COVID-19, helping doctors expedite treatment.
DISCUSSION: The proposed framework offers the benefits of providing effective care and protecting patients and the medical sector from the extremely rapid spread of COVID-19.
METHODS: This was a data review involving children aged
METHODS: Various databases were searched for relevant articles published since 1995. Included studies were cohort and cross-sectional studies in which all patients had dengue infection and the number of deaths or the case fatality rate was reported. The Joanna Briggs Institute appraisal checklist was used to evaluate the risk of bias of the full texts. The studies were grouped according to the dengue case classification adopted: WHO 1997 or WHO 2009. Meta-regression was performed using a logistic transformation (log-odds) of the case fatality rate. The outputs of the meta-regression were the adjusted case fatality rate and the odds ratios for the explanatory variables.
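A minimal sketch of this meta-regression step: each study's case fatality rate is logit-transformed, weighted by its approximate inverse variance, and regressed on an explanatory variable (here an indicator for the WHO 2009 classification). The three studies below are hypothetical, not data from the review.

```python
# Logistic-transform meta-regression of case fatality rates (sketch).
# Hypothetical study data; the real analysis pooled 77 studies.
import numpy as np
import statsmodels.api as sm

deaths = np.array([12, 5, 30])
cases = np.array([1000, 600, 1500])
who2009 = np.array([0, 0, 1])                  # 1 = WHO 2009 classification

p = deaths / cases
logit = np.log(p / (1 - p))                    # log-odds of case fatality
var = 1 / deaths + 1 / (cases - deaths)        # approx. variance of log-odds

X = sm.add_constant(who2009)
fit = sm.WLS(logit, X, weights=1 / var).fit()  # inverse-variance weighting

adj_or = np.exp(fit.params[1])                 # odds ratio for WHO 2009
cfr_1997 = 1 / (1 + np.exp(-fit.params[0]))    # back-transformed CFR, WHO 1997
print(adj_or, cfr_1997)
```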
RESULTS: A total of 77 studies were included in the meta-regression analysis. The case fatality rate for all studies combined was 1.14%, with a 95% confidence interval (CI) of 0.82-1.58%. The combined (unadjusted) case fatality rate for the 69 studies that adopted the WHO 1997 dengue case classification was 1.09% (95% CI 0.77-1.55%), and for the eight studies using WHO 2009 it was 1.62% (95% CI 0.64-4.02%). The unadjusted and adjusted odds ratios of case fatality using the WHO 2009 dengue case classification, compared with the WHO 1997 classification, were 1.49 (95% CI: 0.52, 4.24) and 0.83 (95% CI: 0.26, 2.63), respectively. There was an apparent increasing trend in the case fatality rate from 1992 to 2016. Neither odds ratio was statistically significant.
CONCLUSIONS: The WHO 2009 dengue case classification may have no effect on the case fatality rate, although the adjusted results indicated a lower case fatality rate. Future studies updating this meta-regression analysis are required to confirm the findings.
METHODS: We extracted sales volume data for 39 anti-cancer medicines from the IQVIA database. We divided the total quantity sold by the reference defined daily dose to estimate the total number of defined daily doses sold, per country per year, for three types of anti-cancer therapies (traditional chemotherapy, targeted therapy and endocrine therapy). We adjusted these data by the number of new cancer cases in each country for each year.
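As a worked example of this adjustment, the sketch below divides a hypothetical sales volume by a reference defined daily dose (DDD) and then by the number of new cancer cases; the figures are illustrative, not values from the IQVIA database.

```python
# Worked example of the adjusted sales metric: quantity sold -> DDDs sold
# -> DDDs per new cancer case. All figures are hypothetical.

mg_sold = 5_000_000        # total mg of a medicine sold in a country-year
reference_ddd_mg = 100     # WHO reference DDD for that medicine, in mg
new_cancer_cases = 20_000  # new cancer cases in that country-year

ddds_sold = mg_sold / reference_ddd_mg        # 50,000 DDDs sold
ddds_per_case = ddds_sold / new_cancer_cases  # 2.5 DDDs per new cancer case
print(ddds_sold, ddds_per_case)
```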
FINDINGS: We observed an increase in sales across all types of anti-cancer therapies in all countries. The largest number of defined daily doses of traditional chemotherapy per new cancer case was sold in Thailand; however, the largest relative increase per new cancer case occurred in Indonesia (9.48-fold). The largest absolute and relative increases in sales of defined daily doses of targeted therapies per new cancer case occurred in Kazakhstan. Malaysia sold the largest number of adjusted defined daily doses of endocrine therapies in 2017, while China and Indonesia more than doubled their adjusted sales volumes between 2007 and 2017.
CONCLUSION: Sales data can fill an important knowledge gap regarding the use of anti-cancer medicines, particularly during periods of insurance coverage expansion. Combined with other data, sales volume data can help to monitor efforts to improve equitable access to essential medicines.
METHODS: Patients enrolled in the TREAT Asia HIV Observational Database cohort and on cART for more than six months were analysed. Comorbidities included hypertension, diabetes, dyslipidaemia and impaired renal function. Treatment outcomes of patients ≥50 years of age with comorbidities were compared with those of patients <50 years and those ≥50 years without comorbidities. We analysed 5411 patients for virological failure and 5621 for immunological failure. Failure outcomes were defined in line with the World Health Organization 2016 guidelines. Cox regression analysis was used to analyse time to first virological and immunological failure.
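A minimal sketch of the time-to-failure analysis, using a Cox proportional hazards model from the lifelines library; the records and covariates below are hypothetical, not cohort data.

```python
# Minimal sketch (lifelines): Cox model of time to first virological failure.
# All records and covariates below are hypothetical, not cohort data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_to_failure": [14, 30, 8, 42, 25, 11, 36, 19],
    "failed":            [1, 0, 1, 0, 1, 1, 0, 1],   # 1 = virological failure
    "age50_comorbid":    [0, 1, 0, 1, 1, 0, 1, 1],   # >=50 years + comorbidity
    "baseline_cd4":      [180, 350, 90, 420, 260, 150, 300, 380],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_failure", event_col="failed")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```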
RESULTS: The incidence of virological failure was 7.72/100 person-years. Virological failure was less likely in patients with better adherence and higher CD4 counts at cART initiation. Those acquiring HIV through intravenous drug use were more likely to have virological failure than those infected through heterosexual contact. On univariate analysis, patients aged <50 years without comorbidities were more likely to experience virological failure than those aged ≥50 years with comorbidities (hazard ratio 1.75, 95% confidence interval (CI) 1.31 to 2.33, p
OBJECTIVES: To evaluate etiologic factors associated with spinal cord injury (SCI) severity and to identify predictive factors of reduction in SCI severity in six countries.
SETTING: SCI centers in Bangladesh, India, Malaysia, Nepal, Sri Lanka, and Thailand.
METHODS: Data from centers collected between October 2015 and February 2021 were analyzed using descriptive statistics and logistic regression.
RESULTS: Among 2634 individuals, the leading cause of SCI was falls (n = 1410, 54%), most of which occurred from a height of ≥1 meter (n = 1078). Most single-level neurological injuries occurred in the thoracic region (n = 977, 39%). More than half of SCIs (n = 1423, 54%) were graded American Spinal Injury Association Impairment Scale (AIS) A. Thoracic SCIs accounted for 53% (n = 757) of all single-level AIS A SCIs. The percentage of thoracic SCIs graded AIS A (78%) was significantly higher than that of high cervical (52%), low cervical (48%), lumbar (24%), and sacral (31%) SCIs (p
METHODS: Blips were defined as a detectable viral load (VL, ≥50 copies/mL) preceded and followed by an undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VL ≥50 copies/mL. Cox proportional hazards models of time to first VF after entry were developed.
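The sketch below applies these two definitions to a toy viral-load series; the thresholds follow the abstract, while the series and function name are illustrative.

```python
# Sketch of the blip and virological-failure (VF) definitions applied to a
# viral-load series (copies/mL), ordered by visit. Hypothetical data.

def find_blips_and_vf(vl, limit=50):
    """Blip: detectable VL preceded and followed by undetectable VL.
    VF: two consecutive VL >= limit. Returns (blip indices, first VF index)."""
    blips, first_vf = [], None
    for i in range(len(vl)):
        if vl[i] >= limit and i + 1 < len(vl) and vl[i + 1] >= limit:
            first_vf = i          # two consecutive detectable VLs -> VF
            break
        if 0 < i < len(vl) - 1 and vl[i] >= limit \
                and vl[i - 1] < limit and vl[i + 1] < limit:
            blips.append(i)       # isolated detectable VL -> blip
    return blips, first_vf

series = [40, 420, 40, 40, 900, 1200, 40]
print(find_blips_and_vf(series))  # ([1], 4): one blip, VF at visits 4-5
```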
RESULTS: 5040 patients (AHOD n = 2597, TAHOD n = 2521) were included; 910 (18%) experienced blips. Blips were ever experienced by 744 (21%) high-income and 166 (11%) middle/low-income participants. 711 (14%) experienced blips prior to virological failure: 559 (16%) high-income and 152 (10%) middle/low-income participants. VL testing occurred at a median interval of 175 and 91 days in middle/low- and high-income sites, respectively. Longer time to VF occurred in middle/low-income sites compared with high-income sites (adjusted hazard ratio (AHR) 0.41; p<0.001), adjusted for year of first cART, hepatitis C co-infection, cART regimen, and prior blips. Prior blips were not a significant predictor of VF in univariate analysis (AHR 0.97, p = 0.82). Differing magnitudes of blips were not significant predictors of virological failure in univariate analyses (p = 0.360 for blips of 50 to ≤1000 copies/mL, p = 0.309 for 50 to ≤400 copies/mL and p = 0.300 for 50 to ≤200 copies/mL). 209 of 866 (24%) patients were switched to an alternative regimen in the setting of a blip.
CONCLUSION: Despite a lower proportion of blips occurring in middle/low-income settings, no significant difference was found between settings. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of a blip.
METHOD: A set of three psychophysical conditions of hearing (critical-band spectral estimation, the equal-loudness hearing curve, and the intensity-loudness power law of hearing) is used to estimate the auditory spectrum. The auditory spectrum and all-pole models of the auditory spectrum are computed, analyzed, and used in a Gaussian mixture model for an automatic decision.
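The feature chain described is closely related to perceptual linear prediction and is not reproduced here; the sketch below shows only the final decision stage, assuming one Gaussian mixture model per class scored by average log-likelihood. The random features stand in for the auditory-spectrum/all-pole features.

```python
# Sketch of the GMM decision stage: one mixture per class (normal /
# pathological), classify by maximum average log-likelihood. Random features
# stand in for the auditory-spectrum features described in the abstract.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(200, 12))   # training features
path_feats = rng.normal(1.5, 1.2, size=(200, 12))

gmm_normal = GaussianMixture(n_components=4, random_state=0).fit(normal_feats)
gmm_path = GaussianMixture(n_components=4, random_state=0).fit(path_feats)

test = rng.normal(1.5, 1.2, size=(10, 12))            # frames of one recording
# Decision: higher average frame log-likelihood wins.
is_pathological = gmm_path.score(test) > gmm_normal.score(test)
print("pathological" if is_pathological else "normal")
```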
RESULTS: In experiments using the Massachusetts Eye & Ear Infirmary database, an accuracy (ACC) of 99.56% is obtained for pathology detection, and an ACC of 93.33% is obtained for the pathology classification system. The proposed systems outperform existing running-speech-based systems.
DISCUSSION: The developed system can effectively be used in voice pathology detection and classification systems, and the proposed features can visually differentiate between normal and pathological samples.
METHODS: HIV-positive patients from the Australian HIV Observational Database (AHOD) and the TREAT Asia HIV Observational Database (TAHOD) meeting specific criteria were included. In these analyses, Asian and Caucasian status was defined by cohort. Factors associated with a low CD4:CD8 ratio (cutoff <0.2) prior to ART commencement, and with achieving a normal CD4:CD8 ratio (>1) at 12 and 24 months after ART commencement, were assessed using logistic regression.
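A minimal sketch of the second model (odds of reaching a normal ratio at 12 months), using the statsmodels formula interface on simulated records; the covariates and data are hypothetical, not AHOD/TAHOD records.

```python
# Sketch of the logistic model: odds of a normal CD4:CD8 ratio (>1) at
# 12 months by cohort. Simulated, hypothetical records only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "asian_cohort": rng.integers(0, 2, 300),   # TAHOD = 1, AHOD = 0 (assumed)
    "age": rng.integers(20, 70, 300),
    "ratio_12m": rng.uniform(0.1, 1.6, 300),   # CD4:CD8 ratio at 12 months
})
df["normal_ratio"] = (df["ratio_12m"] > 1).astype(int)   # binary outcome

fit = smf.logit("normal_ratio ~ asian_cohort + age", data=df).fit()
print(fit.summary())          # odds ratios: np.exp(fit.params)
```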
RESULTS: There were 591 patients from AHOD and 2,620 patients from TAHOD who met the inclusion criteria. TAHOD patients had significantly (P<0.001) lower odds of having a baseline (prior to ART initiation) CD4:CD8 ratio greater than 0.2. After 12 months of ART, AHOD patients were more than twice as likely as TAHOD patients to achieve a normal CD4:CD8 ratio (15% versus 6%). However, after adjustment for confounding factors, there was no significant difference between the cohorts in the odds of achieving a CD4:CD8 ratio >1 (P=0.475).
CONCLUSIONS: We found a significantly lower CD4:CD8 ratio prior to commencing ART in TAHOD than in AHOD, even after adjusting for confounders. However, after adjustment, there was no significant difference between the cohorts in the odds of achieving a normal ratio. Baseline CD4+ and CD8+ counts appear to be the main driver of this difference between the two populations.
METHODS: This was a retrospective, non-interventional cohort study using data from a Japanese medical claims database. Patients with glaucoma aged ≥20 years, with a first drug claim for glaucoma treatment between 01 July 2005 and 30 October 2014 and with data for >6 months before and after this first prescription, were included. The primary endpoint was the duration of drug persistence among glaucoma patients with and without the use of fixed combination drugs in the year following initiation of second-line combination treatment.
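A minimal sketch of the persistence endpoint: median time on treatment estimated by Kaplan-Meier per group, using the lifelines library on toy records; the grouping column and durations are hypothetical.

```python
# Sketch of the persistence analysis: Kaplan-Meier median time on treatment,
# by fixed vs unfixed combination group. Hypothetical records only.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "months_persistent": [6, 12, 3, 9, 12, 5, 8, 2, 12, 7],
    "discontinued":      [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],  # 0 = still on drug
    "fixed_combo":       [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
})

for group, sub in df.groupby("fixed_combo"):
    kmf = KaplanMeierFitter()
    kmf.fit(sub["months_persistent"], event_observed=sub["discontinued"])
    print("fixed" if group else "unfixed", kmf.median_survival_time_)
```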
RESULTS: Of the 1403 patients included in the analysis, 364 (25.94%) received fixed combination drugs and 1039 (74.06%) received unfixed combination drugs as second-line treatment. Baseline characteristics were generally comparable between the groups. A total of 39.01% of patients on fixed combination drugs, compared with 41.67% of patients on unfixed combination drugs, persisted on their glaucoma drugs 12 months after the second index date. Median persistence durations for the fixed and unfixed combination drug groups were 6 months (95% confidence interval [CI]: 5-8) and 7 months (95% CI 6-9), respectively. Patients who received prostaglandin analogs (PGAs) were the most persistent with their treatment (n = 99, 12.84%). Patients diagnosed with primary open-angle glaucoma were less likely to experience treatment modification (hazard ratio [HR]: 0.800, 95% CI 0.649-0.986, P = 0.036), while those diagnosed with secondary glaucoma were more likely to experience treatment modification (HR: 1.678, 95% CI 1.231-2.288, P = 0.001), compared with glaucoma suspects.
CONCLUSIONS: In this retrospective claims database study, the persistence rate of second-line glaucoma combination treatment was low, with no difference in persistence between patients receiving unfixed combination drugs and those receiving fixed combination drugs. Patients on PGAs showed greater persistence than those on other treatments.