METHODS AND FINDINGS: We used data on suicides by gases other than domestic gas for Hong Kong, Japan, the Republic of Korea, Taiwan, and Singapore in the years 1995/1996-2011. Similar data for Malaysia, the Philippines, and Thailand were also extracted but were incomplete. Graphical and joinpoint regression analyses were used to examine time trends in suicide, and negative binomial regression analysis to study sex- and age-specific patterns. In 1995/1996, charcoal-burning suicides accounted for <1% of all suicides in all study countries, except in Japan (5%), but they increased to account for 13%, 24%, 10%, 7%, and 5% of all suicides in Hong Kong, Taiwan, Japan, the Republic of Korea, and Singapore, respectively, in 2011. Rises were first seen in Hong Kong after 1998 (95% CI 1997-1999), followed by Singapore in 1999 (95% CI 1998-2001), Taiwan in 2000 (95% CI 1999-2001), Japan in 2002 (95% CI 1999-2003), and the Republic of Korea in 2007 (95% CI 2006-2008). No marked increases were seen in Malaysia, the Philippines, or Thailand. There was some evidence that charcoal-burning suicides were associated with an increase in overall suicide rates in Hong Kong, Taiwan, and Japan (for females), but not in Japan (for males), the Republic of Korea, and Singapore. Rates of change in the charcoal-burning suicide rate did not differ by sex/age group in Taiwan and Hong Kong but appeared to be greatest in people aged 15-24 y in Japan and people aged 25-64 y in the Republic of Korea. The lack of specific codes for charcoal-burning suicide in the International Classification of Diseases and variations in coding practice in different countries are potential limitations of this study.
CONCLUSIONS: Charcoal-burning suicides increased markedly in some East/Southeast Asian countries (Hong Kong, Taiwan, Japan, the Republic of Korea, and Singapore) in the first decade of the 21st century, but such rises were not experienced by all countries in the region. In countries with a rise in charcoal-burning suicide rates, the timing, scale, and sex/age pattern of increases varied by country. Factors underlying these variations require further investigation, but may include differences in culture or in media portrayals of the method. Please see later in the article for the Editors' Summary.
METHODS AND FINDINGS: Our approach is based on a parsimonious mathematical model of disease transmission and only requires data collected through routine surveillance and standard case investigations. We apply it to assess the transmissibility of swine-origin influenza A H3N2v-M virus in the US, Nipah virus in Malaysia and Bangladesh, and also present a non-zoonotic example (cholera in the Dominican Republic). Estimation is based on two simple summary statistics, the proportion infected by the natural reservoir among detected cases (G) and among the subset of the first detected cases in each cluster (F). If detection of a case does not affect detection of other cases from the same cluster, we find that R can be estimated by 1-G; otherwise R can be estimated by 1-F when the case detection rate is low. In more general cases, bounds on R can still be derived.
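The two estimators above can be written out directly. The following sketch is illustrative only (not the authors' code), with invented counts; it covers only the two point estimators, not the bounds derived for the more general case:

```python
# Illustrative sketch (not the authors' code) of the two estimators:
# G = proportion of detected cases infected by the natural reservoir;
# F = the same proportion among the first detected case of each cluster.
# Under the detection assumptions stated above, R ≈ 1 - G (or 1 - F).

def estimate_r(reservoir_cases, detected_cases, reservoir_index_cases, clusters):
    """Return (R via 1 - G, R via 1 - F) point estimates."""
    g = reservoir_cases / detected_cases       # G: all detected cases
    f = reservoir_index_cases / clusters       # F: cluster index cases only
    return 1 - g, 1 - f

# Invented counts: 180 of 200 detected cases, and 90 of 100 cluster
# index cases, were infected by the reservoir.
r_from_g, r_from_f = estimate_r(180, 200, 90, 100)
print(round(r_from_g, 2), round(r_from_f, 2))  # 0.1 0.1
```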
CONCLUSIONS: We have developed a simple approach with limited data requirements that enables robust assessment of the risks posed by emerging zoonoses. We illustrate this by deriving transmissibility estimates for the H3N2v-M virus, an important step in evaluating the possible pandemic threat posed by this virus.
METHODS AND FINDINGS: The association of metabolically defined body size phenotypes with colorectal cancer was investigated in a case-control study nested within the European Prospective Investigation into Cancer and Nutrition (EPIC) study. Metabolic health/body size phenotypes were defined according to hyperinsulinaemia status using serum concentrations of C-peptide, a marker of insulin secretion. A total of 737 incident colorectal cancer cases and 737 matched controls were divided into tertiles based on the distribution of C-peptide concentration amongst the control population, and participants were classified as metabolically healthy if below the first tertile of C-peptide and metabolically unhealthy if above the first tertile. These metabolic health definitions were then combined with body mass index (BMI) measurements to create four metabolic health/body size phenotype categories: (1) metabolically healthy/normal weight (BMI < 25 kg/m2), (2) metabolically healthy/overweight (BMI ≥ 25 kg/m2), (3) metabolically unhealthy/normal weight (BMI < 25 kg/m2), and (4) metabolically unhealthy/overweight (BMI ≥ 25 kg/m2). Additionally, in separate models, waist circumference measurements (using the International Diabetes Federation cut-points [≥80 cm for women and ≥94 cm for men]) were used (instead of BMI) to create the four metabolic health/body size phenotype categories. Statistical tests used in the analysis were all two-sided, and a p-value of <0.05 was considered statistically significant. In multivariable-adjusted conditional logistic regression models with BMI used to define adiposity, compared with metabolically healthy/normal weight individuals, we observed a higher colorectal cancer risk among metabolically unhealthy/normal weight (odds ratio [OR] = 1.59, 95% CI 1.10-2.28) and metabolically unhealthy/overweight (OR = 1.40, 95% CI 1.01-1.94) participants, but not among metabolically healthy/overweight individuals (OR = 0.96, 95% CI 0.65-1.42). 
Among the overweight individuals, lower colorectal cancer risk was observed for metabolically healthy/overweight individuals compared with metabolically unhealthy/overweight individuals (OR = 0.69, 95% CI 0.49-0.96). These associations were generally consistent when waist circumference was used as the measure of adiposity. To our knowledge, there is no universally accepted clinical definition for using C-peptide level as an indication of hyperinsulinaemia. Therefore, a possible limitation of our analysis was that the classification of individuals as being hyperinsulinaemic (based on their C-peptide level) was arbitrary. However, when we used quartiles or the median of C-peptide, instead of tertiles, as the cut-point of hyperinsulinaemia, a similar pattern of associations was observed.
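For concreteness, the four-category phenotype assignment described above can be sketched as follows; the C-peptide cut-point value in the example is hypothetical (the study derived tertile cut-points from its control distribution):

```python
# Sketch of the metabolic health/body size phenotype classification.
# "Metabolically healthy" means C-peptide below the first tertile of
# the control distribution; adiposity uses the BMI >= 25 kg/m2 cut-point.

def metabolic_phenotype(c_peptide, bmi, tertile1_cut):
    healthy = c_peptide < tertile1_cut
    overweight = bmi >= 25.0
    health = "metabolically healthy" if healthy else "metabolically unhealthy"
    size = "overweight" if overweight else "normal weight"
    return f"{health}/{size}"

# Hypothetical tertile cut-point of 2.5 ng/mL:
print(metabolic_phenotype(1.8, 23.0, tertile1_cut=2.5))
# metabolically healthy/normal weight
```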
CONCLUSIONS: These results support the idea that individuals with the metabolically healthy/overweight phenotype (with normal insulin levels) are at lower colorectal cancer risk than those with hyperinsulinaemia. The combination of anthropometric measures with metabolic parameters, such as C-peptide, may be useful for defining strata of the population at greater risk of colorectal cancer.
METHODS AND FINDINGS: We reviewed all GenBank submissions of HIV-1 reverse transcriptase sequences with or without protease and identified 287 studies published between March 1, 2000, and December 31, 2013, with more than 25 recently or chronically infected ARV-naïve individuals. These studies comprised 50,870 individuals from 111 countries. Each set of study sequences was analyzed for phylogenetic clustering and the presence of 93 surveillance drug-resistance mutations (SDRMs). The median overall TDR prevalence in sub-Saharan Africa (SSA), south/southeast Asia (SSEA), upper-income Asian countries, Latin America/Caribbean, Europe, and North America was 2.8%, 2.9%, 5.6%, 7.6%, 9.4%, and 11.5%, respectively. In SSA, there was a yearly 1.09-fold (95% CI: 1.05-1.14) increase in odds of TDR since national ARV scale-up attributable to an increase in non-nucleoside reverse transcriptase inhibitor (NNRTI) resistance. The odds of NNRTI-associated TDR also increased in Latin America/Caribbean (odds ratio [OR] = 1.16; 95% CI: 1.06-1.25), North America (OR = 1.19; 95% CI: 1.12-1.26), Europe (OR = 1.07; 95% CI: 1.01-1.13), and upper-income Asian countries (OR = 1.33; 95% CI: 1.12-1.55). In SSEA, there was no significant change in the odds of TDR since national ARV scale-up (OR = 0.97; 95% CI: 0.92-1.02). An analysis limited to sequences with mixtures at less than 0.5% of their nucleotide positions—a proxy for recent infection—yielded trends comparable to those obtained using the complete dataset. Four NNRTI SDRMs—K101E, K103N, Y181C, and G190A—accounted for >80% of NNRTI-associated TDR in all regions and subtypes. Sixteen nucleoside reverse transcriptase inhibitor (NRTI) SDRMs accounted for >69% of NRTI-associated TDR in all regions and subtypes. In SSA and SSEA, 89% of NNRTI SDRMs were associated with high-level resistance to nevirapine or efavirenz, whereas only 27% of NRTI SDRMs were associated with high-level resistance to zidovudine, lamivudine, tenofovir, or abacavir. 
Of 763 viruses with TDR in SSA and SSEA, 725 (95%) were genetically dissimilar; 38 (5%) formed 19 sequence pairs. Inherent limitations of this study are that some cohorts may not represent the broader regional population and that studies were heterogeneous with respect to duration of infection prior to sampling.
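On interpreting the yearly odds ratios reported above: a per-year OR compounds multiplicatively over time, so a 1.09-fold yearly increase implies roughly a 1.5-fold rise in the odds of TDR after five years of scale-up. A one-line sketch:

```python
# A yearly odds ratio compounds multiplicatively over n years.
def cumulative_odds_ratio(yearly_or, years):
    return yearly_or ** years

print(round(cumulative_odds_ratio(1.09, 5), 2))  # 1.54
```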
CONCLUSIONS: Most TDR strains in SSA and SSEA arose independently, suggesting that ARV regimens with a high genetic barrier to resistance combined with improved patient adherence may mitigate TDR increases by reducing the generation of new ARV-resistant strains. A small number of NNRTI-resistance mutations were responsible for most cases of high-level resistance, suggesting that inexpensive point-mutation assays to detect these mutations may be useful for pre-therapy screening in regions with high levels of TDR. In the context of a public health approach to ARV therapy, a reliable point-of-care genotypic resistance test could identify which patients should receive standard first-line therapy and which should receive a protease-inhibitor-containing regimen.
METHODS AND FINDINGS: This prospective analysis included 471,495 adults from the European Prospective Investigation into Cancer and Nutrition (EPIC, 1992-2014, median follow-up: 15.3 y), among whom there were 49,794 incident cancer cases (main locations: breast, n = 12,063; prostate, n = 6,745; colon-rectum, n = 5,806). Usual food intakes were assessed with standardized country-specific diet assessment methods. The FSAm-NPS was calculated for each food/beverage using its 100-g content in energy, sugar, saturated fatty acid, sodium, fibres, proteins, and fruits/vegetables/legumes/nuts. The FSAm-NPS scores of all food items usually consumed by a participant were averaged to obtain the individual FSAm-NPS Dietary Index (DI) scores. Multi-adjusted Cox proportional hazards models were computed. A higher FSAm-NPS DI score, reflecting a lower nutritional quality of the food consumed, was associated with a higher risk of total cancer (HR for quintile 5 versus quintile 1 = 1.07; 95% CI 1.03-1.10, P-trend < 0.001). Absolute cancer rates in those with high and low (quintiles 5 and 1) FSAm-NPS DI scores were 81.4 and 69.5 cases/10,000 person-years, respectively. Higher FSAm-NPS DI scores were specifically associated with higher risks of cancers of the colon-rectum, upper aerodigestive tract and stomach, lung for men, and liver and postmenopausal breast for women (all P < 0.05). The main study limitation is that it was based on an observational cohort using self-reported dietary data obtained through a single baseline food frequency questionnaire; thus, exposure misclassification and residual confounding cannot be ruled out.
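A minimal sketch of the dietary-index step described above. The abstract states only that per-food FSAm-NPS scores were averaged; the consumption-weighted averaging and all numbers below are assumptions for illustration:

```python
# Hypothetical computation of an individual FSAm-NPS Dietary Index (DI):
# a weighted average of per-food FSAm-NPS scores, weighted here by the
# amount of each food usually consumed (an assumed weighting scheme).

def fsam_nps_di(scores, amounts):
    """scores: FSAm-NPS score per food; amounts: usual consumption weights."""
    total = sum(amounts)
    return sum(s * a for s, a in zip(scores, amounts)) / total

# Invented intake: three foods scoring -2, 5, and 14 points.
di = fsam_nps_di([-2, 5, 14], [300, 150, 50])
print(round(di, 2))  # 1.7
```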
CONCLUSIONS: In this large multinational European cohort, the consumption of food products with a higher FSAm-NPS score (lower nutritional quality) was associated with a higher risk of cancer. This supports the relevance of the FSAm-NPS as the underlying nutrient profiling system for front-of-pack nutrition labels, as well as for other public health nutritional measures.
METHODS AND FINDINGS: The web-based Joint Asia Diabetes Evaluation (JADE) platform provides a protocol to guide data collection for issuing a personalized JADE report including risk categories (1-4, low-high), 5-year probabilities of cardiovascular-renal events, and trends and targets of 4 risk factors with tailored decision support. The JADE program is a prospective cohort study implemented in a naturalistic environment in which patients underwent nurse-led structured evaluation (blood/urine/eye/feet) in public and private outpatient clinics and diabetes centers in Hong Kong. We retrospectively analyzed the data of 16,624 Han Chinese patients with type 2 diabetes who were enrolled in 2007-2015. In the public setting, the non-JADE group (n = 3,587) underwent structured evaluation for risk factors and complications only, while the JADE group (n = 9,601) received a JADE report with group empowerment by nurses. In a community-based, nurse-led, university-affiliated diabetes center (UDC), the JADE-Personalized (JADE-P) group (n = 3,436) received a JADE report, personalized empowerment, and an annual telephone reminder for reevaluation and engagement. The primary composite outcome was time to the first occurrence of cardiovascular-renal diseases, all-site cancer, and/or death, based on hospitalization data censored on 30 June 2017. During 94,311 person-years of follow-up in 2007-2017, 7,779 primary events occurred. Compared with the JADE group (136.22 cases per 1,000 patient-years [95% CI 132.35-140.18]), the non-JADE group had a higher event rate (145.32 [95% CI 138.68-152.20]; P = 0.020), while the JADE-P group had a lower event rate (70.94 [95% CI 67.12-74.91]; P < 0.001). The adjusted hazard ratios (aHRs) for the primary composite outcome were 1.22 (95% CI 1.15-1.30) and 0.70 (95% CI 0.66-0.75), respectively, independent of risk profiles, education levels, drug usage, self-care, and comorbidities at baseline.
We reported consistent results in propensity-score-matched analyses and after accounting for loss to follow-up. Potential limitations include its nonrandomized design that precludes causal inference, residual confounding, and participation bias.
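The event rates quoted above follow the usual events-per-person-time calculation. As a sketch, using the abstract's overall totals (7,779 events over 94,311 person-years):

```python
# Crude event rate per 1,000 patient-years from event and person-time counts.
def event_rate_per_1000(events, person_years):
    return 1000 * events / person_years

# Overall crude rate implied by the abstract's totals:
overall_rate = event_rate_per_1000(7779, 94311)
print(round(overall_rate, 2))  # 82.48
```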
CONCLUSIONS: ICT-assisted integrated care was associated with a reduction in clinical events, including death, among patients with type 2 diabetes in public and private healthcare settings.
METHODS AND FINDINGS: We conducted a retrospective cohort study of trauma patients transported from the scene to hospitals by emergency medical service (EMS) from January 1, 2016, to November 30, 2018, using data from the Pan-Asia Trauma Outcomes Study (PATOS) database. Prehospital time intervals were categorized into response time (RT), scene to hospital time (SH), and total prehospital time (TPT). The outcomes were 30-day mortality and functional status at hospital discharge. Multivariable logistic regression was used to investigate the association between prehospital time and outcomes, adjusting for factors including age, sex, mechanism and type of injury, Injury Severity Score (ISS), Revised Trauma Score (RTS), and prehospital interventions. Overall, 24,365 patients from 4 countries (645 patients from Japan, 16,476 patients from Korea, 5,358 patients from Malaysia, and 1,886 patients from Taiwan) were included in the analysis. Among included patients, the median age was 45 years (lower quartile [Q1]-upper quartile [Q3]: 25-62), and 15,498 (63.6%) patients were male. Median RT, SH, and TPT were 20 (Q1-Q3: 12-39), 21 (Q1-Q3: 16-29), and 47 (Q1-Q3: 32-60) minutes, respectively. In all, 280 patients (1.1%) died within 30 days after injury. Prehospital time intervals were not associated with 30-day mortality. The adjusted odds ratios (aORs) per 10 minutes of RT, SH, and TPT were 0.99 (95% CI 0.92-1.06, p = 0.740), 1.08 (95% CI 1.00-1.17, p = 0.065), and 1.03 (95% CI 0.98-1.09, p = 0.236), respectively. However, long prehospital time was detrimental to functional survival. The aORs of RT, SH, and TPT per 10-minute delay were 1.06 (95% CI 1.04-1.08, p < 0.001), 1.05 (95% CI 1.01-1.08, p = 0.007), and 1.06 (95% CI 1.04-1.08, p < 0.001), respectively. The key limitation of our study is the missing data inherent to the retrospective design.
Another major limitation is the aggregate nature of the data from different countries and the presence of unmeasured confounders such as in-hospital management.
CONCLUSIONS: Longer prehospital time was not associated with an increased risk of 30-day mortality, but it may be associated with increased risk of poor functional outcomes in injured patients. This finding supports the concept of the "golden hour" for trauma patients during prehospital care in the countries studied.
METHODS AND FINDINGS: We searched the major electronic databases Medline, Embase, and Google Scholar (January 1990-October 2018) without language restrictions. We included cohort studies on term pregnancies that provided estimates of stillbirths or neonatal deaths by gestation week. We estimated the additional weekly risk of stillbirth in term pregnancies that continued beyond a given gestational age versus those delivered at that age. We compared week-specific neonatal mortality rates by gestational age at delivery. We used mixed-effects logistic regression models with random intercepts, and computed risk ratios (RRs), odds ratios (ORs), and 95% confidence intervals (CIs). Thirteen studies (15 million pregnancies, 17,830 stillbirths) were included. All studies were from high-income countries. Four studies provided the risks of stillbirth in mothers of White and Black race, 2 in mothers of White and Asian race, 5 in mothers of White race only, and 2 in mothers of Black race only. The prospective risk of stillbirth increased with gestational age from 0.11 per 1,000 pregnancies at 37 weeks (95% CI 0.07 to 0.15) to 3.18 per 1,000 at 42 weeks (95% CI 1.84 to 4.35). Neonatal mortality increased when pregnancies continued beyond 41 weeks; the risk increased significantly for deliveries at 42 versus 41 weeks' gestation (RR 1.87, 95% CI 1.07 to 2.86, p = 0.012). One additional stillbirth occurred for every 1,449 (95% CI 1,237 to 1,747) pregnancies that advanced from 40 to 41 weeks. Limitations include variations in the definition of low-risk pregnancy, the wide time span of the studies, the use of registry-based data, and potential confounders affecting the outcome.
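The "1 additional stillbirth per 1,449 pregnancies" figure is the reciprocal of an absolute risk difference. A sketch with an illustrative risk difference (not the study's exact inputs):

```python
# Pregnancies per one additional stillbirth = 1 / absolute risk difference,
# where the risks are per-pregnancy stillbirth probabilities under the two
# strategies being compared (continue vs. deliver). Inputs are illustrative.

def pregnancies_per_additional_stillbirth(risk_continue, risk_deliver):
    return 1 / (risk_continue - risk_deliver)

# An absolute risk difference of ~0.69 per 1,000 pregnancies:
n = pregnancies_per_additional_stillbirth(0.00069, 0.0)
print(round(n))  # 1449
```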
CONCLUSIONS: Our findings suggest there is a significant additional risk of stillbirth, with no corresponding reduction in neonatal mortality, when term pregnancies continue to 41 weeks compared to delivery at 40 weeks.
SYSTEMATIC REVIEW REGISTRATION: PROSPERO CRD42015013785.
METHODS AND FINDINGS: Utilising the Asian Sudden Cardiac Death in Heart Failure (ASIAN-HF) registry (11 Asian regions including Taiwan, Hong Kong, China, India, Malaysia, Thailand, Singapore, Indonesia, Philippines, Japan, and Korea; 46 centres with enrolment between 1 October 2012 and 6 October 2016), we prospectively examined 5,964 patients with symptomatic HF (mean age 61.3 ± 13.3 years, 26% women, mean BMI 25.3 ± 5.3 kg/m2, 16% with HF with preserved ejection fraction [HFpEF; ejection fraction ≥ 50%]), among whom 2,051 also had waist-to-height ratio (WHtR) measurements (mean age 60.8 ± 12.9 years, 24% women, mean BMI 25.0 ± 5.2 kg/m2, 7% HFpEF). Patients were categorised by BMI quartiles, WHtR quartiles, or 4 combined groups of BMI (low, <24.5 kg/m2 [lean], or high, ≥24.5 kg/m2 [obese]) and WHtR (low, <0.55 [thin], or high, ≥0.55 [fat]). Cox proportional hazards models were used to examine a 1-year composite outcome (HF hospitalisation or mortality). Across BMI quartiles, higher BMI was associated with lower risk of the composite outcome (p-trend < 0.001). Contrastingly, higher WHtR was associated with higher risk of the composite outcome. Individuals in the lean-fat group, with low BMI and high WHtR (13.9%), were more likely to be women (35.4%) and to be from low-income countries (47.7%) (predominantly in South/Southeast Asia), and had higher prevalence of diabetes (46%), worse quality of life scores (63.3 ± 24.2), and a higher rate of the composite outcome (51/232; 22%), compared to the other groups (p < 0.05 for all). Following multivariable adjustment, the lean-fat group had higher adjusted risk of the composite outcome (hazard ratio 1.93, 95% CI 1.17-3.18, p = 0.01), compared to the obese-thin group, with high BMI and low WHtR. Results were consistent across both HF subtypes (HFpEF and HF with reduced ejection fraction [HFrEF]; p-interaction = 0.355).
Selection bias and residual confounding are potential limitations of such multinational observational registries.
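The four combined BMI/WHtR groups above can be sketched directly from the stated cut-points:

```python
# Combined BMI/WHtR groups from the ASIAN-HF analysis, using the
# abstract's cut-points: BMI 24.5 kg/m2 (lean/obese) and WHtR 0.55 (thin/fat).

def body_composition_group(bmi, whtr):
    lean = bmi < 24.5
    thin = whtr < 0.55
    if lean:
        return "lean-thin" if thin else "lean-fat"
    return "obese-thin" if thin else "obese-fat"

print(body_composition_group(22.0, 0.60))  # lean-fat (worst-outcome group)
```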
CONCLUSIONS: In this cohort of Asian patients with HF, the 'obesity paradox' is observed only when defined using BMI, with WHtR showing the opposite association with the composite outcome. Lean-fat patients, with high WHtR and low BMI, have the worst outcomes. A direct correlation between high WHtR and the composite outcome is apparent in both HFpEF and HFrEF.
TRIAL REGISTRATION: Asian Sudden Cardiac Death in HF (ASIAN-HF) Registry ClinicalTrials.gov Identifier: NCT01633398.
METHODS AND FINDINGS: Genetic instruments to proxy 12 risk factors were constructed by identifying single nucleotide polymorphisms (SNPs) that were robustly (P < 5 × 10⁻⁸) and independently associated with each respective risk factor in previously reported genome-wide association studies. These risk factors included genetic liability to 3 factors (endometriosis, polycystic ovary syndrome, type 2 diabetes) scaled to reflect a 50% higher odds liability to disease. We obtained summary statistics for the association of these SNPs with risk of overall and histotype-specific invasive epithelial ovarian cancer (22,406 cases; 40,941 controls) and low malignant potential tumours (3,103 cases; 40,941 controls) from the Ovarian Cancer Association Consortium (OCAC). The OCAC dataset comprises 63 genotyping project/case-control sets with participants of European ancestry recruited from 14 countries (US, Australia, Belarus, Germany, Belgium, Denmark, Finland, Norway, Canada, Poland, UK, Spain, Netherlands, and Sweden). SNPs were combined into multi-allelic inverse-variance-weighted fixed or random effects models to generate effect estimates and 95% confidence intervals (CIs). Three complementary sensitivity analyses were performed to examine violations of MR assumptions: MR-Egger regression and weighted median and mode estimators. A Bonferroni-corrected P value threshold was used to establish strong evidence (P < 0.0042) and suggestive evidence (0.0042 < P < 0.05) for associations. In MR analyses, there was strong or suggestive evidence that 2 of the 12 risk factors were associated with invasive epithelial ovarian cancer and 8 of the 12 were associated with 1 or more invasive epithelial ovarian cancer histotypes.
There was strong evidence that genetic liability to endometriosis was associated with an increased risk of invasive epithelial ovarian cancer (odds ratio [OR] per 50% higher odds liability: 1.10, 95% CI 1.06-1.15; P = 6.94 × 10⁻⁷) and suggestive evidence that lifetime smoking exposure was associated with an increased risk of invasive epithelial ovarian cancer (OR per unit increase in smoking score: 1.36, 95% CI 1.04-1.78; P = 0.02). In analyses examining histotypes and low malignant potential tumours, the strongest associations found were between height and clear cell carcinoma (OR per SD increase: 1.36, 95% CI 1.15-1.61; P = 0.0003); age at natural menopause and endometrioid carcinoma (OR per year later onset: 1.09, 95% CI 1.02-1.16; P = 0.007); and genetic liability to polycystic ovary syndrome and endometrioid carcinoma (OR per 50% higher odds liability: 0.89, 95% CI 0.82-0.96; P = 0.002). There was little evidence for an association of genetic liability to type 2 diabetes, parity, or circulating levels of 25-hydroxyvitamin D and sex hormone binding globulin with ovarian cancer or its subtypes. The primary limitations of this analysis include the modest statistical power for analyses of risk factors in relation to some less common ovarian cancer histotypes (low grade serous, mucinous, and clear cell carcinomas), the inability to directly examine the association of some ovarian cancer risk factors that did not have robust genetic variants available to serve as proxies (e.g., oral contraceptive use, hormone replacement therapy), and the assumption of linear relationships between risk factors and ovarian cancer risk.
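As background on the fixed-effect inverse-variance-weighted (IVW) models used above: each SNP contributes a Wald ratio (outcome effect divided by exposure effect), and the ratios are combined with inverse-variance weights. A minimal sketch with invented summary statistics (not the study's data):

```python
# Fixed-effect IVW estimate from per-SNP summary statistics.
# For SNP j: ratio_j = beta_outcome_j / beta_exposure_j, with weight
# w_j = (beta_exposure_j / se_outcome_j)^2 (first-order approximation).

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    ratios = [bo / be for bo, be in zip(beta_outcome, beta_exposure)]
    weights = [(be / se) ** 2 for be, se in zip(beta_exposure, se_outcome)]
    return sum(r * w for r, w in zip(ratios, weights)) / sum(weights)

# Invented two-SNP example:
est = ivw_estimate([0.1, 0.2], [0.05, 0.12], [0.01, 0.02])
print(round(est, 3))  # 0.55
```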
CONCLUSIONS: Our comprehensive examination of possible aetiological drivers of ovarian carcinogenesis using germline genetic variants to proxy risk factors supports a role for few of these factors in invasive epithelial ovarian cancer overall and suggests distinct aetiologies across histotypes. The identification of novel risk factors remains an important priority for the prevention of epithelial ovarian cancer.
METHODS AND FINDINGS: We examined cross-sectional differences in MD by age and menopausal status in over 11,000 breast-cancer-free women aged 35-85 years, from 40 ethnicity- and location-specific population groups across 22 countries in the International Consortium on Mammographic Density (ICMD). MD was read centrally using a quantitative method (Cumulus), and its square-root metrics were analysed using meta-analysis of group-level estimates and linear regression models of pooled data, adjusted for body mass index, reproductive factors, mammogram view, image type, and reader. In all, 4,534 women were premenopausal, and 6,481 postmenopausal, at the time of mammography. A large age-adjusted difference in the square root of percent MD (√PD) between post- and premenopausal women was apparent (-0.46 cm [95% CI: -0.53, -0.39]) and appeared greater in women with lower breast cancer risk profiles; variation across population groups due to heterogeneity (I²) was 16.5%. Among premenopausal women, the √PD difference per 10-year increase in age was -0.24 cm (95% CI: -0.34, -0.14; I² = 30%), reflecting a compositional change (lower dense area and higher non-dense area, with no difference in breast area). In postmenopausal women, the corresponding difference in √PD (-0.38 cm [95% CI: -0.44, -0.33]; I² = 30%) was additionally driven by increasing breast area. The study is limited by different mammography systems and its cross-sectional rather than longitudinal nature.
CONCLUSIONS: Declines in MD with increasing age are present premenopausally, continue postmenopausally, and are most pronounced over the menopausal transition. These effects were highly consistent across diverse groups of women worldwide, suggesting that they result from an intrinsic biological, likely hormonal, mechanism common to women. If cumulative breast density is a key determinant of breast cancer risk, younger ages may be the more critical periods for lifestyle modifications aimed at breast density and breast cancer risk reduction.
METHODS AND FINDINGS: This mixed-methods (qualitative and quantitative) study was conducted from 2018 to 2019 using a self-administered questionnaire among Muslim medical doctors from 2 main medical associations with a large number of Muslim members from all over Malaysia who attended their annual conference. For doctors who did not attend the conference, the questionnaire was posted to them. Association A had 510 members, of whom 64 were male Muslim doctors and 333 were female Muslim doctors. Association B had only Muslim doctors; 3,088 were female, and 1,323 were male. In total, 894 questionnaires were distributed either by hand or by post, and 366 completed questionnaires were returned. For the qualitative part of the study, a snowball sampling method was used, and 24 in-depth interviews were conducted using a semi-structured questionnaire, until the data reached saturation. Quantitative data were analysed using SPSS version 18 (IBM, Armonk, NY). A chi-squared test and binary logistic regression were performed. The qualitative data were transcribed manually, organized, coded, and recoded using NVivo version 12. The clustered codes were elicited as common themes. Most of the respondents were women, had medical degrees from Malaysia, and had a postgraduate degree in Family Medicine. The median age was 42 years. Most were working with the Ministry of Health (MoH) Malaysia, in clinics located in urban areas. The prevalence of Muslim doctors practising FGC was 20.5% (95% CI 16.6-24.9). The main reason cited for practising FGC was religious obligation. Qualitative findings also showed that religion was a strong motivating factor for the practice and its continuation, besides culture and harm reduction. Although most Muslim doctors performed type IV FGC, a substantial number performed type I. Respondents who were women (adjusted odds ratio [aOR] 4.4, 95% CI 1.9-10.0; P ≤ 0.001), who owned a clinic (aOR 30.7, 95% CI 12.0-78.4; P ≤ 0.001) or jointly owned a clinic (aOR 7.61, 95% CI 3.2-18.1; P ≤ 0.001), who thought that FGC was legal in Malaysia (aOR 2.09, 95% CI 1.02-4.3; P = 0.04), and who were encouraged by religion (aOR 2.25, 95% CI 3.2-18.1; P = 0.036) and thought that FGC should continue (aOR 3.54, 95% CI 1.25-10.04; P = 0.017) were more likely to practise FGC. The main limitations of the study were the small sample size and the low response rate.
CONCLUSIONS: In this study, we found that many of the Muslim doctors were unaware of the legal and international stand against FGC, and many wanted the practice to continue. It is a concern that type IV FGC carried out by traditional midwives may be supplanted and exacerbated by type I FGC performed by doctors, calling for strong and urgent action by the Malaysian medical authorities.
METHODS AND FINDINGS: We conducted 30 semistructured interviews with health policy-makers, health service providers, and other experts working in the United Nations (n = 6), ministries and public health (n = 5), international (n = 9) and national civil society (n = 7), and academia (n = 3) based in Indonesia (n = 6), Malaysia (n = 10), Myanmar (n = 6), and Thailand (n = 8). Data were analysed thematically using deductive and inductive coding. Interviewees described the cumulative nature of health risks at each migratory phase. Perceived barriers to addressing migrants' cumulative health needs were primarily financial, juridico-political, and sociocultural, whereas key facilitators were many health workers' humanitarian stance and positive national commitment to pursuing universal health coverage (UHC). Across all countries, financial constraints were identified as the main challenges in addressing the comprehensive health needs of refugees and asylum seekers. Participants recommended regional and multisectoral approaches led by national governments, recognising refugee and asylum-seeker contributions, and promoting inclusion and livelihoods. The main study limitations were that we were unable to include migrant voices and that we did not capture the views of professionals not already interested in migrant health.
CONCLUSIONS: To our knowledge, this is one of the first qualitative studies to investigate the health concerns and barriers to access among migrants experiencing forced displacement, particularly refugees and asylum seekers, in Southeast Asia. Findings provide practical new insights with implications for informing policy and practice. Overall, sociopolitical inclusion of forcibly displaced populations remains difficult in these four countries despite their significant contributions to host-country economies.
METHODS AND FINDINGS: A search using Ovid MEDLINE and Embase (inception to 22 September 2017) was initially conducted to identify studies on severe Plasmodium falciparum malaria that included information on treatment delay, such as fever duration. Studies identified included 5 case-control and 8 other observational clinical studies of SM and UM cases. Risk of bias was assessed using the Newcastle-Ottawa scale, and all studies were ranked as 'Good', scoring ≥7/10. Individual-patient data (IPD) were pooled from 13 studies of 3,989 (94.1% aged <15 years) SM patients and 5,780 (79.6% aged <15 years) UM cases in Benin, Malaysia, Mozambique, Tanzania, The Gambia, Uganda, Yemen, and Zambia. Definitions of SM were standardised across studies to compare treatment delay in patients with UM and different SM phenotypes using age-adjusted mixed-effects regression. The odds of any SM phenotype were significantly higher in children with longer delays between initial symptoms and arrival at the health facility (odds ratio [OR] = 1.33, 95% CI: 1.07-1.64 for a delay of >24 hours versus ≤24 hours; p = 0.009). Reported illness duration was a strong predictor of presenting with severe malarial anaemia (SMA) in children, with an OR of 2.79 (95% CI: 1.92-4.06; p < 0.001) for a delay of 2-3 days and 5.46 (95% CI: 3.49-8.53; p < 0.001) for a delay of >7 days, compared with receiving treatment within 24 hours from symptom onset. We estimate that 42.8% of childhood SMA cases and 48.5% of adult SMA cases in the study areas would have been averted if all individuals had been able to access treatment within the first day of symptom onset, if the association is fully causal. In studies specifically recording onset of nonsevere symptoms, long treatment delay was moderately associated with other SM phenotypes (OR [95% CI] >3 to ≤4 days versus ≤24 hours: cerebral malaria [CM] = 2.42 [1.24-4.72], p = 0.01; respiratory distress syndrome [RDS] = 4.09 [1.70-9.82], p = 0.002).
In addition to unmeasured confounding, which is commonly present in observational studies, a key limitation is that many severe cases and deaths occur outside healthcare facilities in endemic countries, where the effect of delayed or no treatment is difficult to quantify.
CONCLUSIONS: Our results quantify the relationship between rapid access to treatment and reduced risk of severe disease, which was particularly strong for SMA. There was some evidence that progression to other severe phenotypes may also be prevented by prompt treatment, though these associations were weaker, which may reflect selection bias, limited sample sizes, or differences in underlying pathology. These findings may help assess the impact of interventions that improve access to treatment.
METHODS AND FINDINGS: This is a retrospective cohort study of all adult people living with HIV (PLWH) incarcerated in Connecticut, US, during the period January 1, 2007, to December 31, 2011, and observed through December 31, 2014 (n = 1,094). Most cohort participants were unmarried (83.7%) men (77.0%) who were black or Hispanic (78.1%) and acquired HIV from injection drug use (72.6%). Prison-based pharmacy and custody databases were linked with community HIV surveillance monitoring and case management databases. Post-release retention in care (RIC) declined steadily over 3 years of follow-up (67.2% retained for year 1, 51.3% retained for years 1-2, and 42.5% retained for years 1-3). Individuals who were re-incarcerated were more likely than those who were not to meet RIC criteria (48% versus 34%; p < 0.001) but less likely to have viral suppression (VS) (72% versus 81%; p = 0.048). Using multivariable logistic regression models (individual-level analysis for 1,001 individuals after excluding 93 deaths), both sustained RIC and VS at 3 years post-release were independently associated with older age (RIC: adjusted odds ratio [AOR] = 1.61, 95% CI = 1.22-2.12; VS: AOR = 1.37, 95% CI = 1.06-1.78), having health insurance (RIC: AOR = 2.15, 95% CI = 1.60-2.89; VS: AOR = 2.01, 95% CI = 1.53-2.64), and receiving an increased number of transitional case management visits. The same factors were significant when we assessed RIC and VS outcomes in each 6-month period using generalized estimating equations (for 1,094 individuals contributing 6,227 6-month periods prior to death or censoring). Additionally, receipt of antiretroviral therapy during incarceration (RIC: AOR = 1.33, 95% CI = 1.07-1.65; VS: AOR = 1.91, 95% CI = 1.56-2.34), early linkage to care post-release (RIC: AOR = 2.64, 95% CI = 2.03-3.43; VS: AOR = 1.79, 95% CI = 1.45-2.21), and absolute time and proportion of follow-up time spent re-incarcerated were highly correlated with better treatment outcomes.
Limited data were available on changes over time in injection drug use or other substance use disorders, psychiatric disorders, or housing status.
CONCLUSIONS: In a large cohort of criminal justice (CJ)-involved PLWH with a 3-year post-release evaluation, RIC diminished significantly over time, but was associated with HIV care during incarceration, health insurance, case management services, and early linkage to care post-release. While re-incarceration and conditional release provide opportunities to engage in care, reducing recidivism and supporting community-based RIC efforts are key to improving longitudinal treatment outcomes among CJ-involved PLWH.