METHODS: The HIV-CAUSAL Collaboration consisted of 12 cohorts from the United States and Europe of HIV-positive, ART-naive, AIDS-free individuals aged ≥18 years with baseline CD4 cell count and HIV RNA levels followed up from 1996 through 2007. We estimated hazard ratios (HRs) for cART versus no cART, adjusted for time-varying CD4 cell count and HIV RNA level via inverse probability weighting.
RESULTS: Of 65 121 individuals, 712 developed tuberculosis over 28 months of median follow-up (incidence, 3.0 cases per 1000 person-years). The HR for tuberculosis for cART versus no cART was 0.56 (95% confidence interval [CI], 0.44-0.72) overall, 1.04 (95% CI, 0.64-1.68) for individuals aged >50 years, and 1.46 (95% CI, 0.70-3.04) for people with a CD4 cell count of <50 cells/μL. Compared with people who had not started cART, HRs differed by time since cART initiation: 1.36 (95% CI, 0.98-1.89) for initiation <3 months ago and 0.44 (95% CI, 0.34-0.58) for initiation ≥3 months ago. Compared with people who had not initiated cART, HRs <3 months after cART initiation were 0.67 (95% CI, 0.38-1.18), 1.51 (95% CI, 0.98-2.31), and 3.20 (95% CI, 1.34-7.60) for people <35, 35-50, and >50 years old, respectively, and 2.30 (95% CI, 1.03-5.14) for people with a CD4 cell count of <50 cells/μL.
CONCLUSIONS: Tuberculosis incidence decreased after cART initiation, but not among people >50 years old or with CD4 cell counts of <50 cells/μL. Despite the overall decrease in tuberculosis incidence, the increased rate during the first 3 months of cART suggests unmasking immune reconstitution inflammatory syndrome (IRIS).
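The inverse probability weighting described in the Methods can be illustrated with a minimal sketch. All numbers below are hypothetical; the study estimated treatment probabilities from models of time-varying CD4 cell count and HIV RNA level.

```python
# Toy sketch of stabilized inverse probability weights for a binary
# treatment (cART vs no cART). Probabilities are assumed for illustration.

def stabilized_weight(treated, p_marginal, p_conditional):
    """w = P(A=a) / P(A=a | L) for the treatment actually received."""
    if treated:
        return p_marginal / p_conditional
    return (1 - p_marginal) / (1 - p_conditional)

# Assumed marginal probability of starting cART in the cohort.
P_TREAT = 0.40

# (treated?, P(treat | CD4, HIV RNA)) for three hypothetical subjects.
subjects = [(True, 0.80), (False, 0.10), (True, 0.25)]

weights = [stabilized_weight(t, P_TREAT, p) for t, p in subjects]
# Subjects whose observed treatment was unlikely given their covariates
# get up-weighted, creating a pseudo-population in which treatment is
# independent of the measured time-varying confounders.
print([round(w, 2) for w in weights])  # → [0.5, 0.67, 1.6]
```

Hazard ratios are then estimated in the weighted pseudo-population, which is what allows adjustment for confounders that are themselves affected by prior treatment.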
OBJECTIVE: To assess the association of nuts with mortality and cardiovascular disease (CVD).
METHODS: The Prospective Urban Rural Epidemiology study is a large multinational prospective cohort study of adults aged 35-70 y from 16 low-, middle-, and high-income countries on 5 continents. Nut intake (tree nuts and ground nuts) was measured at the baseline visit, using country-specific validated FFQs. The primary outcome was a composite of mortality or major cardiovascular event [nonfatal myocardial infarction (MI), stroke, or heart failure].
RESULTS: We followed 124,329 participants (mean age 50.7 y, SD 10.2; 41.5% male) for a median of 9.5 y. We recorded 10,928 composite events [deaths (n = 8,662) or major cardiovascular events (n = 5,979)]. Higher nut intake (>120 g per wk compared with <30 g per mo) was associated with a lower risk of the primary composite outcome of mortality or major cardiovascular event [multivariate HR (mvHR): 0.88; 95% CI: 0.80, 0.96; P-trend = 0.0048]. We observed significant reductions in total (mvHR: 0.77; 95% CI: 0.69, 0.87; P-trend <0.0001), cardiovascular (mvHR: 0.72; 95% CI: 0.56, 0.92; P-trend = 0.048), and noncardiovascular mortality (mvHR: 0.82; 95% CI: 0.70, 0.96; P-trend = 0.0046), with a trend toward reduced cancer mortality (mvHR: 0.81; 95% CI: 0.65, 1.00; P-trend = 0.081). No significant associations of nuts were seen with major CVD (mvHR: 0.91; 95% CI: 0.81, 1.02; P-trend = 0.14), stroke (mvHR: 0.98; 95% CI: 0.84, 1.14; P-trend = 0.76), or MI (mvHR: 0.86; 95% CI: 0.72, 1.04; P-trend = 0.29).
CONCLUSIONS: Higher nut intake was associated with lower mortality risk from both cardiovascular and noncardiovascular causes in low-, middle-, and high-income countries.
DATA DESCRIPTION: We conducted cross-sectional online surveys in six countries from March 2020 to April 2021. By the end of June 2021, six waves of surveys will have been completed for the United States and China, and four waves for the remaining countries. A common set of questions was used for all countries; however, some questions were adapted to reflect local situations, and some were designed specifically for individual countries to capture different COVID-19 mitigation actions. Participants were asked about their adherence to countermeasures, risk perceptions, and acceptance of a hypothetical vaccine for COVID-19.
METHODS: Five graph models were fit using data from 1574 people who inject drugs in Hartford, CT, USA. Based on goodness of fit, we selected a degree-corrected stochastic block model to represent the injection network. We simulated transmission of HCV and HIV through this network with varying levels of HCV treatment coverage (0%, 3%, 6%, 12%, or 24%) and varying baseline HCV prevalence among people who inject drugs (30%, 60%, 75%, or 85%). We compared the effectiveness of seven treatment-as-prevention strategies on reducing HCV prevalence over 10 years and 20 years versus no treatment. The strategies consisted of treatment assigned to either a randomly chosen individual who injects drugs or to an individual with the highest number of injection partners. Additional strategies explored the effects of treating either none, half, or all of the injection partners of the selected individual, as well as a strategy based on respondent-driven recruitment into treatment.
FINDINGS: Our model estimates show that at the highest baseline HCV prevalence in people who inject drugs (85%), expansion of treatment coverage does not substantially reduce HCV prevalence for any treatment-as-prevention strategy. However, when baseline HCV prevalence is 60% or lower, treating more than 120 (12%) individuals per 1000 people who inject drugs per year would probably eliminate HCV within 10 years. On average, assigning treatment randomly to individuals who inject drugs is better than targeting individuals with the most injection partners. Treatment-as-prevention strategies that treat additional network members are among the best performing strategies and can enhance less effective strategies that target the degree (ie, the highest number of injection partners) within the network.
INTERPRETATION: Successful HCV treatment as prevention should incorporate the baseline HCV prevalence and will achieve the greatest benefit when coverage is sufficiently expanded.
FUNDING: National Institute on Drug Abuse.
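The network simulation described in the Methods can be sketched in simplified form. This sketch substitutes a plain random graph for the study's degree-corrected stochastic block model, uses a toy population size, and assumes an illustrative per-partner transmission probability; it shows only the random-assignment treatment-as-prevention strategy.

```python
# Simplified susceptible-infected simulation of HCV on an injection
# network, with random treatment-as-prevention. All parameters are
# illustrative stand-ins for the published model.
import random

random.seed(7)

N = 200                 # people who inject drugs (study used 1574)
EDGE_P = 0.03           # random-graph stand-in for the fitted SBM
BASELINE_PREV = 0.60    # initial HCV prevalence
TREAT_PER_YEAR = 0.12   # fraction treated (and cured) each year
TRANSMIT_P = 0.05       # assumed per-partner per-year transmission risk
YEARS = 10

# Build an undirected injection-partner network.
neighbors = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < EDGE_P:
            neighbors[i].add(j)
            neighbors[j].add(i)

infected = set(random.sample(range(N), int(BASELINE_PREV * N)))

for year in range(YEARS):
    # Transmission: each infected person may infect susceptible partners.
    new = {j for i in infected for j in neighbors[i]
           if j not in infected and random.random() < TRANSMIT_P}
    infected |= new
    # Treatment as prevention: randomly chosen individuals are cured
    # (and return to the susceptible pool, so reinfection is possible).
    for person in random.sample(range(N), int(TREAT_PER_YEAR * N)):
        infected.discard(person)

prevalence = len(infected) / N
print(f"HCV prevalence after {YEARS} years: {prevalence:.2f}")
```

The study's targeted strategies would replace the random `random.sample` treatment selection with selection by degree (number of injection partners) or by recruitment through partners.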
METHODS: Using empirical data from Hartford, Connecticut, we deployed a stochastic block model to simulate an injection network of 1574 PWID. We used a susceptible-infected model for HCV and human immunodeficiency virus to evaluate the effectiveness of several HCV TasP strategies, including in combination with OAT and SSP scale-up, over 20 years.
RESULTS: At the highest HCV prevalence (75%), when OAT coverage is increased from 10% to 40%, combined with HCV treatment of 10% per year and SSP scale-up to 40%, the time to achieve microelimination is reduced from 18.4 to 11.6 years. At the current HCV prevalence (60%), HCV TasP strategies as low as 10% coverage per year may achieve HCV microelimination within 10 years, with minimal impact from additional OAT scale-up. Strategies based on mass initial HCV treatment (50 per 100 PWID the first year followed by 5 per 100 PWID thereafter) were most effective in settings with HCV prevalence of 60% or lower.
CONCLUSIONS: Scale-up of HCV TasP is the most effective strategy for microelimination of HCV. OAT scale-up, however, may be synergistic toward achieving microelimination goals when HCV prevalence exceeds 60% and when HCV treatment coverage is 10 per 100 PWID per year or lower.
METHODS: The American Heart Association, through its Statistics Committee, continuously monitors and evaluates sources of data on heart disease and stroke in the United States to provide the most current information available in the annual Statistical Update. The 2021 Statistical Update is the product of a full year's worth of effort by dedicated volunteer clinicians and scientists, committed government professionals, and American Heart Association staff members. This year's edition includes data on the monitoring and benefits of cardiovascular health in the population, an enhanced focus on social determinants of health, adverse pregnancy outcomes, vascular contributions to brain health, the global burden of cardiovascular disease, and further evidence-based approaches to changing behaviors related to cardiovascular disease.
RESULTS: Each of the 27 chapters in the Statistical Update focuses on a different topic related to heart disease and stroke statistics.
CONCLUSIONS: The Statistical Update represents a critical resource for the lay public, policy makers, media professionals, clinicians, health care administrators, researchers, health advocates, and others seeking the best available data on these factors and conditions.
METHODS: This study used data from the Global COVID-19 Index provided by PEMANDU Associates. The sample, representing 161 countries, comprised the number of confirmed cases, deaths, stringency indices, population density and GNI per capita (USD). Correlation matrices were computed to reveal the association between the variables at three time points: day-30, day-60 and day-90. Three separate principal component analyses were computed for similar time points, and several standardized plots were produced.
RESULTS: Confirmed cases and deaths due to COVID-19 showed positive but weak correlation with stringency and GNI per capita. Through principal component analysis, the first two principal components captured close to 70% of the variance of the data. The first component can be viewed as the severity of the COVID-19 surge in countries, whereas the second component largely corresponded to population density, followed by GNI per capita of countries. Multivariate visualization of the two dominating principal components provided a standardized comparison of the situation in the 161 countries, performed on day-30, day-60 and day-90 since the first confirmed cases in countries worldwide.
CONCLUSION: Visualization of the global spread of COVID-19 showed the unequal severity of the pandemic across continents and over time. Distinct patterns in clusters of countries, which separated many European countries from those in Africa, suggested a contrast in terms of stringency measures and wealth of a country. The African continent appeared to fare better in terms of the COVID-19 pandemic and the burden of mortality in the first 90 days. A noticeable worsening trend was observed in several countries in the same relative time frame of the disease's first 90 days, especially in the United States of America.
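The correlation-then-PCA pipeline in the Methods can be sketched with stdlib Python. The country values below are hypothetical, and the variance-explained step uses the closed-form eigenvalues of a 2-variable correlation matrix rather than the study's full 5-variable decomposition.

```python
# One cell of the study's correlation matrix, plus the variance explained
# by the first principal component in the two-variable case.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-country values (confirmed cases, deaths) at one time point.
cases = [120, 450, 80, 900, 300]
deaths = [3, 10, 2, 25, 7]
r = pearson(cases, deaths)

# For two standardized variables, the correlation matrix [[1, r], [r, 1]]
# has eigenvalues 1 + r and 1 - r, so the first principal component
# explains (1 + r) / 2 of the total variance.
explained = (1 + abs(r)) / 2
print(f"r = {r:.2f}, first PC explains {explained:.1%} of the variance")
```

With all five indicators (cases, deaths, stringency, density, GNI per capita), the same idea extends to a 5×5 correlation matrix whose leading eigenvectors give the components plotted in the study.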
METHODS AND RESULTS: We queried the Centers for Disease Control and Prevention's Wide-Ranging Online Data for Epidemiologic Research database for patients with sarcoidosis aged ≥25 years from 1999 to 2020, including deaths in which a disease of the circulatory system other than ischemic heart disease was listed as the underlying cause of death and sarcoidosis was stated as a contributing cause of death. We calculated the age-adjusted mortality rate (AAMR) per 1 million individuals and determined trends over time by estimating the annual percentage change using the Joinpoint Regression Program. Subgroup analyses were performed on the basis of demographic and geographic factors. In the 22-year study period, 3301 cardiovascular deaths with comorbid sarcoidosis were identified. The AAMR from cardiovascular deaths with comorbid sarcoidosis increased from 0.53 (95% CI, 0.43-0.65) per 1 million individuals in 1999 to 0.87 (95% CI, 0.75-0.98) per 1 million individuals in 2020. Overall, women recorded a higher AAMR than men (0.77 [95% CI, 0.74-0.81] versus 0.58 [95% CI, 0.55-0.62]). People with Black ancestry had a higher AAMR than people with White ancestry (3.23 [95% CI, 3.07-3.39] versus 0.39 [95% CI, 0.37-0.41]). The highest percentage of deaths occurred in the 55-to-64-year age group for both men (23.11%) and women (21.81%). Among US census regions, the South had the highest AAMR from cardiovascular deaths with comorbid sarcoidosis (0.78 [95% CI, 0.74-0.82]).
CONCLUSIONS: The increase of AAMR from cardiovascular deaths with comorbid sarcoidosis and higher cardiovascular mortality rates among adults aged 55 to 64 years highlight the importance of early screening for cardiovascular diseases among patients with sarcoidosis.
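The age-adjusted mortality rate used above is computed by direct standardization: each age group's crude rate is weighted by that group's share of a standard population (CDC WONDER uses the 2000 US standard population). A minimal sketch with hypothetical counts and weights:

```python
# Direct standardization for an age-adjusted mortality rate (AAMR).
# Deaths, populations, and standard-population weights are hypothetical.

# (deaths, population, standard-population weight) per age group
age_groups = [
    (10, 2_000_000, 0.35),   # 25-44 y
    (40, 1_500_000, 0.40),   # 45-64 y
    (50,   500_000, 0.25),   # 65+ y
]

# Weighted sum of age-specific rates, expressed per 1 million individuals.
aamr_per_million = sum(
    (deaths / pop) * 1_000_000 * weight
    for deaths, pop, weight in age_groups
)
print(f"AAMR: {aamr_per_million:.2f} per 1 million")
```

Standardizing this way removes differences driven purely by age structure, which is what makes the comparisons across sexes, ancestries, and census regions meaningful.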
METHODS AND RESULTS: We queried the Centers for Disease Control and Prevention's Wide-Ranging Online Data for Epidemiologic Research database for patients ≥15 years old from 1999 to 2020. VHD and its subtypes were listed as the underlying cause of death. We calculated the age-adjusted mortality rate (AAMR) per 100 000 individuals and determined overall trends by estimating the average annual percent change using the Joinpoint regression program. Subgroup analyses were performed based on demographic and geographic factors. In the 22-year study period, there were 446 096 VHD deaths, accounting for 0.80% of all-cause mortality (56 014 102 people) and 2.38% of total cardiovascular mortality (18 759 451 people). Aortic stenosis accounted for the largest share of VHD-related deaths in both male (109 529, 61.74%) and female (166 930, 62.13%) populations. The AAMR of VHD declined from 8.4 (95% CI, 8.2-8.5) to 6.6 (95% CI, 6.5-6.7) per 100 000 population. Similar decreasing AAMR trends were also seen for the VHD subtypes. Men recorded higher AAMRs for aortic stenosis and aortic regurgitation, whereas women had higher AAMRs for mitral stenosis and mitral regurgitation. Mitral regurgitation had the largest change in average annual percent change in AAMR.
CONCLUSIONS: The mortality rate of VHD among the US population has declined over the past 2 decades. This highlights the likely efficacy of increasing surveillance and advancement in the management of VHD, resulting in improved outcomes.
Objective: To determine whether rates of gestational diabetes among individuals at first live birth changed from 2011 to 2019 and how these rates differ by race and ethnicity in the US.
Design, Setting, and Participants: Serial cross-sectional analysis using National Center for Health Statistics data for 12 610 235 individuals aged 15 to 44 years with singleton first live births from 2011 to 2019 in the US.
Exposures: Gestational diabetes data stratified by the following race and ethnicity groups: Hispanic/Latina (including Central and South American, Cuban, Mexican, and Puerto Rican); non-Hispanic Asian/Pacific Islander (including Asian Indian, Chinese, Filipina, Japanese, Korean, and Vietnamese); non-Hispanic Black; and non-Hispanic White.
Main Outcomes and Measures: The primary outcomes were age-standardized rates of gestational diabetes (per 1000 live births) and respective mean annual percent change and rate ratios (RRs) of gestational diabetes in non-Hispanic Asian/Pacific Islander (overall and in subgroups), non-Hispanic Black, and Hispanic/Latina (overall and in subgroups) individuals relative to non-Hispanic White individuals (referent group).
Results: Among the 12 610 235 included individuals (mean [SD] age, 26.3 [5.8] years), the overall age-standardized gestational diabetes rate significantly increased from 47.6 (95% CI, 47.1-48.0) to 63.5 (95% CI, 63.1-64.0) per 1000 live births from 2011 to 2019, a mean annual percent change of 3.7% (95% CI, 2.8%-4.6%) per year. Of the 12 610 235 participants, 21% were Hispanic/Latina (2019 gestational diabetes rate, 66.6 [95% CI, 65.6-67.7]; RR, 1.15 [95% CI, 1.13-1.18]), 8% were non-Hispanic Asian/Pacific Islander (2019 gestational diabetes rate, 102.7 [95% CI, 100.7-104.7]; RR, 1.78 [95% CI, 1.74-1.82]), 14% were non-Hispanic Black (2019 gestational diabetes rate, 55.7 [95% CI, 54.5-57.0]; RR, 0.97 [95% CI, 0.94-0.99]), and 56% were non-Hispanic White (2019 gestational diabetes rate, 57.7 [95% CI, 57.2-58.3]; referent group). Gestational diabetes rates were highest in Asian Indian participants (2019 gestational diabetes rate, 129.1 [95% CI, 100.7-104.7]; RR, 2.24 [95% CI, 2.15-2.33]). Among Hispanic/Latina participants, gestational diabetes rates were highest among Puerto Rican individuals (2019 gestational diabetes rate, 75.8 [95% CI, 71.8-79.9]; RR, 1.31 [95% CI, 1.24-1.39]). Gestational diabetes rates increased among all race and ethnicity subgroups and across all age groups.
Conclusions and Relevance: Among individuals with a singleton first live birth in the US from 2011 to 2019, rates of gestational diabetes increased across all racial and ethnic subgroups. Differences in absolute gestational diabetes rates were observed across race and ethnicity subgroups.
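The mean annual percent change reported above comes from a log-linear trend fit: if rates grow by a constant factor each year, the slope of log(rate) against year recovers that factor. The sketch below uses a plain least-squares fit and synthetic rates built from the abstract's 47.6 baseline and 3.7% growth figure; the study's actual estimation (and its confidence intervals) is more elaborate.

```python
# Annual percent change (APC) from a log-linear least-squares fit.
from math import exp, log

def mean_annual_percent_change(years, rates):
    """APC = 100 * (exp(slope) - 1), slope from OLS of log(rate) on year."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(log(r) for r in rates) / n
    slope = (sum((x - xbar) * (log(r) - ybar) for x, r in zip(years, rates))
             / sum((x - xbar) ** 2 for x in years))
    return 100 * (exp(slope) - 1)

# Synthetic rates growing exactly 3.7% per year recover APC = 3.7.
years = list(range(2011, 2020))
rates = [47.6 * 1.037 ** (y - 2011) for y in years]
print(round(mean_annual_percent_change(years, rates), 1))  # → 3.7
```

On real data the observed rates scatter around the fitted exponential trend, and the APC summarizes the average multiplicative change per year.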
METHODS: We searched three major databases (PubMed, Embase, and Lippincott Williams & Wilkins Journals@Ovid) for studies published up until 1 May 2013, without language restrictions. All study designs were included in this review. Studies were identified and retrieved by two independent authors.
RESULTS: Of 118 titles scanned, 14 duplicates were removed, and a total of 13 abstracts from all three databases were identified for full-text retrieval. From the full text, eight articles met the inclusion criteria for this systematic review. These articles showed acceptable quality based on our scoring system. Most of the studies indicated that temporary threshold shifts were much lower when subjects were exposed to a noise level of 85 dBA or lower.
CONCLUSIONS: There were greater temporary threshold shifts in subjects exposed under a 90 dBA limit than under an 85 dBA limit. These temporary threshold shifts may progress to permanent shifts over time. With 85 dBA as the permissible exposure limit, action to curtail noise exposure among employees would be taken earlier, and the prevalence of noise-induced hearing loss may therefore be reduced.