METHODS: Patients from the TREAT Asia HIV Observational Database (TAHOD) and the Australian HIV Observational Database (AHOD) receiving cART between 1999 and 2017 were included. Causes of death were verified by review of the standardized Cause of Death (CoDe) form designed by the D:A:D group. TAHOD sites were split into high/upper-middle-income and lower-middle-income country settings based on World Bank classifications, and cohorts were grouped as AHOD (all high-income sites), TAHOD-high (high/upper-middle-income countries) and TAHOD-low (lower-middle-income countries). Competing risk regression was used to analyse factors associated with AIDS-related and non-AIDS-related mortality.
RESULTS: Of 10,386 patients, 522 died: 187 from AIDS-related and 335 from non-AIDS-related causes. The overall incidence rate of death during follow-up was 0.28 per 100 person-years (/100 PYS) for AIDS-related and 0.51/100 PYS for non-AIDS-related causes. The incidence rate of non-AIDS mortality decreased from 0.78/100 PYS in the 2003 to 2007 year group to 0.37/100 PYS in the 2013 to 2017 year group (p
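The crude rates reported here are event counts divided by person-time at risk, scaled to 100 person-years. A minimal sketch (the person-year denominator below is back-calculated from the reported figures for illustration, not taken from the cohort data):

```python
def incidence_per_100py(events, person_years):
    """Crude incidence rate per 100 person-years of follow-up."""
    return 100.0 * events / person_years

# Illustration only: ~66,800 PY is back-calculated from the reported
# 187 AIDS-related deaths and the 0.28/100 PYS rate.
print(round(incidence_per_100py(187, 66_800), 2))  # 0.28
```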
METHODS: Adults living with HIV enrolled in a regional observational cohort in Asia who had initiated combination antiretroviral therapy (cART) were included in the analysis. Factors associated with new TB diagnoses after cohort entry and survival after cART initiation were analysed using Cox regression, stratified by site.
RESULTS: A total of 7355 patients from 12 countries enrolled into the cohort between 2003 and 2016 were included in the study. There were 368 reported cases of TB after cohort entry, with an incidence rate of 0.99 per 100 person-years (/100 pys). Multivariate analyses adjusted for viral load (VL), CD4 count, body mass index (BMI) and cART duration showed that cotrimoxazole (CTX) reduced the hazard for new TB infection by 28% (HR 0.72, 95% CI 0.56, 0.93). Mortality after cART initiation was 0.85/100 pys, with a median follow-up time of 4.63 years. Predictors of survival included age, female sex, hepatitis C co-infection, TB diagnosis, HIV VL, CD4 count and BMI.
CONCLUSIONS: CTX was associated with a reduction in the hazard for new TB infection but did not impact survival in our Asian cohort. The potential preventive effect of CTX against TB during periods of severe immunosuppression should be further explored.
METHODS: Data on children with perinatally acquired HIV aged <18 years on first-line, non-nucleoside reverse transcriptase inhibitor-based cART with viral suppression (two consecutive pVL <400 copies/mL over a six-month period) were included from a regional cohort study; those exposed to prior mono- or dual antiretroviral treatment were excluded. Frequency of pVL monitoring was determined at the site level based on the median rate of pVL measurement: annual, 0.75 to 1.5 tests/patient/year; semi-annual, >1.5 tests/patient/year. Treatment failure was defined as virologic failure (two consecutive pVL >1000 copies/mL), change of antiretroviral drug class, or death. Baseline was the date of the second consecutive pVL <400 copies/mL. Competing risk regression models were used to identify predictors of treatment failure.
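The site-level monitoring categories above amount to thresholds on the median pVL testing rate. A minimal sketch (the function name and the handling of rates below 0.75 tests/patient/year, which fell outside both study categories, are our assumptions):

```python
def pvl_monitoring_category(median_tests_per_patient_year):
    """Classify a site by its median pVL testing rate, per the
    thresholds above: annual = 0.75 to 1.5, semi-annual = >1.5
    tests/patient/year."""
    if median_tests_per_patient_year > 1.5:
        return "semi-annual"
    if median_tests_per_patient_year >= 0.75:
        return "annual"
    return "less than annual"  # assumed: outside both study categories

print(pvl_monitoring_category(1.2))  # annual
print(pvl_monitoring_category(2.0))  # semi-annual
```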
RESULTS: During January 2008 to March 2015, there were 1220 eligible children from 10 sites that performed at least annual pVL monitoring; 1042 (85%) and 178 (15%) were from sites performing annual (n = 6) and semi-annual (n = 4) pVL monitoring, respectively. Pre-cART, 675 children (55%) had World Health Organization clinical stage 3 or 4 disease, the median nadir CD4 percentage was 9%, and the median pVL was 5.2 log10 copies/mL. At baseline, the median age was 9.2 years, 64% were on nevirapine-based regimens, the median cART duration was 1.6 years, and the median CD4 percentage was 26%. Over the follow-up period, 258 (25%) children living with HIV (CLWH) with annual and 40 (23%) with semi-annual pVL monitoring developed treatment failure, corresponding to incidence rates of 5.4 (95% CI: 4.8 to 6.1) and 4.3 (95% CI: 3.1 to 5.8) per 100 patient-years of follow-up, respectively (p = 0.27). In multivariable analyses, the frequency of pVL monitoring was not associated with treatment failure (adjusted hazard ratio: 1.12; 95% CI: 0.80 to 1.59).
CONCLUSIONS: Annual compared to semi-annual pVL monitoring was not associated with an increased risk of treatment failure in our cohort of virally suppressed children with perinatally acquired HIV on first-line NNRTI-based cART.
METHODS: Data from two regional observational cohort databases were analyzed for trends in the median CD4 cell count at ART initiation and the proportion of late ART initiation (CD4 cell count <200 cells/mm3 or prior AIDS diagnosis). Predictors of late ART initiation and mortality were determined.
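The late-initiation definition used here is a simple two-condition check at ART start. A minimal sketch (argument names are illustrative):

```python
def is_late_art_initiation(cd4_at_initiation, prior_aids_diagnosis):
    """Late ART initiation per the definition above: CD4 count
    <200 cells/mm3 at ART initiation or a prior AIDS diagnosis."""
    return cd4_at_initiation < 200 or prior_aids_diagnosis

print(is_late_art_initiation(150, False))  # True
print(is_late_art_initiation(350, True))   # True
print(is_late_art_initiation(350, False))  # False
```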
RESULTS: A total of 2737 HIV-positive ART-naïve patients from 22 sites in 13 Asian countries and territories were eligible. The overall median (IQR) CD4 cell count at ART initiation was 150 (46-241) cells/mm3. Median CD4 cell counts at ART initiation increased over time, from a low point of 115 cells/mm3 in 2008 to a peak of 302 cells/mm3 after 2011 (p for trend 0.002). The proportion of patients with late ART initiation significantly decreased over time from 79.1% before 2007 to 36.3% after 2011 (p for trend <0.001). Factors associated with late ART initiation were year of ART initiation (e.g. 2010 vs. before 2007; OR 0.40, 95% CI 0.27-0.59; p<0.001), sex (male vs. female; OR 1.51, 95% CI 1.18-1.93; p=0.001) and HIV exposure risk (heterosexual vs. homosexual; OR 1.66, 95% CI 1.24-2.23; p=0.001 and intravenous drug use vs. homosexual; OR 3.03, 95% CI 1.77-5.21; p<0.001). Factors associated with mortality after ART initiation were late ART initiation (HR 2.13, 95% CI 1.19-3.79; p=0.010), sex (male vs. female; HR 2.12, 95% CI 1.31-3.43; p=0.002), age (≥51 vs. ≤30 years; HR 3.91, 95% CI 2.18-7.04; p<0.001) and hepatitis C serostatus (positive vs. negative; HR 2.48, 95% CI 1.-4.36; p=0.035).
CONCLUSIONS: The median CD4 cell count at ART initiation among Asian patients has increased significantly over time, but the proportion of patients initiating ART late remains substantial. ART initiation at higher CD4 cell counts remains a challenge. Strategic interventions to increase earlier diagnosis of HIV infection and prompt more rapid linkage to ART must be implemented.
METHODS: We adapted a dynamic model of HIV transmission among MSM/TW in Lima to incorporate stimulant use and its associated increases in HIV risk and in suicide and CVD mortality. For the 6% to 24% of MSM/TW using stimulants (mostly cocaine), we modelled an increased risk of unprotected anal sex (RR = 1.35 [95%CI: 1.17 to 1.57]) obtained from local data, and increased risks of suicide (SMR = 6.26 [95%CI: 2.84 to 13.80]) and CVD (SMR = 1.83 [95%CI: 0.39 to 8.57]) mortality associated with cocaine use based on a global systematic review. We estimated the proportion of health harms occurring among MSM/TW who use stimulants in the next year (01-2020 to 01-2021). We also investigated the 10-year impact (01-2020 to 01-2030) of: (1) PrEP prioritization for stimulant-using MSM/TW compared to random allocation, and (2) integrating PrEP with a theoretical intervention halving stimulant-associated risk.
RESULTS: MSM/TW in Lima will experience high HIV incidence, suicide mortality and CVD mortality (1.6/100 py, 0.018/100 py and 0.13/100 py, respectively) in 2020. Despite stimulant-using MSM/TW comprising an estimated 9.5% (95%CI: 7.8 to 11.5) of all MSM/TW, in the next year 11% (95%CI, i.e. 2.5th to 97.5th percentiles: 10% to 13%) of new HIV infections, 39% (95%CI: 18% to 60%) of suicides and 15% (95%CI: 3% to 44%) of CVD deaths could occur among this group. Scaling up PrEP among all stimulant-using MSM/TW could prevent 19% (95%CI: 11% to 31%) more HIV infections over 10 years compared to random allocation. Integrating PrEP with an intervention to halve stimulant-associated risks could reduce new HIV infections by 20% (95%CI: 10% to 37%), suicide deaths by 14% (95%CI: 5% to 27%) and CVD deaths by 3% (95%CI: 0% to 16%) over a decade.
CONCLUSIONS: MSM/TW who use stimulants experience a disproportionate burden of health harms. Prioritizing PrEP based on stimulant use, in addition to sexual behaviour/gender identity criteria, could increase its impact. Integrated substance use, harm reduction, mental health and HIV care among MSM/TW is needed.
METHODS: CLHIV aged <18 years, who were on first-line cART for ≥12 months and had virological suppression (two consecutive plasma viral load [pVL] <50 copies/mL), were included. Those who started treatment with mono/dual antiretroviral therapy, had a history of treatment interruption >14 days, or received treatment and care at sites with a pVL lower limit of detection >50 copies/mL were excluded. Low-level viraemia (LLV) was defined as a pVL of 50 to 1000 copies/mL, and virological failure (VF) as a single pVL >1000 copies/mL. Baseline was the time of the second consecutive pVL <50 copies/mL.
METHODS: We describe TB diagnosis and screening practices of pediatric antiretroviral treatment (ART) programs in Africa, Asia, the Caribbean, and Central and South America. We used web-based questionnaires to collect data on ART programs and patients seen from March to July 2012. Forty-three ART programs treating children in 23 countries participated in the study.
RESULTS: Sputum microscopy and chest radiograph were available at all programs, mycobacterial culture in 40 (93%) sites, gastric aspiration in 27 (63%), induced sputum in 23 (54%), and Xpert MTB/RIF in 16 (37%) sites. Screening practices to exclude active TB before starting ART included contact history in 41 sites (84%), symptom screening in 38 (88%), and chest radiograph in 34 sites (79%). The use of diagnostic tools was examined among 146 children diagnosed with TB during the study period. Chest radiograph was used in 125 (86%) children, sputum microscopy in 76 (52%), induced sputum microscopy in 38 (26%), gastric aspirate microscopy in 35 (24%), culture in 25 (17%), and Xpert MTB/RIF in 11 (8%) children.
CONCLUSIONS: Induced sputum and Xpert MTB/RIF were infrequently available to diagnose childhood TB, and screening was largely based on symptom identification. There is an urgent need to improve the capacity of ART programs in low- and middle-income countries to exclude and diagnose TB in HIV-infected children.
Methods: Study end points were as follows: (1) a CD4 count <200 cells/mm3 followed by a CD4 count ≥200 cells/mm3 (transient CD4 <200); (2) CD4 count <200 cells/mm3 confirmed within 6 months (confirmed CD4 <200); and (3) a new or recurrent World Health Organization (WHO) stage 3 or 4 illness (clinical failure). Kaplan-Meier curves and Cox regression were used to evaluate rates and predictors of transient CD4 <200, confirmed CD4 <200, and clinical failure among virally suppressed children aged 5-15 years who were enrolled in the TREAT Asia Pediatric HIV Observational Database.
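The first two end points can be read as a rule over consecutive CD4 measurements. A minimal sketch that returns the first event in a trajectory (the 183-day window for "within 6 months" and the first-event-only simplification are our assumptions):

```python
def first_cd4_event(measurements, window_days=183):
    """Classify the first CD4 <200 end point in a trajectory, per the
    definitions above. measurements: (day, cd4) pairs sorted by day.
    Returns 'confirmed CD4 <200', 'transient CD4 <200', or None."""
    for (day, cd4), (next_day, next_cd4) in zip(measurements, measurements[1:]):
        if cd4 < 200:
            if next_cd4 < 200 and next_day - day <= window_days:
                return "confirmed CD4 <200"
            if next_cd4 >= 200:
                return "transient CD4 <200"
    return None

print(first_cd4_event([(0, 650), (120, 180), (210, 240)]))  # transient CD4 <200
print(first_cd4_event([(0, 650), (120, 150), (240, 160)]))  # confirmed CD4 <200
```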
Results: Data from 967 children were included in the analysis. At the time of confirmed viral suppression, median age was 10.2 years, 50.4% of children were female, and 95.4% were perinatally infected with HIV. Median CD4 cell count was 837 cells/mm3, and 54.8% of children were classified as having WHO stage 3 or 4 disease. In total, 18 transient CD4 <200 events, 2 confirmed CD4 <200 events, and 10 clinical failures occurred at rates of 0.73 (95% confidence interval [95% CI], 0.46-1.16), 0.08 (95% CI, 0.02-0.32), and 0.40 (95% CI, 0.22-0.75) events per 100 patient-years, respectively. CD4 <500 cells/mm3 at the time of viral suppression confirmation was associated with higher rates of both CD4 outcomes.
Conclusions: Regular CD4 testing may be unnecessary for virally suppressed children aged 5-15 years with CD4 ≥500 cells/mm3.
METHODS: Between June 2015 and August 2016, 50 HIV-positive TGW were recruited in Lima, Peru. Multivariable logistic regression was used to identify factors associated with viral suppression (<200 copies/mL) among the TGW.
RESULTS: Among TGW, 85% achieved viral suppression. Approximately half (54%) reported anal sex with more than five partners in the past 6 months, 38% reported sex work, 68% had not disclosed their HIV status to one or more of their partners, and 38% reported condomless sex with their last partner. The prevalence of alcohol use disorders was high (54%), and 38% reported use of drugs in the past year. Moderate-to-severe drug use significantly reduced odds of achieving viral suppression (adjusted odds ratio 0.69; 95% confidence interval: 0.48-0.98).
CONCLUSION: Our findings highlight the need for integrated treatment of substance use disorders within HIV care to increase viral suppression rates among TGW in Lima, Peru.
METHODS AND FINDINGS: We reviewed all GenBank submissions of HIV-1 reverse transcriptase sequences with or without protease and identified 287 studies published between March 1, 2000, and December 31, 2013, with more than 25 recently or chronically infected ARV-naïve individuals. These studies comprised 50,870 individuals from 111 countries. Each set of study sequences was analyzed for phylogenetic clustering and the presence of 93 surveillance drug-resistance mutations (SDRMs). The median overall TDR prevalence in sub-Saharan Africa (SSA), south/southeast Asia (SSEA), upper-income Asian countries, Latin America/Caribbean, Europe, and North America was 2.8%, 2.9%, 5.6%, 7.6%, 9.4%, and 11.5%, respectively. In SSA, there was a yearly 1.09-fold (95% CI: 1.05-1.14) increase in odds of TDR since national ARV scale-up attributable to an increase in non-nucleoside reverse transcriptase inhibitor (NNRTI) resistance. The odds of NNRTI-associated TDR also increased in Latin America/Caribbean (odds ratio [OR] = 1.16; 95% CI: 1.06-1.25), North America (OR = 1.19; 95% CI: 1.12-1.26), Europe (OR = 1.07; 95% CI: 1.01-1.13), and upper-income Asian countries (OR = 1.33; 95% CI: 1.12-1.55). In SSEA, there was no significant change in the odds of TDR since national ARV scale-up (OR = 0.97; 95% CI: 0.92-1.02). An analysis limited to sequences with mixtures at less than 0.5% of their nucleotide positions—a proxy for recent infection—yielded trends comparable to those obtained using the complete dataset. Four NNRTI SDRMs—K101E, K103N, Y181C, and G190A—accounted for >80% of NNRTI-associated TDR in all regions and subtypes. Sixteen nucleoside reverse transcriptase inhibitor (NRTI) SDRMs accounted for >69% of NRTI-associated TDR in all regions and subtypes. In SSA and SSEA, 89% of NNRTI SDRMs were associated with high-level resistance to nevirapine or efavirenz, whereas only 27% of NRTI SDRMs were associated with high-level resistance to zidovudine, lamivudine, tenofovir, or abacavir. 
Of 763 viruses with TDR in SSA and SSEA, 725 (95%) were genetically dissimilar; 38 (5%) formed 19 sequence pairs. Inherent limitations of this study are that some cohorts may not represent the broader regional population and that studies were heterogeneous with respect to duration of infection prior to sampling.
CONCLUSIONS: Most TDR strains in SSA and SSEA arose independently, suggesting that ARV regimens with a high genetic barrier to resistance combined with improved patient adherence may mitigate TDR increases by reducing the generation of new ARV-resistant strains. A small number of NNRTI-resistance mutations were responsible for most cases of high-level resistance, suggesting that inexpensive point-mutation assays to detect these mutations may be useful for pre-therapy screening in regions with high levels of TDR. In the context of a public health approach to ARV therapy, a reliable point-of-care genotypic resistance test could identify which patients should receive standard first-line therapy and which should receive a protease-inhibitor-containing regimen.
METHODS AND FINDINGS: This is a retrospective cohort study of all adult people living with HIV (PLWH) incarcerated in Connecticut, US, during the period January 1, 2007, to December 31, 2011, and observed through December 31, 2014 (n = 1,094). Most cohort participants were unmarried (83.7%) men (77.0%) who were black or Hispanic (78.1%) and acquired HIV from injection drug use (72.6%). Prison-based pharmacy and custody databases were linked with community HIV surveillance monitoring and case management databases. Post-release retention in care (RIC) declined steadily over 3 years of follow-up (67.2% retained for year 1, 51.3% retained for years 1-2, and 42.5% retained for years 1-3). Compared with individuals who were not re-incarcerated, individuals who were re-incarcerated were more likely to meet RIC criteria (48% versus 34%; p < 0.001) but less likely to have viral suppression (VS) (72% versus 81%; p = 0.048). Using multivariable logistic regression models (individual-level analysis for 1,001 individuals after excluding 93 deaths), both sustained RIC and VS at 3 years post-release were independently associated with older age (RIC: adjusted odds ratio [AOR] = 1.61, 95% CI = 1.22-2.12; VS: AOR = 1.37, 95% CI = 1.06-1.78), having health insurance (RIC: AOR = 2.15, 95% CI = 1.60-2.89; VS: AOR = 2.01, 95% CI = 1.53-2.64), and receiving an increased number of transitional case management visits. The same factors were significant when we assessed RIC and VS outcomes in each 6-month period using generalized estimating equations (for 1,094 individuals contributing 6,227 6-month periods prior to death or censoring). Additionally, receipt of antiretroviral therapy during incarceration (RIC: AOR = 1.33, 95% CI 1.07-1.65; VS: AOR = 1.91, 95% CI = 1.56-2.34), early linkage to care post-release (RIC: AOR = 2.64, 95% CI = 2.03-3.43; VS: AOR = 1.79, 95% CI = 1.45-2.21), and absolute time and proportion of follow-up time spent re-incarcerated were highly correlated with better treatment outcomes.
Limited data were available on changes over time in injection drug use or other substance use disorders, psychiatric disorders, or housing status.
CONCLUSIONS: In a large cohort of criminal justice (CJ)-involved PLWH with a 3-year post-release evaluation, RIC diminished significantly over time but was associated with HIV care during incarceration, health insurance, case management services, and early linkage to care post-release. While re-incarceration and conditional release provide opportunities to engage in care, reducing recidivism and supporting community-based RIC efforts are key to improving longitudinal treatment outcomes among CJ-involved PLWH.
METHODS: Blips were defined as a detectable VL (≥50 copies/mL) preceded and followed by an undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VL ≥50 copies/mL. Cox proportional hazards models of time to first VF after cohort entry were developed.
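These definitions can be expressed as a scan over an ordered VL series. A minimal sketch (the function name and output format are ours; the actual analysis was time-to-event, not a simple scan):

```python
def classify_vl_series(vls, detect=50):
    """Flag blips and the first virological failure (VF) in an
    ordered VL series, per the definitions above: a blip is one
    detectable VL (>= detect copies/mL) preceded and followed by
    undetectable VLs; VF is two consecutive detectable VLs."""
    events = []
    for i in range(1, len(vls) - 1):
        if vls[i] >= detect and vls[i - 1] < detect and vls[i + 1] < detect:
            events.append(("blip", i))
    for i in range(len(vls) - 1):
        if vls[i] >= detect and vls[i + 1] >= detect:
            events.append(("VF", i))  # index of the first VL of the pair
            break
    return events

print(classify_vl_series([20, 400, 30, 60, 90]))  # [('blip', 1), ('VF', 3)]
```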
RESULTS: 5040 patients (AHOD n = 2597 and TAHOD n = 2521) were included; 910 (18%) of patients experienced blips: 744 (21%) of high-income and 166 (11%) of middle/low-income participants ever experienced a blip. 711 (14%) experienced blips prior to virological failure: 559 (16%) of high-income and 152 (10%) of middle/low-income participants. VL testing occurred at a median interval of 175 days in middle/low-income sites and 91 days in high-income sites. Time to VF was longer in middle/low-income sites than in high-income sites (adjusted hazard ratio (AHR) 0.41; p<0.001), adjusted for year of first cART, hepatitis C co-infection, cART regimen, and prior blips. Prior blips were not a significant predictor of VF in univariate analysis (AHR 0.97, p = 0.82). Blips of differing magnitudes were also not significant predictors of virological failure in univariate analyses (p = 0.360 for blips 50 to ≤1000, p = 0.309 for blips 50 to ≤400 and p = 0.300 for blips 50 to ≤200 copies/mL). 209 of 866 (24%) patients were switched to an alternate regimen in the setting of a blip.
CONCLUSION: Although blips occurred in a lower proportion of participants in low/middle-income settings, no significant difference between settings was found. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of a blip.