Methods: Study end points were as follows: (1) a CD4 count <200 cells/mm3 followed by a CD4 count ≥200 cells/mm3 (transient CD4 <200); (2) CD4 count <200 cells/mm3 confirmed within 6 months (confirmed CD4 <200); and (3) a new or recurrent World Health Organization (WHO) stage 3 or 4 illness (clinical failure). Kaplan-Meier curves and Cox regression were used to evaluate rates and predictors of transient CD4 <200, confirmed CD4 <200, and clinical failure among virally suppressed children aged 5-15 years who were enrolled in the TREAT Asia Pediatric HIV Observational Database.
Results: Data from 967 children were included in the analysis. At the time of confirmed viral suppression, median age was 10.2 years, 50.4% of children were female, and 95.4% were perinatally infected with HIV. Median CD4 cell count was 837 cells/mm3, and 54.8% of children were classified as having WHO stage 3 or 4 disease. In total, 18 transient CD4 <200 events, 2 confirmed CD4 <200 events, and 10 clinical failures occurred, at rates of 0.73 (95% confidence interval [95% CI], 0.46-1.16), 0.08 (95% CI, 0.02-0.32), and 0.40 (95% CI, 0.22-0.75) events per 100 patient-years, respectively. CD4 <500 cells/mm3 at the time of viral suppression confirmation was associated with higher rates of both CD4 outcomes.
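The rates above are events per 100 patient-years, and the reported confidence intervals are consistent with a log-normal approximation. A minimal sketch of that calculation follows; the ~2466 patient-years figure is back-calculated from 18 events at 0.73/100 patient-years, and the CI method is an assumption, not necessarily the one the study used:

```python
import math

def rate_per_100py(events, person_years, z=1.96):
    """Incidence rate per 100 person-years with a log-normal
    confidence interval (SE of log(rate) for a Poisson count
    is approximately 1/sqrt(events))."""
    rate = 100.0 * events / person_years
    se_log = 1.0 / math.sqrt(events)
    return rate, rate * math.exp(-z * se_log), rate * math.exp(z * se_log)

# 18 transient CD4 <200 events over ~2466 patient-years
# (patient-years back-calculated from the reported 0.73/100 PY rate)
rate, lo, hi = rate_per_100py(18, 2466)
print(f"{rate:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # 0.73 (95% CI, 0.46-1.16)
```

The same function reproduces the other endpoints once their person-time denominators are known; denominators differ across endpoints because follow-up is censored at each event.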
Conclusions: Regular CD4 testing may be unnecessary for virally suppressed children aged 5-15 years with CD4 ≥500 cells/mm3.
METHODS: The study population consisted of HIV-infected patients enrolled in the TREAT Asia HIV Observational Database (TAHOD). Individuals were included in this analysis if they started combination antiretroviral treatment (cART) after 2002, were being treated at a centre that documented a median rate of viral load monitoring ≥0.8 tests/patient/year among TAHOD enrolees, and experienced a minor or major treatment substitution while on virally suppressive cART. The primary endpoint was clinical failure or virological failure (VF). Clinical failure was defined as death or an AIDS diagnosis. VF was defined as a confirmed viral load measurement ≥400 copies/mL followed by an ART class change within six months. Minor regimen substitutions were defined as within-class changes, and major regimen substitutions were defined as changes of drug class. The patterns of substitutions and the rates of clinical failure or VF after substitution were analyzed.
RESULTS: Of 3994 adults who started ART after 2002, 3119 (78.1%) had at least one period of virological suppression. Among these, 1170 (37.5%) underwent a minor regimen substitution and 296 (9.5%) underwent a major regimen substitution during suppression. The rates of clinical failure or VF were 1.48/100 person-years (95% CI 1.14 to 1.91) in the minor substitution group, 2.85/100 person-years (95% CI 1.88 to 4.33) in the major substitution group, and 2.53/100 person-years (95% CI 2.20 to 2.92) among patients who did not undergo a treatment substitution.
CONCLUSIONS: The rates of clinical failure or VF were low in both the major and minor substitution groups, suggesting that regimen substitution is generally effective in non-clinical-trial settings in Asian countries.
METHODS: To create a retrospective cohort of all adults with HIV released from jails and prisons in Connecticut, USA (2007-14), we linked administrative custody and pharmacy databases with mandatory HIV/AIDS surveillance monitoring and case management data. We examined time to linkage to care (LTC; defined as the first viral load measurement after release) and viral suppression at LTC. We used generalised estimating equations to identify predictors of LTC within 14 days and 30 days of release.
FINDINGS: Among 3302 incarceration periods for 1350 individuals between 2007 and 2014, 672 (21%) of 3181 periods had LTC within 14 days of release, 1042 (34%) of 3064 had LTC within 30 days of release, and 301 (29%) of 1042 had detectable viral loads at LTC. Factors positively associated with LTC within 14 days of release were intermediate (31-364 days) incarceration duration (adjusted odds ratio 1·52; 95% CI 1·19-1·95), transitional case management (1·65; 1·36-1·99), receipt of antiretroviral therapy during incarceration (1·39; 1·11-1·74), and two or more medical comorbidities (1·86; 1·48-2·36). Reincarceration (0·70; 0·56-0·88) and conditional release (0·62; 0·50-0·78) were negatively associated with LTC within 14 days. Hispanic ethnicity, bonded release, and psychiatric comorbidity were also associated with LTC within 30 days, but reincarceration was not.
INTERPRETATION: LTC after release is suboptimal but improves when inmates' medical, psychiatric, and case management needs are identified and addressed before release. People who are rapidly cycling through jail facilities are particularly vulnerable to missed linkage opportunities. The use of integrated programmes to align justice and health-care goals has great potential to improve long-term HIV treatment outcomes.
FUNDING: US National Institutes of Health.
METHODS AND FINDINGS: We reviewed all GenBank submissions of HIV-1 reverse transcriptase sequences with or without protease and identified 287 studies published between March 1, 2000, and December 31, 2013, with more than 25 recently or chronically infected ARV-naïve individuals. These studies comprised 50,870 individuals from 111 countries. Each set of study sequences was analyzed for phylogenetic clustering and the presence of 93 surveillance drug-resistance mutations (SDRMs). The median overall prevalence of transmitted drug resistance (TDR) in sub-Saharan Africa (SSA), south/southeast Asia (SSEA), upper-income Asian countries, Latin America/Caribbean, Europe, and North America was 2.8%, 2.9%, 5.6%, 7.6%, 9.4%, and 11.5%, respectively. In SSA, there was a yearly 1.09-fold (95% CI: 1.05-1.14) increase in the odds of TDR since national ARV scale-up, attributable to an increase in non-nucleoside reverse transcriptase inhibitor (NNRTI) resistance. The odds of NNRTI-associated TDR also increased in Latin America/Caribbean (odds ratio [OR] = 1.16; 95% CI: 1.06-1.25), North America (OR = 1.19; 95% CI: 1.12-1.26), Europe (OR = 1.07; 95% CI: 1.01-1.13), and upper-income Asian countries (OR = 1.33; 95% CI: 1.12-1.55). In SSEA, there was no significant change in the odds of TDR since national ARV scale-up (OR = 0.97; 95% CI: 0.92-1.02). An analysis limited to sequences with mixtures at less than 0.5% of their nucleotide positions—a proxy for recent infection—yielded trends comparable to those obtained using the complete dataset. Four NNRTI SDRMs—K101E, K103N, Y181C, and G190A—accounted for >80% of NNRTI-associated TDR in all regions and subtypes. Sixteen nucleoside reverse transcriptase inhibitor (NRTI) SDRMs accounted for >69% of NRTI-associated TDR in all regions and subtypes. In SSA and SSEA, 89% of NNRTI SDRMs were associated with high-level resistance to nevirapine or efavirenz, whereas only 27% of NRTI SDRMs were associated with high-level resistance to zidovudine, lamivudine, tenofovir, or abacavir.
Of 763 viruses with TDR in SSA and SSEA, 725 (95%) were genetically dissimilar; 38 (5%) formed 19 sequence pairs. Inherent limitations of this study are that some cohorts may not represent the broader regional population and that studies were heterogeneous with respect to duration of infection prior to sampling.
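The regional trends above are reported as multiplicative yearly changes in the odds of TDR, so they compound over time. A small illustration, assuming a hypothetical 10-year horizon that is not a figure from the study:

```python
def cumulative_odds_change(yearly_or, years):
    """Compound a per-year odds ratio over a number of years."""
    return yearly_or ** years

# SSA: a 1.09-fold yearly increase in the odds of TDR implies
# roughly a 2.4-fold increase in odds over a 10-year period.
print(round(cumulative_odds_change(1.09, 10), 2))  # 2.37
```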
CONCLUSIONS: Most TDR strains in SSA and SSEA arose independently, suggesting that ARV regimens with a high genetic barrier to resistance combined with improved patient adherence may mitigate TDR increases by reducing the generation of new ARV-resistant strains. A small number of NNRTI-resistance mutations were responsible for most cases of high-level resistance, suggesting that inexpensive point-mutation assays to detect these mutations may be useful for pre-therapy screening in regions with high levels of TDR. In the context of a public health approach to ARV therapy, a reliable point-of-care genotypic resistance test could identify which patients should receive standard first-line therapy and which should receive a protease-inhibitor-containing regimen.
METHODS: We compared these regimens with respect to clinical, immunologic, and virologic outcomes using data from prospective studies of human immunodeficiency virus (HIV)-infected individuals in Europe and the United States in the HIV-CAUSAL Collaboration, 2004-2013. Antiretroviral therapy-naive and AIDS-free individuals were followed from the time they started a lopinavir or an atazanavir regimen. We estimated the 'intention-to-treat' effect for atazanavir vs lopinavir regimens on each of the outcomes.
RESULTS: A total of 6668 individuals started a lopinavir regimen (213 deaths, 457 AIDS-defining illnesses or deaths), and 4301 individuals started an atazanavir regimen (83 deaths, 157 AIDS-defining illnesses or deaths). The adjusted intention-to-treat hazard ratios for atazanavir vs lopinavir regimens were 0.70 (95% confidence interval [CI], .53-.91) for death, 0.67 (95% CI, .55-.82) for AIDS-defining illness or death, and 0.91 (95% CI, .84-.99) for virologic failure at 12 months. The mean 12-month increase in CD4 cell count was 8.15 (95% CI, -.13 to 16.43) cells/µL greater in the atazanavir group. Estimates differed by NRTI backbone.
CONCLUSIONS: Our estimates are consistent with a lower mortality, a lower incidence of AIDS-defining illness, a greater 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for atazanavir compared with lopinavir regimens.