METHODS: We compared these regimens with respect to clinical, immunologic, and virologic outcomes using data from prospective studies of human immunodeficiency virus (HIV)-infected individuals in Europe and the United States in the HIV-CAUSAL Collaboration, 2004-2013. Antiretroviral therapy-naive and AIDS-free individuals were followed from the time they started a lopinavir or an atazanavir regimen. We estimated the 'intention-to-treat' effect for atazanavir vs lopinavir regimens on each of the outcomes.
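The 'intention-to-treat' contrast described here can be illustrated with a Cox model in which patients are analyzed according to the regimen they started, regardless of later switches. The sketch below is a minimal illustration only, not the Collaboration's actual estimation pipeline; the input file and every column name (time_to_death, died, atazanavir, age, baseline_cd4, baseline_rna) are hypothetical.

```python
# Minimal sketch of an adjusted intention-to-treat hazard ratio for
# atazanavir vs lopinavir regimens; all column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical patient-level dataset

cph = CoxPHFitter()
cph.fit(
    df[["time_to_death", "died", "atazanavir", "age", "baseline_cd4", "baseline_rna"]],
    duration_col="time_to_death",  # follow-up time from regimen start
    event_col="died",              # 1 = death observed, 0 = censored
)
cph.print_summary()  # exp(coef) on 'atazanavir' is the adjusted HR vs lopinavir
```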
RESULTS: A total of 6668 individuals started a lopinavir regimen (213 deaths, 457 AIDS-defining illnesses or deaths), and 4301 individuals started an atazanavir regimen (83 deaths, 157 AIDS-defining illnesses or deaths). The adjusted intention-to-treat hazard ratios for atazanavir vs lopinavir regimens were 0.70 (95% confidence interval [CI], .53-.91) for death, 0.67 (95% CI, .55-.82) for AIDS-defining illness or death, and 0.91 (95% CI, .84-.99) for virologic failure at 12 months. The mean 12-month increase in CD4 count was 8.15 (95% CI, -.13 to 16.43) cells/µL higher in the atazanavir group. Estimates differed by nucleoside reverse transcriptase inhibitor (NRTI) backbone.
CONCLUSIONS: Our estimates are consistent with a lower mortality, a lower incidence of AIDS-defining illness, a greater 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for atazanavir compared with lopinavir regimens.
METHODS: We used Cox regression to analyze data from a cohort of Asian children receiving combination antiretroviral therapy (cART).
RESULTS: A total of 2608 children were included; median age at cART initiation was 5.7 years. A time-updated weight-for-age z score < -3 was associated with mortality (P < 0.001), independent of CD4%, and a score < -2 was associated with immunological failure (P ≤ 0.03), independent of age at cART initiation.
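Weight-for-age z scores of this kind are conventionally derived from LMS reference tables (Box-Cox power L, median M, coefficient of variation S for the child's age and sex). A minimal sketch of the standard LMS transformation follows; the LMS values shown are invented for illustration, and real analyses would take them from published growth-standard tables (which also apply special handling beyond ±3 z).

```python
import math

def weight_for_age_z(weight_kg: float, L: float, M: float, S: float) -> float:
    """Standard LMS transformation: z = ((X/M)**L - 1) / (L*S),
    or log(X/M)/S when L == 0. L, M, S come from reference tables."""
    if L == 0:
        return math.log(weight_kg / M) / S
    return ((weight_kg / M) ** L - 1.0) / (L * S)

# Illustration with made-up LMS values; flags mirror the thresholds above.
z = weight_for_age_z(12.0, L=-0.35, M=18.0, S=0.12)
severely_underweight = z < -3  # threshold associated with mortality
underweight = z < -2           # threshold associated with immunological failure
```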
CONCLUSIONS: Weight monitoring provides useful data to inform clinical management of children on cART in resource-limited settings.
METHODS: Factors associated with survival and treatment failure were analyzed using Cox proportional hazards models and discrete-time conditional logistic models.
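As one illustration of the discrete-time approach named above, survival data can be expanded into person-period records and the per-period hazard modeled with logistic regression. The paper used the conditional (matched) variant, which differs in how strata are handled; the unmatched person-period sketch below only conveys the core idea, and the input file and column names (id, periods, failed, tdr) are hypothetical.

```python
# Expand each patient into one row per observed period, then fit a
# logistic model for the discrete-time hazard of treatment failure.
import pandas as pd
import statsmodels.formula.api as smf

def to_person_period(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for _, r in df.iterrows():
        for t in range(1, int(r["periods"]) + 1):
            rows.append({
                "id": r["id"],
                "period": t,
                # the event can occur only in the last observed period
                "event": int(t == r["periods"] and r["failed"] == 1),
                "tdr": r["tdr"],
            })
    return pd.DataFrame(rows)

pp = to_person_period(pd.read_csv("patients.csv"))  # hypothetical file
fit = smf.logit("event ~ C(period) + tdr", data=pp).fit()
print(fit.summary())  # exp(coef) on 'tdr' approximates the failure odds ratio
```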
RESULTS: Transmitted drug resistance (TDR), found in 60 (4.1%) of 1471 Asian treatment-naive patients, was among the significant predictors of treatment failure. Patients with TDR to more than one drug in their regimen were more than 3 times as likely to fail as those without TDR.
CONCLUSIONS: TDR was associated with treatment failure in the context of regimens to which the virus was not fully sensitive. Efforts are needed to incorporate resistance testing into national treatment programs.
METHODS: Blips were defined as a detectable viral load (VL; ≥50 copies/mL) preceded and followed by an undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VL ≥50 copies/mL. Cox proportional hazards models of time to first VF after cohort entry were developed.
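These definitions translate directly into a classification rule over each patient's ordered VL series. The function below is a literal sketch of the stated definitions, assuming measurements in copies/mL ordered in time, with no handling of missed visits or confirmatory-test timing.

```python
# Direct coding of the blip and VF definitions above (a sketch only).
DETECTION_LIMIT = 50  # copies/mL

def classify(vls: list[int]) -> dict:
    blips, first_vf = [], None
    for i, vl in enumerate(vls):
        if vl < DETECTION_LIMIT:
            continue
        # VF: two consecutive VL >= 50 copies/mL
        if i + 1 < len(vls) and vls[i + 1] >= DETECTION_LIMIT:
            first_vf = i
            break
        # blip: detectable VL preceded and followed by undetectable VL
        if 0 < i < len(vls) - 1 and vls[i - 1] < DETECTION_LIMIT and vls[i + 1] < DETECTION_LIMIT:
            blips.append(i)
    return {"blips": blips, "first_vf": first_vf}

print(classify([20, 20, 120, 30, 20, 90, 200, 40]))
# -> {'blips': [2], 'first_vf': 5}
```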
RESULTS: 5040 patients were included (AHOD, n = 2597; TAHOD, n = 2521); 910 (18%) ever experienced blips: 744 (21%) of high-income and 166 (11%) of middle/low-income participants. 711 (14%) experienced blips prior to virological failure: 559 (16%) of high-income and 152 (10%) of middle/low-income participants. VL testing occurred at a median frequency of 175 days in middle/low-income sites and 91 days in high-income sites. Time to VF was longer in middle/low-income sites than in high-income sites (adjusted hazard ratio (AHR) 0.41; p < 0.001), adjusted for year of first cART, hepatitis C co-infection, cART regimen, and prior blips. Prior blips were not a significant predictor of VF in univariate analysis (AHR 0.97, p = 0.82), and blips of differing magnitudes were likewise not significant univariate predictors of VF (p = 0.360 for blips of 50 to ≤1000 copies/mL, p = 0.309 for 50 to ≤400, and p = 0.300 for 50 to ≤200). 209 of 866 (24%) patients were switched to an alternate regimen in the setting of a blip.
CONCLUSION: Despite blips occurring in a lower proportion of middle/low-income participants, blips were not a significant predictor of virological failure, with no significant difference between settings in this respect. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of blips.
METHODS AND FINDINGS: We reviewed all GenBank submissions of HIV-1 reverse transcriptase sequences with or without protease and identified 287 studies published between March 1, 2000, and December 31, 2013, with more than 25 recently or chronically infected ARV-naïve individuals. These studies comprised 50,870 individuals from 111 countries. Each set of study sequences was analyzed for phylogenetic clustering and the presence of 93 surveillance drug-resistance mutations (SDRMs). The median overall TDR prevalence in sub-Saharan Africa (SSA), south/southeast Asia (SSEA), upper-income Asian countries, Latin America/Caribbean, Europe, and North America was 2.8%, 2.9%, 5.6%, 7.6%, 9.4%, and 11.5%, respectively. In SSA, there was a yearly 1.09-fold (95% CI: 1.05-1.14) increase in the odds of TDR since national ARV scale-up, attributable to an increase in non-nucleoside reverse transcriptase inhibitor (NNRTI) resistance. The odds of NNRTI-associated TDR also increased in Latin America/Caribbean (odds ratio [OR] = 1.16; 95% CI: 1.06-1.25), North America (OR = 1.19; 95% CI: 1.12-1.26), Europe (OR = 1.07; 95% CI: 1.01-1.13), and upper-income Asian countries (OR = 1.33; 95% CI: 1.12-1.55). In SSEA, there was no significant change in the odds of TDR since national ARV scale-up (OR = 0.97; 95% CI: 0.92-1.02). An analysis limited to sequences with mixtures at less than 0.5% of their nucleotide positions (a proxy for recent infection) yielded trends comparable to those obtained using the complete dataset. Four NNRTI SDRMs (K101E, K103N, Y181C, and G190A) accounted for >80% of NNRTI-associated TDR in all regions and subtypes. Sixteen nucleoside reverse transcriptase inhibitor (NRTI) SDRMs accounted for >69% of NRTI-associated TDR in all regions and subtypes. In SSA and SSEA, 89% of NNRTI SDRMs were associated with high-level resistance to nevirapine or efavirenz, whereas only 27% of NRTI SDRMs were associated with high-level resistance to zidovudine, lamivudine, tenofovir, or abacavir. Of 763 viruses with TDR in SSA and SSEA, 725 (95%) were genetically dissimilar; 38 (5%) formed 19 sequence pairs. Inherent limitations of this study are that some cohorts may not represent the broader regional population and that studies were heterogeneous with respect to duration of infection prior to sampling.
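To make the reported trend concrete: a per-year odds ratio compounds multiplicatively, so k years after national ARV scale-up the odds of TDR are 1.09^k times the odds at scale-up. A short illustration of the SSA point estimate:

```python
# Compounding the SSA estimate (OR = 1.09 per year since ARV scale-up).
for years in (1, 5, 8, 10):
    print(f"{years:>2} years: odds multiplied by {1.09 ** years:.2f}")
# After 8 years the odds of TDR are roughly doubled (1.09**8 ≈ 1.99),
# though the 95% CI (1.05-1.14) makes the compounded range fairly wide.
```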
CONCLUSIONS: Most TDR strains in SSA and SSEA arose independently, suggesting that ARV regimens with a high genetic barrier to resistance combined with improved patient adherence may mitigate TDR increases by reducing the generation of new ARV-resistant strains. A small number of NNRTI-resistance mutations were responsible for most cases of high-level resistance, suggesting that inexpensive point-mutation assays to detect these mutations may be useful for pre-therapy screening in regions with high levels of TDR. In the context of a public health approach to ARV therapy, a reliable point-of-care genotypic resistance test could identify which patients should receive standard first-line therapy and which should receive a protease-inhibitor-containing regimen.
METHODS: Adults with HIV who had been taking ART for more than 3 months were randomly assigned to receive either a pharmacist-led intervention or usual care. Measures of adherence were collected at (1) baseline, (2) just prior to delivery of the intervention, and (3) 8 weeks later. The primary outcomes were CD4 cell count and self-reported adherence measured with the AIDS Clinical Trials Group (ACTG) questionnaire.
RESULTS: Post-intervention, the intervention group showed a statistically significant increase in CD4 cell count compared with the usual care group (p = 0.0054). Adherence also improved in the intervention group: participants were 5.96 times more likely to report having not missed their medication for longer periods of time (p = 0.0086) and 7.74 times more likely to report missing their ART less frequently.
TRIAL REGISTRATION: The trial is registered with the Australian New Zealand Clinical Trials Registry (ACTRN12618001882213). Registered 20 November 2018.
METHODS: HIV-positive patients from the Australian HIV Observational Database (AHOD) and the TREAT Asia HIV Observational Database (TAHOD) who met specific inclusion criteria were included. In these analyses, Asian and Caucasian status were defined by cohort. Factors associated with a low CD4:CD8 ratio (<0.2) prior to ART commencement, and with achieving a normal CD4:CD8 ratio (>1) at 12 and 24 months after ART commencement, were assessed using logistic regression.
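A minimal sketch of the logistic analyses described follows, assuming a hypothetical merged dataset with invented column names (cd4, cd8, cd4_12m, cd8_12m, cohort, age, sex); the actual AHOD/TAHOD covariate set is not reproduced here.

```python
# Sketch of the two logistic models: odds of a low pre-ART CD4:CD8 ratio,
# and odds of normalising the ratio by 12 months, each by cohort.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ahod_tahod.csv")  # hypothetical merged dataset
df["low_baseline_ratio"] = (df["cd4"] / df["cd8"] < 0.2).astype(int)
df["normal_ratio_12m"] = (df["cd4_12m"] / df["cd8_12m"] > 1.0).astype(int)

# Adjusted for example confounders only; the real models used more.
m1 = smf.logit("low_baseline_ratio ~ C(cohort) + age + C(sex)", data=df).fit()
m2 = smf.logit("normal_ratio_12m ~ C(cohort) + age + C(sex)", data=df).fit()
print(m1.summary())
print(m2.summary())  # exp(coef) on 'cohort' gives the between-cohort odds ratio
```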
RESULTS: There were 591 patients from AHOD and 2,620 patients from TAHOD who met the inclusion criteria. TAHOD patients had significantly lower odds (P<0.001) of having a baseline (pre-ART) CD4:CD8 ratio greater than 0.2. After 12 months of ART, AHOD patients were more than twice as likely as TAHOD patients to achieve a normal CD4:CD8 ratio (15% versus 6%). However, after adjustment for confounding factors, there was no significant difference between cohorts in the odds of achieving a CD4:CD8 ratio >1 (P=0.475).
CONCLUSIONS: We found a significantly lower CD4:CD8 ratio prior to ART commencement in TAHOD than in AHOD, even after adjusting for confounders. However, after adjustment, there was no significant difference between the cohorts in the odds of achieving a normal ratio. Baseline CD4+ and CD8+ counts appear to be the main driver of this difference between the two populations.