METHODS: Blips were defined as a detectable VL (≥50 copies/mL) preceded and followed by undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VL ≥50 copies/mL. Cox proportional hazards models of time to first VF after entry were developed.
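The blip and VF definitions above can be sketched as simple scans over a patient's longitudinal VL series (hypothetical helpers for illustration, not study code; thresholds taken from the definitions above):

```python
def find_blips(viral_loads, detect=50):
    """Return indices of blips: a detectable VL (>= detect copies/mL)
    immediately preceded and followed by undetectable VLs (< detect)."""
    return [
        i for i in range(1, len(viral_loads) - 1)
        if viral_loads[i] >= detect
        and viral_loads[i - 1] < detect
        and viral_loads[i + 1] < detect
    ]

def first_virological_failure(viral_loads, detect=50):
    """Index of the first of two consecutive VLs >= detect, or None."""
    for i in range(len(viral_loads) - 1):
        if viral_loads[i] >= detect and viral_loads[i + 1] >= detect:
            return i
    return None
```

For example, the series `[20, 20, 150, 20, 20, 90, 210]` contains one blip (the 150) and meets the VF definition at the two final consecutive detectable values.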
RESULTS: 5040 patients (AHOD n = 2597 and TAHOD n = 2521) were included; 910 (18%) experienced blips. Blips ever occurred in 744 (21%) of high-income and 166 (11%) of middle/low-income participants. 711 (14%) experienced blips prior to virological failure: 559 (16%) of high-income and 152 (10%) of middle/low-income participants. VL testing occurred at a median frequency of every 175 and 91 days in middle/low- and high-income sites, respectively. Time to VF was longer in middle/low-income sites than in high-income sites (adjusted hazard ratio (AHR) 0.41; p<0.001), adjusted for year of first cART, hepatitis C co-infection, cART regimen, and prior blips. Prior blips were not a significant predictor of VF in univariate analysis (HR 0.97, p = 0.82). Blips of differing magnitudes were also not significant univariate predictors of virological failure (p = 0.360 for blips 50 to ≤1000, p = 0.309 for blips 50 to ≤400, and p = 0.300 for blips 50 to ≤200 copies/mL). 209 of 866 (24%) patients were switched to an alternative regimen in the setting of a blip.
CONCLUSION: Despite a lower proportion of blips occurring in low/middle-income settings, blips were not a significant predictor of virological failure. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of blips.
METHODS: Data from two regional cohort observational databases were analyzed for trends in median CD4 cell counts at ART initiation and the proportion of late ART initiation (CD4 cell counts <200 cells/mm³ or prior AIDS diagnosis). Predictors for late ART initiation and mortality were determined.
RESULTS: A total of 2737 HIV-positive ART-naïve patients from 22 sites in 13 Asian countries and territories were eligible. The overall median (IQR) CD4 cell count at ART initiation was 150 (46-241) cells/mm³. Median CD4 cell counts at ART initiation increased over time, from a low point of 115 cells/mm³ in 2008 to a peak of 302 cells/mm³ after 2011 (p for trend 0.002). The proportion of patients with late ART initiation significantly decreased over time from 79.1% before 2007 to 36.3% after 2011 (p for trend <0.001). Factors associated with late ART initiation were year of ART initiation (e.g. 2010 vs. before 2007; OR 0.40, 95% CI 0.27-0.59; p<0.001), sex (male vs. female; OR 1.51, 95% CI 1.18-1.93; p=0.001) and HIV exposure risk (heterosexual vs. homosexual; OR 1.66, 95% CI 1.24-2.23; p=0.001 and intravenous drug use vs. homosexual; OR 3.03, 95% CI 1.77-5.21; p<0.001). Factors associated with mortality after ART initiation were late ART initiation (HR 2.13, 95% CI 1.19-3.79; p=0.010), sex (male vs. female; HR 2.12, 95% CI 1.31-3.43; p=0.002), age (≥51 vs. ≤30 years; HR 3.91, 95% CI 2.18-7.04; p<0.001) and hepatitis C serostatus (positive vs. negative; HR 2.48, 95% CI 1.-4.36; p=0.035).
CONCLUSIONS: The median CD4 cell count at ART initiation among Asian patients has increased significantly over time, but the proportion of patients with late ART initiation remains substantial. ART initiation at higher CD4 cell counts remains a challenge. Strategic interventions to increase earlier diagnosis of HIV infection and prompt more rapid linkage to ART must be implemented.
METHODS: We describe TB diagnosis and screening practices of pediatric antiretroviral treatment (ART) programs in Africa, Asia, the Caribbean, and Central and South America. We used web-based questionnaires to collect data on ART programs and patients seen from March to July 2012. Forty-three ART programs treating children in 23 countries participated in the study.
RESULTS: Sputum microscopy and chest radiograph were available at all programs, mycobacterial culture in 40 (93%) sites, gastric aspiration in 27 (63%), induced sputum in 23 (54%), and Xpert MTB/RIF in 16 (37%) sites. Screening practices to exclude active TB before starting ART included contact history in 41 (95%) sites, symptom screening in 38 (88%), and chest radiograph in 34 (79%) sites. The use of diagnostic tools was examined among 146 children diagnosed with TB during the study period. Chest radiograph was used in 125 (86%) children, sputum microscopy in 76 (52%), induced sputum microscopy in 38 (26%), gastric aspirate microscopy in 35 (24%), culture in 25 (17%), and Xpert MTB/RIF in 11 (8%) children.
CONCLUSIONS: Induced sputum and Xpert MTB/RIF were infrequently available to diagnose childhood TB, and screening was largely based on symptom identification. There is an urgent need to improve the capacity of ART programs in low- and middle-income countries to exclude and diagnose TB in HIV-infected children.
METHODS AND FINDINGS: We reviewed all GenBank submissions of HIV-1 reverse transcriptase sequences with or without protease and identified 287 studies published between March 1, 2000, and December 31, 2013, with more than 25 recently or chronically infected ARV-naïve individuals. These studies comprised 50,870 individuals from 111 countries. Each set of study sequences was analyzed for phylogenetic clustering and the presence of 93 surveillance drug-resistance mutations (SDRMs). The median overall TDR prevalence in sub-Saharan Africa (SSA), south/southeast Asia (SSEA), upper-income Asian countries, Latin America/Caribbean, Europe, and North America was 2.8%, 2.9%, 5.6%, 7.6%, 9.4%, and 11.5%, respectively. In SSA, there was a yearly 1.09-fold (95% CI: 1.05-1.14) increase in odds of TDR since national ARV scale-up attributable to an increase in non-nucleoside reverse transcriptase inhibitor (NNRTI) resistance. The odds of NNRTI-associated TDR also increased in Latin America/Caribbean (odds ratio [OR] = 1.16; 95% CI: 1.06-1.25), North America (OR = 1.19; 95% CI: 1.12-1.26), Europe (OR = 1.07; 95% CI: 1.01-1.13), and upper-income Asian countries (OR = 1.33; 95% CI: 1.12-1.55). In SSEA, there was no significant change in the odds of TDR since national ARV scale-up (OR = 0.97; 95% CI: 0.92-1.02). An analysis limited to sequences with mixtures at less than 0.5% of their nucleotide positions—a proxy for recent infection—yielded trends comparable to those obtained using the complete dataset. Four NNRTI SDRMs—K101E, K103N, Y181C, and G190A—accounted for >80% of NNRTI-associated TDR in all regions and subtypes. Sixteen nucleoside reverse transcriptase inhibitor (NRTI) SDRMs accounted for >69% of NRTI-associated TDR in all regions and subtypes. In SSA and SSEA, 89% of NNRTI SDRMs were associated with high-level resistance to nevirapine or efavirenz, whereas only 27% of NRTI SDRMs were associated with high-level resistance to zidovudine, lamivudine, tenofovir, or abacavir. 
Of 763 viruses with TDR in SSA and SSEA, 725 (95%) were genetically dissimilar; 38 (5%) formed 19 sequence pairs. Inherent limitations of this study are that some cohorts may not represent the broader regional population and that studies were heterogeneous with respect to duration of infection prior to sampling.
CONCLUSIONS: Most TDR strains in SSA and SSEA arose independently, suggesting that ARV regimens with a high genetic barrier to resistance combined with improved patient adherence may mitigate TDR increases by reducing the generation of new ARV-resistant strains. A small number of NNRTI-resistance mutations were responsible for most cases of high-level resistance, suggesting that inexpensive point-mutation assays to detect these mutations may be useful for pre-therapy screening in regions with high levels of TDR. In the context of a public health approach to ARV therapy, a reliable point-of-care genotypic resistance test could identify which patients should receive standard first-line therapy and which should receive a protease-inhibitor-containing regimen.
METHODS: We compared these regimens with respect to clinical, immunologic, and virologic outcomes using data from prospective studies of human immunodeficiency virus (HIV)-infected individuals in Europe and the United States in the HIV-CAUSAL Collaboration, 2004-2013. Antiretroviral therapy-naive and AIDS-free individuals were followed from the time they started a lopinavir or an atazanavir regimen. We estimated the 'intention-to-treat' effect for atazanavir vs lopinavir regimens on each of the outcomes.
RESULTS: A total of 6668 individuals started a lopinavir regimen (213 deaths, 457 AIDS-defining illnesses or deaths), and 4301 individuals started an atazanavir regimen (83 deaths, 157 AIDS-defining illnesses or deaths). The adjusted intention-to-treat hazard ratios for atazanavir vs lopinavir regimens were 0.70 (95% confidence interval [CI], .53-.91) for death, 0.67 (95% CI, .55-.82) for AIDS-defining illness or death, and 0.91 (95% CI, .84-.99) for virologic failure at 12 months. The mean 12-month increase in CD4 count was 8.15 (95% CI, -.13 to 16.43) cells/µL higher in the atazanavir group. Estimates differed by NRTI backbone.
CONCLUSIONS: Our estimates are consistent with a lower mortality, a lower incidence of AIDS-defining illness, a greater 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for atazanavir compared with lopinavir regimens.
METHODS: In a regional HIV observational cohort in the Asia-Pacific region, patients with viral suppression (2 consecutive viral loads <400 copies/mL) and a CD4 count ≥200 cells per microliter who had 6-monthly CD4 testing were analyzed. The main study end points were occurrence of 1 CD4 count <200 cells per microliter (single CD4 <200) and of 2 CD4 counts <200 cells per microliter within a 6-month period (confirmed CD4 <200). Time to single and confirmed CD4 <200 under biannual versus annual CD4 assessment was compared by generating a hypothetical group comprising the same patients under annual CD4 testing, created by removing every second CD4 count.
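The hypothetical annual-testing group described above can be sketched by dropping every second measurement from the 6-monthly series (illustrative helpers; the confirmation window is simplified here to two consecutive counts, which corresponds to a 6-month window under biannual testing):

```python
def annual_series(biannual_cd4):
    """Hypothetical annual-testing group: keep every second CD4 count
    from the 6-monthly series (the 1st, 3rd, 5th, ...)."""
    return biannual_cd4[::2]

def first_confirmed_below(cd4_series, threshold=200):
    """Index of the first of two consecutive CD4 counts < threshold
    (a 'confirmed CD4 <200' under 6-monthly testing), or None."""
    for i in range(len(cd4_series) - 1):
        if cd4_series[i] < threshold and cd4_series[i + 1] < threshold:
            return i
    return None
```

For instance, a 6-monthly series `[300, 250, 190, 180, 220]` yields a confirmed CD4 <200 at the third measurement, whereas the derived annual series `[300, 190, 220]` never shows two consecutive counts below threshold.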
RESULTS: Among 1538 patients, the rate of single CD4 <200 was 3.45/100 patient-years and of confirmed CD4 <200 was 0.77/100 patient-years. During 5 years of viral suppression, patients with baseline CD4 200-249 cells per microliter were significantly more likely to experience confirmed CD4 <200 compared with patients with higher baseline CD4 [hazard ratio, 55.47 (95% confidence interval: 7.36 to 418.20), P < 0.001 versus baseline CD4 ≥500 cells/μL]. The cumulative probability of confirmed CD4 <200 was also higher in patients with baseline CD4 200-249 cells per microliter compared with patients with higher baseline CD4. There was no significant difference in time to confirmed CD4 <200 between biannual and annual CD4 measurement (P = 0.336).
CONCLUSIONS: Annual CD4 monitoring in virally suppressed HIV patients with a baseline CD4 ≥250 cells per microliter may be sufficient for clinical management.
METHODS: Patients initiating cART between 2006 and 2013 were included. TI was defined as stopping cART for >1 day. Treatment failure was defined as confirmed virological, immunological or clinical failure. Time to treatment failure during cART was analysed using Cox regression, not including periods off treatment. Covariables with P < 0.10 in univariable analyses were included in multivariable analyses, where P < 0.05 was considered statistically significant.
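The follow-up-time bookkeeping implied above, in which periods off treatment contribute no time at risk, can be sketched as follows (a hypothetical helper under the assumption that treatment exposure is recorded as (start, stop) day pairs; not the study's actual code):

```python
def time_on_treatment(periods, event_time):
    """Cumulative days on cART up to event_time, excluding treatment
    interruptions. `periods` is a time-ordered list of (start, stop)
    day pairs during which the patient was on cART."""
    total = 0
    for start, stop in periods:
        if start >= event_time:
            break  # this treatment period begins after the event
        total += min(stop, event_time) - start
    return total
```

A patient treated on days 0-100 and 130-400 (a 30-day interruption) who fails on day 200 accrues 170 days at risk, not 200; this on-treatment time is what enters the Cox model.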
RESULTS: Of 4549 patients from 13 countries in Asia, 3176 (69.8%) were male and the median age was 34 years. A total of 111 (2.4%) had TIs due to AEs and 135 (3.0%) had TIs for other reasons. Median interruption times were 22 days for AE-related and 148 days for non-AE TIs. In multivariable analyses, interruptions >30 days were associated with failure (31-180 days HR = 2.66, 95%CI 1.70-4.16; 181-365 days HR = 6.22, 95%CI 3.26-11.86; and >365 days HR = 9.10, 95%CI 4.27-19.38; all P < 0.001, compared to 0-14 days). Reason for TI was not significantly associated with failure (P = 0.158).
CONCLUSIONS: Duration of interruptions of more than 30 days was the key factor associated with large increases in subsequent risk of treatment failure. If TI is unavoidable, its duration should be minimised to reduce the risk of failure after treatment resumption.
METHODS: Nevirapine population pharmacokinetics were modelled with Pmetrics. A total of 708 observations from 112 patients were included in the model building and validation analysis. Evaluation of the model was based on visual inspection of observed versus predicted (population and individual) concentrations and plots of weighted residuals versus concentration. Accuracy and robustness of the model were evaluated by visual predictive check (VPC). The median parameter estimates from the final model were used to predict individual nevirapine plasma area under the concentration-time curve (AUC) in the validation dataset. A Bland-Altman plot was used to compare the model-predicted AUC with the trapezoidal AUC.
RESULTS: The median nevirapine clearance was 2.92 L/h, the median absorption rate constant was 2.55/h, and the volume of distribution was 78.23 L. Nevirapine pharmacokinetics were best described by a one-compartment model with first-order absorption and a lag time. Weighted residuals for the selected model were homogeneously distributed over the concentration and time ranges. The developed model adequately estimated AUC.
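Using the reported median estimates, the fitted structural model corresponds to the standard one-compartment equation with first-order absorption and a lag time. A minimal sketch, assuming a single oral dose, complete bioavailability (F = 1), and an illustrative 0.5 h lag (the lag value and dose are assumptions, not reported estimates):

```python
import math

# Median parameter estimates from the final model
CL, KA, V = 2.92, 2.55, 78.23   # clearance (L/h), absorption rate (1/h), volume (L)
KE = CL / V                      # elimination rate constant, ~0.037 /h

def conc(t, dose_mg, tlag=0.5):
    """Plasma concentration (mg/L) at time t (h) after a single oral dose,
    one-compartment model with first-order absorption and a lag time.
    F = 1 and the 0.5 h lag are illustrative assumptions."""
    tau = t - tlag
    if tau <= 0:
        return 0.0  # nothing absorbed before the lag time elapses
    return (dose_mg * KA / (V * (KA - KE))
            * (math.exp(-KE * tau) - math.exp(-KA * tau)))

def auc_trapezoid(times, concs):
    """Linear trapezoidal AUC, as used in the Bland-Altman comparison."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))
```

The Bland-Altman step described in the methods would then compare this model-predicted AUC against the trapezoidal AUC computed from observed concentrations.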
CONCLUSIONS: A population pharmacokinetic model of nevirapine was developed that adequately describes nevirapine pharmacokinetics in HIV-infected patients in Malaysia.
METHODS: We investigated serum creatinine (S-Cr) monitoring rates before and during ART and the incidence and prevalence of renal dysfunction after starting TDF, using data from a regional cohort of HIV-infected individuals in the Asia-Pacific region. Time to renal dysfunction was defined as time from TDF initiation to a decline in estimated glomerular filtration rate (eGFR) to <60 mL/min/1.73 m² with a >30% reduction from baseline, using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, or to the decision to stop TDF for reported TDF nephrotoxicity. Predictors of S-Cr monitoring rates were assessed by Poisson regression, and risk factors for developing renal dysfunction were assessed by Cox regression.
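The eGFR endpoint described above can be sketched with the 2009 CKD-EPI creatinine equation (the race coefficient is omitted here for brevity, and serum creatinine is assumed in mg/dL; a sketch of the published formula, not the study's analysis code):

```python
def egfr_ckd_epi(scr_mg_dl, age, female):
    """CKD-EPI 2009 creatinine equation, eGFR in mL/min/1.73 m^2.
    The race coefficient of the original equation is omitted here."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    return (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age
            * (1.018 if female else 1.0))

def renal_dysfunction(baseline_egfr, current_egfr):
    """Study endpoint: eGFR < 60 with a > 30% reduction from baseline."""
    return current_egfr < 60 and current_egfr < 0.7 * baseline_egfr
```

Note that both conditions must hold: an eGFR of 58 in a patient with baseline 80 is below 60 but not a >30% reduction, so it would not meet the endpoint.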
RESULTS: Among 2,425 patients who received TDF, S-Cr monitoring rates increased from 1.01 to 1.84 per person per year after starting TDF (incidence rate ratio 1.68, 95%CI 1.62-1.74, p <0.001). Renal dysfunction on TDF occurred in 103 patients over 5,368 person-years of TDF use (4.2%; incidence 1.75 per 100 person-years). Risk factors for developing renal dysfunction included older age (>50 vs. ≤30 years, hazard ratio [HR] 5.39, 95%CI 2.52-11.50, p <0.001) and use of a PI-based regimen (HR 1.93, 95%CI 1.22-3.07, p = 0.005). An eGFR prior to TDF (pre-TDF eGFR) of ≥60 mL/min/1.73 m² was protective (HR 0.38, 95%CI 0.17-0.85, p = 0.018).
CONCLUSIONS: Renal dysfunction on commencing TDF use was not common; however, older age, lower baseline eGFR and PI-based ART were associated with a higher risk of renal dysfunction during TDF use in adult HIV-infected individuals in the Asia-Pacific region.
METHODS: Of the 37 sites that participated in the randomised, open-label, non-inferiority SECOND-LINE study, eight sites from five countries (Argentina, India, Malaysia, South Africa, and Thailand) participated in the body composition substudy. All sites had a dual energy x-ray absorptiometry (DXA) scanner and all participants enrolled in SECOND-LINE were eligible for inclusion in the substudy. Participants were randomly assigned (1:1), via a computer-generated allocation schedule, to receive either ritonavir-boosted lopinavir plus raltegravir (raltegravir group) or ritonavir-boosted lopinavir plus two or three N(t)RTIs (N[t]RTI group). Randomisation was stratified by site and screening HIV-1 RNA. Participants and investigators were not masked to group assignment, but allocation was concealed until after interventions were assigned. DXA scans were done at weeks 0, 48, and 96. The primary endpoint was mean percentage and absolute change in peripheral limb fat from baseline to week 96. We did intention-to-treat analyses of available data. This substudy is registered with ClinicalTrials.gov, number NCT01513122.
FINDINGS: Between Aug 1, 2010, and July 10, 2011, we recruited 211 participants into the substudy. The intention-to-treat population comprised 102 participants in the N(t)RTI group and 108 participants in the raltegravir group, of whom 91 and 105 participants, respectively, reached 96 weeks. Mean percentage change in limb fat from baseline to week 96 was 16·8% (SD 32·6) in the N(t)RTI group and 28·0% (37·6) in the raltegravir group (mean difference 10·2%, 95% CI 0·1-20·4; p=0·048). Mean absolute change was 1·04 kg (SD 2·29) in the N(t)RTI group and 1·81 kg (2·50) in the raltegravir group (mean difference 0·6, 95% CI -0·1 to 1·3; p=0·10).
INTERPRETATION: Our findings suggest that for people with virological failure of a first-line regimen containing efavirenz plus tenofovir and lamivudine or emtricitabine, the WHO-recommended switch to a ritonavir-boosted protease inhibitor plus zidovudine (a thymidine analogue nucleoside reverse transcriptase inhibitor) and lamivudine might come at the cost of peripheral lipoatrophy. Further study could help to define specific groups of people who might benefit from a switch to an N(t)RTI-sparing second-line ART regimen.
FUNDING: The Kirby Institute and the Australian National Health and Medical Research Council.