METHODS: Prospectively collected longitudinal data from patients in Thailand, Hong Kong, Malaysia, Japan, Taiwan, and South Korea were provided for analysis. Covariates included demographics, hepatitis B and C coinfections, baseline CD4 T lymphocyte count, and plasma HIV-1 RNA levels. Clinical deterioration (a new diagnosis of Centers for Disease Control and Prevention category B/AIDS-defining illness or death) was assessed by proportional hazards models. Surrogate endpoints were 12-month change in CD4 cell count and virologic suppression post therapy, evaluated by linear and logistic regression, respectively.
RESULTS: Of 1105 patients, 1036 (93.8%) infected with CRF01_AE or subtype B were eligible for inclusion in clinical deterioration analyses and contributed 1546.7 person-years of follow-up (median: 413 days, interquartile range: 169-672 days). Patients aged >40 years demonstrated smaller immunological increases (P = 0.002) and higher risk of clinical deterioration (hazard ratio = 2.17; P = 0.008). Patients with baseline CD4 cell counts >200 cells per microliter had lower risk of clinical deterioration (hazard ratio = 0.373; P = 0.003). A total of 532 patients (48.1% of eligible) had CD4 counts available at baseline and 12 months post therapy for inclusion in immunologic analyses. Patients infected with subtype B had larger increases in CD4 counts at 12 months (P = 0.024). A total of 530 patients (48.0% of eligible) were included in virological analyses, with no differences in response found between genotypes.
CONCLUSIONS: Results suggest that patients infected with CRF01_AE have reduced immunologic response to therapy at 12 months, compared with subtype B-infected counterparts. Clinical deterioration was associated with low baseline CD4 counts and older age. The lack of differences in virologic outcomes suggests that patients infected with either genotype can achieve virological suppression.
DESIGN: A collaboration of 12 prospective cohort studies from Europe and the United States (the HIV-CAUSAL Collaboration) that includes 62 760 HIV-infected, therapy-naive individuals followed for an average of 3.3 years. Inverse probability weighting of marginal structural models was used to adjust for measured confounding by indication.
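Inverse probability weighting, as used above, reweights each individual by the inverse of the probability of the treatment actually received given measured covariates; the stabilized version multiplies by the marginal treatment probability. A minimal sketch of that weight calculation (the probabilities below are hypothetical; in practice P(A = 1 | L) would come from a fitted treatment model):

```python
def stabilized_ip_weight(treated, p_treat_given_L, p_treat_marginal):
    """Stabilized inverse probability of treatment weight:
    P(A = a) / P(A = a | L), where a is the treatment actually received."""
    if treated:
        return p_treat_marginal / p_treat_given_L
    return (1.0 - p_treat_marginal) / (1.0 - p_treat_given_L)

# Hypothetical example: a treated individual whose covariates made treatment
# likely (0.8) is down-weighted relative to the marginal rate (0.4).
w_treated = stabilized_ip_weight(True, 0.8, 0.4)     # 0.5
w_untreated = stabilized_ip_weight(False, 0.2, 0.4)  # 0.75
```

Fitting a weighted hazard model to the reweighted pseudo-population then adjusts for the measured confounding by indication, as in the marginal structural models described above.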
RESULTS: Two thousand and thirty-nine individuals died during the follow-up. The mortality hazard ratio was 0.48 (95% confidence interval 0.41-0.57) for cART initiation versus no initiation. In analyses stratified by CD4 cell count at baseline, the corresponding hazard ratios were 0.29 (0.22-0.37) for less than 100 cells/microl, 0.33 (0.25-0.44) for 100 to less than 200 cells/microl, 0.38 (0.28-0.52) for 200 to less than 350 cells/microl, 0.55 (0.41-0.74) for 350 to less than 500 cells/microl, and 0.77 (0.58-1.01) for 500 cells/microl or more. The estimated hazard ratio varied with years since initiation of cART from 0.57 (0.49-0.67) for less than 1 year since initiation to 0.21 (0.14-0.31) for 5 years or more (P value for trend <0.001).
CONCLUSION: We estimated that cART halved the average mortality rate in HIV-infected individuals. The mortality reduction was greater in those with worse prognosis at the start of follow-up.
METHODS: Long-term LTFU was defined as LTFU occurring after 5 years on ART. Two definitions of LTFU were used: (1) patients not seen in the previous 12 months; and (2) patients not seen in the previous 6 months. Factors associated with LTFU were analysed using competing risk regression.
RESULTS: Under the 12-month definition, the LTFU rate was 2.0 per 100 person-years (PY) [95% confidence interval (CI) 1.8-2.2] among 4889 patients included in the study. LTFU was associated with age > 50 years [sub-hazard ratio (SHR) 1.64; 95% CI 1.17-2.31] compared with 31-40 years, viral load ≥ 1000 copies/mL (SHR 1.86; 95% CI 1.16-2.97) compared with viral load < 1000 copies/mL, and hepatitis C coinfection (SHR 1.48; 95% CI 1.06-2.05). LTFU was less likely to occur in females, in individuals with higher CD4 counts, in those with self-reported adherence ≥ 95%, and in those living in high-income countries. The 6-month LTFU definition produced an incidence rate of 3.2 per 100 PY (95% CI 2.9-3.4) and had similar associations but with greater risks of LTFU for ART initiation in later years (2006-2009: SHR 2.38; 95% CI 1.93-2.94; and 2010-2011: SHR 4.26; 95% CI 3.17-5.73) compared with 2003-2005.
CONCLUSIONS: The long-term LTFU rate in our cohort was low, with older age being associated with LTFU. The increased risk of LTFU with later years of ART initiation in the 6-month analysis, but not the 12-month analysis, suggests a possible move towards longer intervals between HIV clinic visits in Asia.
METHODS: PLHIV enrolled in the Therapeutics, Research, Education and AIDS Training in Asia (TREAT Asia) HIV Observational Database (TAHOD) who initiated ART with a low baseline CD4 count were included; those followed for more than 1 year were censored at 12 months. Competing risk regression was used to analyse risk factors, with loss to follow-up as a competing risk.
RESULTS: A total of 1813 PLHIV were included in the study, of whom 74% were male. With 73 (4%) deaths, the overall first-year mortality rate was 4.27 per 100 person-years (PY). Thirty-eight deaths (52%) were AIDS-related, 10 (14%) were immune reconstitution inflammatory syndrome (IRIS)-related, 13 (18%) were non-AIDS-related and 12 (16%) had an unknown cause. Risk factors for mortality included a low body mass index (BMI). A higher baseline CD4 count (51-100 cells/μL: SHR 0.28; 95% CI 0.14-0.55; and > 100 cells/μL: SHR 0.12; 95% CI 0.05-0.26) was associated with a reduced hazard of mortality compared with a CD4 count ≤ 25 cells/μL.
CONCLUSIONS: Fifty-two per cent of early deaths were AIDS-related. Efforts to initiate ART at CD4 counts > 50 cells/μL are associated with improved short-term survival rates, even in those with late stages of HIV disease.
METHODS: We did a cohort analysis of TB cases in SECOND-LINE. TB cases included any clinical or laboratory-confirmed diagnoses and/or commencement of treatment for TB after randomization. Baseline factors associated with TB were analyzed using Cox regression stratified by site.
RESULTS: TB cases occurred at sites in Argentina, India, Malaysia, Nigeria, South Africa, and Thailand, in a cohort of 355 of the 541 SECOND-LINE participants. Overall, 20 cases of TB occurred, an incidence rate of 3.4 per 100 person-years (95% CI: 2.1 to 5.1). Increased TB risk was associated with a low CD4+-cell count (≤200 cells/μL), high viral load (>200 copies/mL), low platelet count (<150 × 10⁹/L), and low total serum cholesterol (≤4.5 mmol/L) at baseline. An increased risk of death was associated with TB, adjusted for CD4, platelets, and cholesterol. A low CD4+-cell count was significantly associated with incident TB, mortality, other AIDS diagnoses, and virologic failure.
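An incidence rate per 100 person-years is simply events divided by person-time, scaled by 100; rearranging the figures quoted above recovers the implied follow-up time. A quick arithmetic check (no data assumed beyond the quoted figures):

```python
def rate_per_100py(events, person_years):
    """Incidence rate per 100 person-years of follow-up."""
    return 100.0 * events / person_years

# 20 TB cases at 3.4 per 100 PY implies roughly 100 * 20 / 3.4 ≈ 588 PY
implied_py = 100.0 * 20 / 3.4
```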
DISCUSSION: The risk of TB remains elevated in PLHIV in the setting of second-line HIV therapy in TB endemic regions. TB was associated with a greater risk of death. The finding that low CD4+ T-cell count was significantly associated with poor outcomes in this population supports the value of CD4+ monitoring in HIV clinical management.
METHODS: Data on children with perinatally acquired HIV aged <18 years on first-line, non-nucleoside reverse transcriptase inhibitor-based cART with viral suppression (two consecutive pVL <400 copies/mL over a six-month period) were included from a regional cohort study; those exposed to prior mono- or dual antiretroviral treatment were excluded. Frequency of pVL monitoring was determined at the site level based on the median rate of pVL measurement: annual, 0.75-1.5 tests/patient/year; semi-annual, >1.5 tests/patient/year. Treatment failure was defined as virologic failure (two consecutive pVL >1000 copies/mL), change of antiretroviral drug class, or death. Baseline was the date of the second consecutive pVL <400 copies/mL. Competing risk regression models were used to identify predictors of treatment failure.
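The site-level monitoring definition above can be made concrete as a small classifier (thresholds taken directly from the definition; the "below-annual" branch is an assumption for rates under 0.75 tests/patient/year, which fell outside the study's site categories):

```python
def monitoring_category(tests_per_patient_year):
    """Classify a site by its median pVL testing rate:
    semi-annual > 1.5 tests/patient/year; annual 0.75-1.5."""
    if tests_per_patient_year > 1.5:
        return "semi-annual"
    if tests_per_patient_year >= 0.75:
        return "annual"
    return "below-annual"  # outside the study's categories (assumption)
```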
RESULTS: During January 2008 to March 2015, there were 1220 eligible children from 10 sites that performed at least annual pVL monitoring; 1042 (85%) and 178 (15%) were from sites performing annual (n = 6) and semi-annual (n = 4) pVL monitoring, respectively. Pre-cART, 675 children (55%) had World Health Organization clinical stage 3 or 4, the median nadir CD4 percentage was 9%, and the median pVL was 5.2 log10 copies/mL. At baseline, the median age was 9.2 years, 64% were on nevirapine-based regimens, the median cART duration was 1.6 years, and the median CD4 percentage was 26%. Over the follow-up period, 258 (25%) CLWH with annual and 40 (23%) with semi-annual pVL monitoring developed treatment failure, corresponding to incidence rates of 5.4 (95% CI: 4.8 to 6.1) and 4.3 (95% CI: 3.1 to 5.8) per 100 patient-years of follow-up, respectively (p = 0.27). In multivariable analyses, the frequency of pVL monitoring was not associated with treatment failure (adjusted hazard ratio: 1.12; 95% CI: 0.80 to 1.59).
CONCLUSIONS: Annual compared to semi-annual pVL monitoring was not associated with an increased risk of treatment failure in our cohort of virally suppressed children with perinatally acquired HIV on first-line NNRTI-based cART.
METHODS: Patients enrolled in the TREAT Asia HIV Observational Database cohort and on cART for more than six months were analysed. Comorbidities included hypertension, diabetes, dyslipidaemia and impaired renal function. Treatment outcomes of patients ≥50 years of age with comorbidities were compared with those <50 years and those ≥50 years without comorbidities. We analysed 5411 patients for virological failure and 5621 for immunological failure. Our failure outcomes were defined in line with the World Health Organization 2016 guidelines. Cox regression analysis was used to analyse time to first virological and immunological failure.
RESULTS: The incidence of virological failure was 7.72/100 person-years. Virological failure was less likely in patients with better adherence and higher CD4 count at cART initiation. Those acquiring HIV through intravenous drug use were more likely to have virological failure compared to those infected through heterosexual contact. On univariate analysis, patients aged <50 years without comorbidities were more likely to experience virological failure than those aged ≥50 years with comorbidities (hazard ratio 1.75, 95% confidence interval (CI) 1.31 to 2.33, p < 0.05).
METHODS: We compared these regimens with respect to clinical, immunologic, and virologic outcomes using data from prospective studies of human immunodeficiency virus (HIV)-infected individuals in Europe and the United States in the HIV-CAUSAL Collaboration, 2004-2013. Antiretroviral therapy-naive and AIDS-free individuals were followed from the time they started a lopinavir or an atazanavir regimen. We estimated the 'intention-to-treat' effect for atazanavir vs lopinavir regimens on each of the outcomes.
RESULTS: A total of 6668 individuals started a lopinavir regimen (213 deaths, 457 AIDS-defining illnesses or deaths), and 4301 individuals started an atazanavir regimen (83 deaths, 157 AIDS-defining illnesses or deaths). The adjusted intention-to-treat hazard ratios for atazanavir vs lopinavir regimens were 0.70 (95% confidence interval [CI], .53-.91) for death, 0.67 (95% CI, .55-.82) for AIDS-defining illness or death, and 0.91 (95% CI, .84-.99) for virologic failure at 12 months. The mean 12-month increase in CD4 count was 8.15 (95% CI, -.13 to 16.43) cells/µL higher in the atazanavir group. Estimates differed by NRTI backbone.
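For a symmetric Wald-type interval like the CD4 figure above, the point estimate and standard error can be recovered from the CI bounds: the midpoint gives the estimate and the half-width divided by 1.96 gives the SE. A sketch of that back-calculation:

```python
def estimate_and_se_from_ci(lower, upper, z=1.96):
    """Recover the point estimate and standard error from a
    symmetric Wald-type 95% confidence interval."""
    estimate = (lower + upper) / 2.0
    se = (upper - lower) / (2.0 * z)
    return estimate, se

# The CD4 difference above: 8.15 (95% CI, -0.13 to 16.43) cells/uL
est, se = estimate_and_se_from_ci(-0.13, 16.43)  # est ≈ 8.15, se ≈ 4.22
```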
CONCLUSIONS: Our estimates are consistent with a lower mortality, a lower incidence of AIDS-defining illness, a greater 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for atazanavir compared with lopinavir regimens.
METHODS: We used Cox regression to analyze data from a cohort of Asian children.
RESULTS: A total of 2608 children were included; the median age at cART initiation was 5.7 years. A time-updated weight-for-age z score < -3 was associated with mortality (P < 0.001), independent of CD4%, and a score < -2 was associated with immunological failure (P ≤ 0.03), independent of age at cART initiation.
CONCLUSIONS: Weight monitoring provides useful data to inform clinical management of children on cART in resource-limited settings.
METHODS: Blips were defined as a detectable VL (≥50 copies/mL) preceded and followed by an undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VLs ≥50 copies/mL. Cox proportional hazards models of time to first VF after entry were developed.
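The blip and VF definitions above can be expressed as a small function over a patient's ordered sequence of VL measurements (the example series below is hypothetical):

```python
def find_blips_and_vf(vl, limit=50):
    """Identify blips (a detectable VL, >= limit, immediately preceded and
    followed by undetectable VLs) and the first virological failure
    (two consecutive VLs >= limit). Returns (blip indices, VF index or None)."""
    blips, vf_index = [], None
    for i, v in enumerate(vl):
        if v < limit:
            continue
        # first of two consecutive detectable VLs marks virological failure
        if vf_index is None and i + 1 < len(vl) and vl[i + 1] >= limit:
            vf_index = i
        # isolated detectable VL bracketed by undetectable VLs is a blip
        if 0 < i < len(vl) - 1 and vl[i - 1] < limit and vl[i + 1] < limit:
            blips.append(i)
    return blips, vf_index

# Hypothetical series: one blip at index 1, VF starting at index 4.
blips, vf = find_blips_and_vf([20, 200, 30, 40, 600, 700, 20])
```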
RESULTS: A total of 5040 patients (AHOD, n = 2597; TAHOD, n = 2521) were included; 910 (18%) experienced blips. Blips were ever experienced by 744 (21%) of high-income and 166 (11%) of middle/low-income participants, and 711 (14%) experienced blips prior to virological failure [559 (16%) of high-income and 152 (10%) of middle/low-income participants]. VL testing occurred at a median frequency of 175 and 91 days in middle/low- and high-income sites, respectively. Longer time to VF occurred in middle/low-income sites compared with high-income sites [adjusted hazard ratio (AHR) 0.41; p < 0.001], adjusted for year of first cART, hepatitis C coinfection, cART regimen, and prior blips. Prior blips were not a significant predictor of VF in univariate analysis (AHR 0.97, p = 0.82). Differing magnitudes of blips were also not significant univariate predictors of virological failure (p = 0.360 for blips of 50 to ≤1000 copies/mL, p = 0.309 for 50 to ≤400 copies/mL and p = 0.300 for 50 to ≤200 copies/mL). A total of 209 of 866 (24%) patients were switched to an alternative regimen in the setting of a blip.
CONCLUSION: Although a lower proportion of patients in middle/low-income settings experienced blips, blips were not significantly associated with subsequent virological failure in either setting. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of blips.
DESIGN: Prospective studies of HIV-infected individuals in Europe and the US included in the HIV-CAUSAL Collaboration.
METHODS: Antiretroviral therapy-naive and AIDS-free individuals were followed from the time they started an NRTI, efavirenz or nevirapine, classified as following one or both types of regimens at baseline, and censored when they started an ineligible drug or at 6 months if their regimen was not yet complete. We estimated the 'intention-to-treat' effect for nevirapine versus efavirenz regimens on clinical, immunologic, and virologic outcomes. Our models included baseline covariates and adjusted for potential bias introduced by censoring via inverse probability weighting.
RESULTS: A total of 15 336 individuals initiated an efavirenz regimen (274 deaths, 774 AIDS-defining illnesses) and 8129 individuals initiated a nevirapine regimen (203 deaths, 441 AIDS-defining illnesses). The intention-to-treat hazard ratios [95% confidence interval (CI)] for nevirapine versus efavirenz regimens were 1.59 (1.27, 1.98) for death and 1.28 (1.09, 1.50) for AIDS-defining illness. Individuals on nevirapine regimens experienced a 12-month increase in CD4 cell count that was 11.49 cells/μl smaller, and were 52% more likely to have virologic failure at 12 months, than those on efavirenz regimens.
CONCLUSIONS: Our intention-to-treat estimates are consistent with a lower mortality, a lower incidence of AIDS-defining illness, a larger 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for efavirenz compared with nevirapine.