METHODS: Blips were defined as a detectable VL (≥50 copies/mL) preceded and followed by an undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VL ≥50 copies/mL. Cox proportional hazards models of time to first VF after cohort entry were developed.
RESULTS: 5040 patients (AHOD n = 2597, TAHOD n = 2521) were included; 910 (18%) ever experienced blips: 744 (21%) of high-income and 166 (11%) of middle/low-income participants. 711 (14%) experienced blips prior to virological failure: 559 (16%) of high-income and 152 (10%) of middle/low-income participants. VL testing occurred at a median frequency of every 175 and 91 days in middle/low- and high-income sites, respectively. Time to VF was longer in middle/low-income sites than in high-income sites (adjusted hazard ratio (AHR) 0.41; p < 0.001), adjusted for year of first cART, hepatitis C co-infection, cART regimen, and prior blips. Prior blips were not a significant predictor of VF in univariate analysis (HR 0.97, p = 0.82), and blips of differing magnitudes were also not significant univariate predictors of VF (p = 0.360 for blips 50 to ≤1000, p = 0.309 for blips 50 to ≤400, and p = 0.300 for blips 50 to ≤200 copies/mL). 209 of 866 (24%) patients were switched to an alternative regimen in the setting of a blip.
CONCLUSION: Blips occurred in a lower proportion of participants in middle/low-income settings, but prior blips did not significantly predict VF in either setting. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of a blip.
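The blip and VF definitions above amount to a simple scan over a patient's VL series; a minimal sketch of that logic (the function name and list-based input are illustrative, not from the study):

```python
def classify(vl, thresh=50):
    """Classify detectable VLs (copies/mL) per the definitions above:
    blip = VL >= thresh preceded and followed by VL < thresh;
    VF = two consecutive VL >= thresh. Returns (blip indices, VF flag)."""
    blips, vf = [], False
    for i, v in enumerate(vl):
        if v < thresh:
            continue
        if i + 1 < len(vl) and vl[i + 1] >= thresh:
            vf = True  # two consecutive detectable VLs
        elif 0 < i < len(vl) - 1 and vl[i - 1] < thresh and vl[i + 1] < thresh:
            blips.append(i)  # isolated detectable VL = blip
    return blips, vf
```

For example, `classify([40, 100, 30])` flags a blip, while `classify([40, 100, 200, 30])` flags VF.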
METHODS: HIV-positive patients from the Australian HIV Observational Database (AHOD) and the TREAT Asia HIV Observational Database (TAHOD) who met the inclusion criteria were included. In these analyses, Asian and Caucasian status was defined by cohort. Factors associated with a low CD4:CD8 ratio (<0.2) prior to ART commencement, and with achieving a normal CD4:CD8 ratio (>1) at 12 and 24 months after ART commencement, were assessed using logistic regression.
RESULTS: There were 591 patients from AHOD and 2,620 patients from TAHOD who met the inclusion criteria. TAHOD patients had a significantly (P<0.001) lower odds of having a baseline (prior to ART initiation) CD4:CD8 ratio greater than 0.2. After 12 months of ART, AHOD patients were more than twice as likely to achieve a normal CD4:CD8 ratio compared to TAHOD patients (15% versus 6%). However, after adjustment for confounding factors there was no significant difference between cohorts in the odds of achieving a CD4:CD8 ratio >1 (P=0.475).
CONCLUSIONS: We found a significantly lower CD4:CD8 ratio prior to commencing ART in TAHOD than in AHOD, even after adjusting for confounders. However, after adjustment, there was no significant difference between the cohorts in the odds of achieving a normal ratio. Baseline CD4+ and CD8+ counts appear to be the main driver of this difference between the two populations.
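The distinction between "twice as likely" (a ratio of proportions) and the odds ratios modelled by logistic regression can be made concrete with the reported 15% versus 6% figures; a minimal sketch (helper names are hypothetical, and these are unadjusted quantities, unlike the study's adjusted estimates):

```python
def odds(p):
    # Convert a proportion to odds: p / (1 - p).
    return p / (1.0 - p)

def odds_ratio(p1, p0):
    # Unadjusted odds ratio comparing two groups' proportions.
    return odds(p1) / odds(p0)

# 15% of AHOD vs 6% of TAHOD patients achieved a normal CD4:CD8 ratio.
risk_ratio = 0.15 / 0.06          # ratio of proportions, "2.5x as likely"
or_unadj = odds_ratio(0.15, 0.06) # unadjusted odds ratio, ~2.76
```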
METHODS: Data from two regional cohort observational databases were analyzed for trends in median CD4 cell count at ART initiation and the proportion of late ART initiation (CD4 cell count <200 cells/mm3 or prior AIDS diagnosis). Predictors of late ART initiation and mortality were determined.
RESULTS: A total of 2737 HIV-positive ART-naïve patients from 22 sites in 13 Asian countries and territories were eligible. The overall median (IQR) CD4 cell count at ART initiation was 150 (46-241) cells/mm3. Median CD4 cell counts at ART initiation increased over time, from a low point of 115 cells/mm3 in 2008 to a peak of 302 cells/mm3 after 2011 (p for trend 0.002). The proportion of patients with late ART initiation significantly decreased over time from 79.1% before 2007 to 36.3% after 2011 (p for trend <0.001). Factors associated with late ART initiation were year of ART initiation (e.g. 2010 vs. before 2007; OR 0.40, 95% CI 0.27-0.59; p<0.001), sex (male vs. female; OR 1.51, 95% CI 1.18-1.93; p=0.001) and HIV exposure risk (heterosexual vs. homosexual; OR 1.66, 95% CI 1.24-2.23; p=0.001 and intravenous drug use vs. homosexual; OR 3.03, 95% CI 1.77-5.21; p<0.001). Factors associated with mortality after ART initiation were late ART initiation (HR 2.13, 95% CI 1.19-3.79; p=0.010), sex (male vs. female; HR 2.12, 95% CI 1.31-3.43; p=0.002), age (≥51 vs. ≤30 years; HR 3.91, 95% CI 2.18-7.04; p<0.001) and hepatitis C serostatus (positive vs. negative; HR 2.48, 95% CI 1.-4.36; p=0.035).
CONCLUSIONS: The median CD4 cell count at ART initiation among Asian patients increased significantly over time, but the proportion of patients with late ART initiation remains substantial. ART initiation at higher CD4 cell counts remains a challenge. Strategic interventions to increase earlier diagnosis of HIV infection and to prompt more rapid linkage to ART must be implemented.
SETTING: An Asian cohort in 16 pediatric HIV services across 6 countries.
METHODS: From 2005 to 2014, patients younger than 20 years who achieved virologic suppression and had subsequent viral load testing were included. Early virologic failure was defined as an HIV RNA ≥1000 copies per milliliter within 12 months of virologic suppression, and late virologic failure as an HIV RNA ≥1000 copies per milliliter more than 12 months after virologic suppression. Characteristics at combination antiretroviral therapy initiation and at virologic suppression were described, and a competing-risk time-to-event analysis was used to determine the cumulative incidence of virologic failure and the factors at virologic suppression associated with early and late virologic failure.
RESULTS: Of the 1105 patients included in the analysis, 182 (17.9%) experienced virologic failure. The median age at virologic suppression was 6.9 years, and the median time to virologic failure was 24.6 months after virologic suppression. The incidence rate for a first virologic failure event was 3.3 per 100 person-years. Factors at virologic suppression associated with late virologic failure included older age, a mostly rural clinic setting, tuberculosis, protease inhibitor-based regimens, and prior early virologic failure. No risk factors were identified for early virologic failure.
CONCLUSIONS: Around 1 in 5 patients in our cohort experienced virologic failure after achieving virologic suppression. Targeted interventions are required to manage complex treatment scenarios, including adolescents, patients with tuberculosis coinfection, and those with poor virologic control.
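The reported incidence rate of 3.3 per 100 person-years is simply events divided by total follow-up time; a minimal sketch (the person-year total below is back-calculated from the reported figures for illustration only, not taken from the study):

```python
def incidence_per_100py(events, person_years):
    # Incidence rate expressed per 100 person-years of follow-up.
    return 100.0 * events / person_years

# 182 first virologic failure events; ~5515 person-years is the
# hypothetical follow-up implied by the reported 3.3/100 py rate.
rate = incidence_per_100py(182, 5515)
```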
METHODS: Regional Asian data (2001-2016) were analyzed to describe perinatally HIV-infected adolescents (PHIVA) who experienced ≥2 weeks of lamivudine or emtricitabine monotherapy or treatment interruption, and trends in CD4 count and HIV viral load during and after these episodes. Survival analyses were used for World Health Organization (WHO) stage III/IV clinical and immunologic event-free survival during monotherapy or treatment interruption, and Poisson regression was used to determine factors associated with monotherapy or treatment interruption.
RESULTS: Of 3,448 PHIVA, 84 (2.4%) experienced 94 monotherapy episodes, and 147 (4.3%) experienced 174 treatment interruptions. Monotherapy was associated with older age, HIV RNA >400 copies/mL, younger age at ART initiation, and exposure to ≥2 combination ART regimens. Treatment interruption was associated with CD4 count <350 cells/μL, HIV RNA ≥1,000 copies/mL, ART adverse events, and commencing ART at age ≥10 years compared with age <3 years. WHO clinical stage III/IV 1-year event-free survival was 96% and 85% for the monotherapy and treatment interruption cohorts, respectively. WHO immunologic stage III/IV 1-year event-free survival was 52% for both cohorts. Those who experienced monotherapy or treatment interruption for more than 6 months had worse immunologic and virologic outcomes.
CONCLUSIONS: Until challenges of treatment adherence, engagement in care, and combination ART durability/tolerability are met, monotherapy and treatment interruption will lead to poor long-term outcomes.
METHODS: Data on children with perinatally acquired HIV aged <18 years on first-line, non-nucleoside reverse transcriptase inhibitor-based cART with viral suppression (two consecutive pVL <400 copies/mL over a six-month period) were included from a regional cohort study; those exposed to prior mono- or dual antiretroviral treatment were excluded. Frequency of pVL monitoring was determined at the site level based on the median rate of pVL measurement: annual, 0.75 to 1.5 tests/patient/year; semi-annual, >1.5 tests/patient/year. Treatment failure was defined as virologic failure (two consecutive pVL >1000 copies/mL), change of antiretroviral drug class, or death. Baseline was the date of the second consecutive pVL <400 copies/mL. Competing risk regression models were used to identify predictors of treatment failure.
RESULTS: Between January 2008 and March 2015, there were 1220 eligible children from 10 sites that performed at least annual pVL monitoring; 1042 (85%) and 178 (15%) were from sites performing annual (n = 6) and semi-annual (n = 4) pVL monitoring, respectively. Pre-cART, 675 children (55%) had World Health Organization clinical stage 3 or 4 disease, the median nadir CD4 percentage was 9%, and the median pVL was 5.2 log10 copies/mL. At baseline, the median age was 9.2 years, 64% were on nevirapine-based regimens, the median cART duration was 1.6 years, and the median CD4 percentage was 26%. Over the follow-up period, 258 (25%) children with annual and 40 (23%) with semi-annual pVL monitoring developed treatment failure, corresponding to incidence rates of 5.4 (95% CI: 4.8 to 6.1) and 4.3 (95% CI: 3.1 to 5.8) per 100 patient-years of follow-up, respectively (p = 0.27). In multivariable analyses, the frequency of pVL monitoring was not associated with treatment failure (adjusted hazard ratio: 1.12; 95% CI: 0.80 to 1.59).
CONCLUSIONS: Annual compared to semi-annual pVL monitoring was not associated with an increased risk of treatment failure in our cohort of virally suppressed children with perinatally acquired HIV on first-line NNRTI-based cART.
METHODS: Children living with HIV (CLHIV) aged <18 years who were on first-line cART for ≥12 months and had virological suppression (two consecutive plasma viral loads [pVL] <50 copies/mL) were included. Those who started treatment with mono/dual antiretroviral therapy, had a history of treatment interruption >14 days, or received treatment and care at sites with a pVL lower limit of detection >50 copies/mL were excluded. LLV was defined as a pVL of 50 to 1000 copies/mL, and VF as a single pVL >1000 copies/mL. Baseline was the time of the second consecutive pVL <50 copies/mL.
METHODS: We used Cox regression to analyze data from a cohort of Asian children.
RESULTS: A total of 2608 children were included; the median age at cART initiation was 5.7 years. A time-updated weight-for-age z score < -3 was associated with mortality (P < 0.001) independent of CD4%, and a z score < -2 was associated with immunological failure (P ≤ 0.03) independent of age at cART initiation.
CONCLUSIONS: Weight monitoring provides useful data to inform clinical management of children on cART in resource-limited settings.
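Weight-for-age z scores of the kind used above are conventionally computed from growth-standard LMS parameters; a minimal sketch (the L, M, S values below are invented for illustration and must be looked up per age and sex from reference tables in practice):

```python
def lms_zscore(x, L, M, S):
    # LMS method: z = ((x/M)**L - 1) / (L * S); as L -> 0 the formula
    # tends to ln(x/M) / S (that limiting case is omitted for brevity).
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical parameters: median weight M = 18 kg, S = 0.11, L = 1.
z = lms_zscore(14.0, L=1.0, M=18.0, S=0.11)  # ~ -2.0 for this child
severely_underweight = z < -3   # mortality-associated threshold (text)
underweight = z < -2            # immunological-failure threshold (text)
```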
Methods: Study end points were as follows: (1) a CD4 count <200 cells/mm3 followed by a CD4 count ≥200 cells/mm3 (transient CD4 <200); (2) CD4 count <200 cells/mm3 confirmed within 6 months (confirmed CD4 <200); and (3) a new or recurrent World Health Organization (WHO) stage 3 or 4 illness (clinical failure). Kaplan-Meier curves and Cox regression were used to evaluate rates and predictors of transient CD4 <200, confirmed CD4 <200, and clinical failure among virally suppressed children aged 5-15 years who were enrolled in the TREAT Asia Pediatric HIV Observational Database.
Results: Data from 967 children were included in the analysis. At the time of confirmed viral suppression, median age was 10.2 years, 50.4% of children were female, and 95.4% were perinatally infected with HIV. Median CD4 cell count was 837 cells/mm3, and 54.8% of children were classified as having WHO stage 3 or 4 disease. In total, 18 transient CD4 <200 events, 2 confirmed CD4 <200 events, and 10 clinical failures occurred at rates of 0.73 (95% confidence interval [95% CI], 0.46-1.16), 0.08 (95% CI, 0.02-0.32), and 0.40 (95% CI, 0.22-0.75) events per 100 patient-years, respectively. CD4 <500 cells/mm3 at the time of viral suppression confirmation was associated with higher rates of both CD4 outcomes.
Conclusions: Regular CD4 testing may be unnecessary for virally suppressed children aged 5-15 years with CD4 ≥500 cells/mm3.
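The Kaplan-Meier curves named in the methods estimate event-free survival as a product over observed event times; a minimal sketch on toy data (the times and event indicators below are invented, not study data):

```python
def kaplan_meier(data):
    # data: list of (time, event) pairs, event=1 for failure, 0 for
    # censoring. Returns (time, S(t)) at each failure time, where the
    # survival estimate is updated as S *= (1 - d/n) with n at risk.
    data = sorted(data)
    n, s, curve = len(data), 1.0, []
    for i, (t, event) in enumerate(data):
        at_risk = n - i
        if event:
            s *= 1.0 - 1.0 / at_risk
            curve.append((t, s))
    return curve

# Toy cohort: failures at t=2, 5, 7; one censoring at t=3.
km = kaplan_meier([(2, 1), (3, 0), (5, 1), (7, 1)])
```

Per-observation updating handles tied failure times correctly, since consecutive factors telescope to (n - d)/n.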
METHODS: In a regional HIV observational cohort in the Asia-Pacific region, patients with viral suppression (two consecutive viral loads <400 copies/mL) and a CD4 count ≥200 cells per microliter who had 6-monthly CD4 testing were analyzed. The main study end points were the occurrence of one CD4 count <200 cells per microliter (single CD4 <200) and of two CD4 counts <200 cells per microliter within a 6-month period (confirmed CD4 <200). Time to single and confirmed CD4 <200 under biannual versus annual CD4 assessment was compared by generating a hypothetical group comprising the same patients under annual CD4 testing, created by removing every second CD4 count.
RESULTS: Among 1538 patients, the rate of single CD4 <200 was 3.45/100 patient-years and of confirmed CD4 <200 was 0.77/100 patient-years. During 5 years of viral suppression, patients with baseline CD4 200-249 cells per microliter were significantly more likely to experience confirmed CD4 <200 than patients with higher baseline CD4 [hazard ratio, 55.47 (95% confidence interval: 7.36 to 418.20), P < 0.001 versus baseline CD4 ≥500 cells/μL]. Cumulative probabilities of confirmed CD4 <200 were also higher in patients with baseline CD4 200-249 cells per microliter than in patients with higher baseline CD4. There was no significant difference in time to confirmed CD4 <200 between biannual and annual CD4 measurement (P = 0.336).
CONCLUSIONS: Annual CD4 monitoring in virally suppressed HIV patients with a baseline CD4 ≥250 cells per microliter may be sufficient for clinical management.
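The hypothetical annual-testing group described in the methods is simply the same measurement series with every second value dropped; a minimal sketch (variable names and CD4 values are illustrative):

```python
def annualize(biannual_cd4):
    # Keep the 1st, 3rd, 5th, ... measurements of a 6-monthly CD4
    # series, simulating annual testing in the same patient.
    return biannual_cd4[::2]

biannual = [520, 480, 410, 360, 230, 190]   # hypothetical 6-monthly CD4s
annual = annualize(biannual)                # every second count removed
```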
METHODS: HIV-infected adults enrolled in the TREAT Asia HIV Observational Database were eligible if they had an HIV RNA measurement documented at the time of ART initiation. The dataset was randomly split into a derivation data set (75% of patients) and a validation data set (25%). Factors associated with pre-treatment HIV RNA <100,000 copies/mL were evaluated by logistic regression adjusted for study site. A prediction model and prediction scores were created.
RESULTS: A total of 2592 patients were included in the analysis. Median [interquartile range (IQR)] age was 35.8 (29.9-42.5) years; CD4 count was 147 (50-248) cells/mm3; and pre-treatment HIV RNA was 100,000 (34,045-301,075) copies/mL. Factors associated with pre-treatment HIV RNA <100,000 copies/mL were age <30 years [OR 1.40 vs. 41-50 years; 95% confidence interval (CI) 1.10-1.80, p = 0.01], body mass index >30 kg/m2 (OR 2.4 vs. <18.5 kg/m2; 95% CI 1.1-5.1, p = 0.02), anemia (OR 1.70; 95% CI 1.40-2.10), CD4 count >350 cells/mm3 (OR 3.9 vs. <100 cells/mm3; 95% CI 2.0-4.1), and total lymphocyte count >2000 cells/mm3 (OR 1.7 vs. <1000 cells/mm3; 95% CI 1.3-2.3). A prediction score >25 yielded a sensitivity of 46.7%, specificity of 79.1%, positive predictive value of 67.7%, and negative predictive value of 61.2% for predicting pre-treatment HIV RNA <100,000 copies/mL among derivation patients.
CONCLUSION: A prediction model for pre-treatment HIV RNA <100,000 copies/mL produced an area under the ROC curve of 0.70. A larger sample size for prediction model development and validation is warranted.
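The reported sensitivity, specificity, PPV and NPV follow directly from the 2×2 table of score-based predictions against observed pre-treatment HIV RNA; a minimal sketch with invented counts (not the study's actual table):

```python
def diagnostics(tp, fp, fn, tn):
    # Standard 2x2-table metrics for a binary prediction rule:
    # tp/fp/fn/tn = true/false positives and negatives.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for a score cutoff of >25.
m = diagnostics(tp=140, fp=67, fn=160, tn=253)
```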
METHODS: We investigated serum creatinine (S-Cr) monitoring rates before and during ART, and the incidence and prevalence of renal dysfunction after starting TDF, using data from a regional cohort of HIV-infected individuals in the Asia-Pacific region. Time to renal dysfunction was defined as the time from TDF initiation to a decline in estimated glomerular filtration rate (eGFR) to <60 ml/min/1.73 m2 with a >30% reduction from baseline, using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, or to the decision to stop TDF for reported TDF nephrotoxicity. Predictors of S-Cr monitoring rates were assessed by Poisson regression, and risk factors for developing renal dysfunction were assessed by Cox regression.
RESULTS: Among 2,425 patients who received TDF, S-Cr monitoring rates increased from 1.01 to 1.84 per person per year after starting TDF (incidence rate ratio 1.68, 95% CI 1.62-1.74, p < 0.001). Renal dysfunction on TDF occurred in 103 patients over 5,368 person-years of TDF use (4.2%; incidence 1.75 per 100 person-years). Risk factors for developing renal dysfunction included older age (>50 vs. ≤30 years; hazard ratio [HR] 5.39, 95% CI 2.52-11.50, p < 0.001) and use of a PI-based regimen (HR 1.93, 95% CI 1.22-3.07, p = 0.005). An eGFR prior to TDF (pre-TDF eGFR) of ≥60 ml/min/1.73 m2 showed a protective effect (HR 0.38, 95% CI 0.17-0.85, p = 0.018).
CONCLUSIONS: Renal dysfunction on commencing TDF was uncommon; however, older age, lower baseline eGFR and a PI-based ART regimen were associated with a higher risk of renal dysfunction during TDF use in adult HIV-infected individuals in the Asia-Pacific region.
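The CKD-EPI equation used above to estimate GFR has a simple closed form; a minimal sketch of the 2009 creatinine version (the race coefficient of the published equation is omitted for brevity; this is illustrative, not clinical, code):

```python
def ckd_epi_2009(scr_mg_dl, age, female):
    # 2009 CKD-EPI creatinine equation (race coefficient omitted):
    # eGFR = 141 * min(Scr/k, 1)**a * max(Scr/k, 1)**-1.209
    #        * 0.993**age * (1.018 if female), in ml/min/1.73 m2.
    k = 0.7 if female else 0.9      # sex-specific creatinine constant
    a = -0.329 if female else -0.411
    ratio = scr_mg_dl / k
    egfr = (141.0
            * min(ratio, 1.0) ** a
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

# 40-year-old male, serum creatinine 1.0 mg/dL: eGFR well above the
# <60 ml/min/1.73 m2 renal-dysfunction threshold used in the study.
egfr = ckd_epi_2009(1.0, age=40, female=False)
```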
METHODS: Adults living with HIV enrolled in a regional observational cohort in Asia who had initiated combination antiretroviral therapy (cART) were included in the analysis. Factors associated with new TB diagnoses after cohort entry and survival after cART initiation were analysed using Cox regression, stratified by site.
RESULTS: A total of 7355 patients from 12 countries enrolled in the cohort between 2003 and 2016 were included in the study. There were 368 reported cases of TB after cohort entry, an incidence rate of 0.99 per 100 person-years (/100 pys). Multivariate analyses adjusted for viral load (VL), CD4 count, body mass index (BMI) and cART duration showed that cotrimoxazole (CTX) reduced the hazard of new TB infection by 28% (HR 0.72, 95% CI 0.56-0.93). Mortality after cART initiation was 0.85/100 pys, with a median follow-up time of 4.63 years. Predictors of survival included age, female sex, hepatitis C co-infection, TB diagnosis, HIV VL, CD4 count and BMI.
CONCLUSIONS: CTX was associated with a reduction in the hazard for new TB infection but did not impact survival in our Asian cohort. The potential preventive effect of CTX against TB during periods of severe immunosuppression should be further explored.
METHODS: Patients initiating cART between 2006 and 2013 were included. TI was defined as stopping cART for >1 day. Treatment failure was defined as confirmed virological, immunological or clinical failure. Time to treatment failure during cART was analysed using Cox regression, not including periods off treatment. Covariables with P < 0.10 in univariable analyses were included in multivariable analyses, where P < 0.05 was considered statistically significant.
RESULTS: Of 4549 patients from 13 countries in Asia, 3176 (69.8%) were male and the median age was 34 years. A total of 111 (2.4%) had TIs due to AEs and 135 (3.0%) had TIs for other reasons. Median interruption times were 22 days for AE-related and 148 days for non-AE TIs. In multivariable analyses, interruptions >30 days were associated with failure (31-180 days: HR 2.66, 95% CI 1.70-4.16; 181-365 days: HR 6.22, 95% CI 3.26-11.86; >365 days: HR 9.10, 95% CI 4.27-19.38; all P < 0.001, compared with 0-14 days). Reasons for previous TI were not statistically significant (P = 0.158).
CONCLUSIONS: Interruption duration of more than 30 days was the key factor associated with large increases in the subsequent risk of treatment failure. If TI is unavoidable, its duration should be minimised to reduce the risk of failure after treatment resumption.
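Hazard ratios like those reported above are the exponentiated coefficients of the Cox model, and their 95% CIs come from exponentiating β ± 1.96·SE; a minimal sketch (the β and SE values below are hypothetical, chosen to roughly reproduce the 31-180 day estimate):

```python
import math

def hr_with_ci(beta, se, z=1.96):
    # Convert a Cox log-hazard coefficient and its standard error
    # into a hazard ratio with a 95% confidence interval.
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical inputs: beta = ln(2.66) matches the reported point
# estimate for 31-180 day interruptions; the SE is invented.
hr, lo, hi = hr_with_ci(math.log(2.66), se=0.228)
```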