METHODS: Data collected from 2001 to 2016 on perinatally HIV-infected adolescents (PHIVA) 10-19 years of age within a regional Asian cohort were analyzed using competing risk time-to-event and Poisson regression analyses to describe the nature and incidence of morbidity events and hospitalizations and to identify factors associated with disease-related, treatment-related, and overall morbidity. Morbidity was defined according to World Health Organization clinical staging criteria and U.S. National Institutes of Health Division of AIDS criteria.
RESULTS: A total of 3,448 PHIVA contributed 17,778 person-years. Median age was 5.5 years at HIV diagnosis and 6.9 years at ART initiation. There were 2,562 morbidity events and 307 hospitalizations. Cumulative incidence was 51.7% for any morbidity and 10.0% for hospitalization. Early adolescence was dominated by disease-related infectious morbidity, with a trend toward noninfectious and treatment-related morbidity in later adolescence. Higher overall morbidity rates were associated with a CD4 count <350 cells/µL, an HIV viral load ≥10,000 copies/mL, and prior morbidity at age <10 years. Lower overall morbidity rates were found for those 15-19 years of age compared with 10-14 years, and for those who initiated ART at age 5-9 years compared with <5 or ≥10 years.
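For a sense of scale, the crude event rates implied by the counts above can be computed directly; this is plain arithmetic on the abstract's totals, not the study's competing-risk estimates.

```python
# Crude rates implied by the reported totals (illustrative arithmetic only,
# not the competing-risk time-to-event analysis described in the methods).
person_years = 17778
morbidity_events = 2562
hospitalizations = 307

morbidity_rate = morbidity_events / person_years * 100       # per 100 person-years
hospitalization_rate = hospitalizations / person_years * 100
# -> roughly 14.4 morbidity events and 1.7 hospitalizations per 100 person-years
```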
CONCLUSIONS: Half of our PHIVA cohort experienced a morbidity event, with a shift from disease-related infectious events toward treatment-related and noninfectious events as PHIVA age. Timely ART initiation to prevent immune damage, optimize virologic control, and minimize childhood morbidity is key to limiting adolescent morbidity.
METHODS: Data on children with perinatally acquired HIV aged <18 years on first-line, non-nucleoside reverse transcriptase inhibitor (NNRTI)-based cART with viral suppression (two consecutive plasma viral load [pVL] measurements <400 copies/mL over a six-month period) were included from a regional cohort study; those exposed to prior mono- or dual-antiretroviral treatment were excluded. Frequency of pVL monitoring was classified at the site level based on the median rate of pVL measurement: annual, 0.75 to 1.5 tests/patient/year; semi-annual, >1.5 tests/patient/year. Treatment failure was defined as virologic failure (two consecutive pVL >1000 copies/mL), change of antiretroviral drug class, or death. Baseline was the date of the second consecutive pVL <400 copies/mL. Competing risk regression models were used to identify predictors of treatment failure.
RESULTS: During January 2008 to March 2015, there were 1220 eligible children from 10 sites that performed at least annual pVL monitoring; 1042 (85%) and 178 (15%) were from sites performing annual (n = 6) and semi-annual (n = 4) monitoring, respectively. Pre-cART, 675 children (55%) had World Health Organization clinical stage 3 or 4 disease, the median nadir CD4 percentage was 9%, and the median pVL was 5.2 log10 copies/mL. At baseline, the median age was 9.2 years, 64% were on nevirapine-based regimens, the median cART duration was 1.6 years, and the median CD4 percentage was 26%. Over the follow-up period, 258 (25%) children with annual and 40 (23%) with semi-annual pVL monitoring developed treatment failure, corresponding to incidence rates of 5.4 (95% CI: 4.8 to 6.1) and 4.3 (95% CI: 3.1 to 5.8) per 100 patient-years of follow-up, respectively (p = 0.27). In multivariable analysis, the frequency of pVL monitoring was not associated with treatment failure (adjusted hazard ratio: 1.12; 95% CI: 0.80 to 1.59).
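The incidence rates and intervals quoted above can be reproduced approximately with the standard log-normal (Poisson) confidence interval. Note that the person-year denominator below is back-calculated from the reported rate, so it is an assumption rather than source data.

```python
import math

def rate_ci(events, person_years, z=1.96):
    """Event rate per 100 patient-years with a log-normal (Poisson) 95% CI."""
    rate = events / person_years * 100
    half_width = z / math.sqrt(events)   # z times the SE of log(rate)
    return rate, rate * math.exp(-half_width), rate * math.exp(half_width)

# Annual-monitoring group: 258 failures; ~4778 patient-years back-calculated
# from the reported 5.4 per 100 patient-years (an assumption, not source data).
rate, lo, hi = rate_ci(258, 4778)
# -> about 5.4 (4.8 to 6.1) per 100 patient-years, matching the reported figures
```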
CONCLUSIONS: Annual compared to semi-annual pVL monitoring was not associated with an increased risk of treatment failure in our cohort of virally suppressed children with perinatally acquired HIV on first-line NNRTI-based cART.
METHODS: Prospectively collected longitudinal data from patients in Thailand, Hong Kong, Malaysia, Japan, Taiwan, and South Korea were provided for analysis. Covariates included demographics, hepatitis B and C coinfections, baseline CD4 T lymphocyte count, and plasma HIV-1 RNA levels. Clinical deterioration (a new diagnosis of Centers for Disease Control and Prevention category B/AIDS-defining illness or death) was assessed by proportional hazards models. Surrogate endpoints were 12-month change in CD4 cell count and virologic suppression post therapy, evaluated by linear and logistic regression, respectively.
RESULTS: Of 1105 patients, 1036 (93.8%) infected with CRF01_AE or subtype B were eligible for inclusion in clinical deterioration analyses and contributed 1546.7 person-years of follow-up (median: 413 days, interquartile range: 169-672 days). Patients aged >40 years demonstrated smaller immunological increases (P = 0.002) and a higher risk of clinical deterioration (hazard ratio = 2.17; P = 0.008). Patients with baseline CD4 cell counts >200 cells per microliter had a lower risk of clinical deterioration (hazard ratio = 0.373; P = 0.003). A total of 532 patients (48.1% of eligible) had CD4 counts available at baseline and 12 months post therapy for inclusion in immunologic analyses. Patients infected with subtype B had larger increases in CD4 counts at 12 months (P = 0.024). A total of 530 patients (48.0% of eligible) were included in virological analyses, with no differences in response found between genotypes.
CONCLUSIONS: Results suggest that patients infected with CRF01_AE have a reduced immunologic response to therapy at 12 months compared with subtype B-infected counterparts. Clinical deterioration was associated with low baseline CD4 counts and older age. The lack of differences in virologic outcomes suggests that virological suppression is achievable regardless of infecting genotype.
METHODS: HIV-positive patients from the Australian HIV Observational Database (AHOD) and the TREAT Asia HIV Observational Database (TAHOD) meeting specific criteria were included. In these analyses, Asian and Caucasian status was defined by cohort. Factors associated with a low CD4:CD8 ratio (cutoff <0.2) prior to ART commencement, and with achieving a normal CD4:CD8 ratio (>1) at 12 and 24 months after ART commencement, were assessed using logistic regression.
RESULTS: There were 591 patients from AHOD and 2,620 patients from TAHOD who met the inclusion criteria. TAHOD patients had significantly lower odds (P<0.001) of having a baseline (prior to ART initiation) CD4:CD8 ratio greater than 0.2. After 12 months of ART, AHOD patients were more than twice as likely as TAHOD patients to achieve a normal CD4:CD8 ratio (15% versus 6%). However, after adjustment for confounding factors, there was no significant difference between cohorts in the odds of achieving a CD4:CD8 ratio >1 (P=0.475).
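For orientation, the unadjusted odds ratio implied by the 12-month proportions above (15% of AHOD versus 6% of TAHOD patients normalizing) can be computed directly. This uses the reported percentages rather than patient-level counts, so it is illustrative only.

```python
def odds_ratio(p1, p2):
    """Unadjusted odds ratio from two proportions."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# 15% of AHOD vs 6% of TAHOD achieved a CD4:CD8 ratio > 1 at 12 months;
# the adjusted between-cohort comparison was not significant (P = 0.475).
or_unadjusted = odds_ratio(0.15, 0.06)
# -> about 2.8 before covariate adjustment
```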
CONCLUSIONS: We found a significantly lower CD4:CD8 ratio prior to commencing ART in TAHOD than in AHOD, even after adjusting for confounders. However, after adjustment, there was no significant difference between the cohorts in the odds of achieving a normal ratio. Baseline CD4+ and CD8+ counts appear to be the main driver of this difference between the two populations.
METHODS: Ninety-four HIV-infected patients were recruited to the study: a longitudinal cohort recruited just prior to commencing cART and followed up for 48 weeks (n = 27), and a cross-sectional cohort (n = 67) consisting of patients with suboptimal immune recovery (sIR; CD4 T-cell count < 350 cells/μL) or optimal immune recovery (oIR; CD4 T-cell count > 500 cells/μL) after a minimum of 2 years on cART. Controls (n = 29) were HIV-negative individuals. IFN-γ ELISPOT responses against HPV16 and HPV52 E6 were correlated with clinical characteristics, anal and oral HPV carriage, T-cell maturational subsets, markers of activation and senescence, and T-regulatory cells.
RESULTS: HPV16 and HPV52 E6-specific T-cell responses were detected in only one of 27 patients (3.7%) during the initial phase of immune recovery. After at least 2 years of cART, those who achieved oIR had significantly higher E6-specific responses (9 of 34; 26.5%) compared with those with sIR (2 of 32; 6.3%) (P = 0.029). Apart from higher CD4 T-cell counts and lower CD4 T-cell activation, no other immunological correlates were associated with the detection of HPV16 and HPV52 E6-specific responses.
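The abstract does not name the test behind P = 0.029. As a purely hypothetical check, a one-tailed Fisher's exact test on the reported 2x2 table (9/34 oIR versus 2/32 sIR responders) lands in the same range:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """Upper-tail hypergeometric probability P(X >= a) for the 2x2 table
    [[a, b], [c, d]], i.e. a one-sided Fisher's exact test."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# 9 of 34 oIR responders vs 2 of 32 sIR responders (counts from the abstract;
# the choice of test is an assumption, not stated in the source).
p = fisher_one_sided(9, 25, 2, 30)   # close to the reported P = 0.029
```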
CONCLUSIONS: HPV16 and HPV52 E6-specific IFN-γ T-cell responses, a correlate of protective immunity, were detected more frequently among HIV-infected patients who achieved optimal immune recovery on cART (26.5%) compared with those with suboptimal recovery (6.3%).
METHODS: Blips were defined as a detectable VL (≥50 copies/mL) preceded and followed by an undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VL ≥50 copies/mL. Cox proportional hazards models of time to first VF after entry were developed.
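The blip and VF definitions above amount to a scan over each patient's ordered VL series; a minimal sketch (illustrative only, not the study's analysis code):

```python
DETECTABLE = 50  # copies/mL threshold from the definitions above

def classify(vls):
    """Return (blip_indices, vf_index) for an ordered list of VL values.

    Blip: a detectable VL immediately preceded and followed by an
    undetectable one. VF: the second of two consecutive detectable VLs.
    """
    blips, vf_at = [], None
    for i, vl in enumerate(vls):
        if vl < DETECTABLE:
            continue
        if i + 1 < len(vls) and vls[i + 1] >= DETECTABLE:
            vf_at = i + 1   # two consecutive detectable VLs
            break
        if 0 < i < len(vls) - 1 and vls[i - 1] < DETECTABLE and vls[i + 1] < DETECTABLE:
            blips.append(i)
    return blips, vf_at
```

So a series like [20, 120, 30, 40, 80, 200] yields one blip (the 120) and VF at the final measurement.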
RESULTS: 5040 patients (AHOD n = 2597; TAHOD n = 2521) were included; 910 (18%) experienced blips. Blips were ever experienced by 744 (21%) of high-income and 166 (11%) of middle/low-income participants, respectively. 711 (14%) experienced blips prior to virological failure: 559 (16%) of high-income and 152 (10%) of middle/low-income participants, respectively. VL testing occurred at median intervals of 175 days at middle/low-income and 91 days at high-income sites. Time to VF was longer at middle/low-income sites than at high-income sites (adjusted hazard ratio (AHR) 0.41; p<0.001), adjusted for year of first cART, hepatitis C co-infection, cART regimen, and prior blips. Prior blips were not a significant predictor of VF in univariate analysis (AHR 0.97, p = 0.82). Blips of differing magnitudes were also not significant univariate predictors of VF (p = 0.360 for blips of 50 to ≤1000 copies/mL, p = 0.309 for 50 to ≤400, and p = 0.300 for 50 to ≤200). 209 of 866 (24%) patients were switched to an alternate regimen in the setting of a blip.
CONCLUSION: Although a lower proportion of participants in middle/low-income settings experienced blips, prior blips did not significantly predict virological failure. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of a blip.