METHODS: Adults > 18 years of age on second-line ART for ≥ 6 months were eligible. Cross-sectional data on HIV viral load (VL) and genotypic resistance testing were collected, or testing was performed, between July 2015 and May 2017 at 12 Asia-Pacific sites. Virological failure (VF) was defined as VL > 1000 copies/mL with a second VL > 1000 copies/mL within 3-6 months. FASTA files were submitted to the Stanford University HIV Drug Resistance Database, and resistance-associated mutations (RAMs) were compared against the IAS-USA 2019 mutations list. VF risk factors were analysed using logistic regression.
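The VF definition above is a rule over dated VL measurements, and can be sketched in code. This is an illustrative Python sketch, not the study's analysis code; the 3-6 month window is approximated as 90-180 days, which is an assumption.

```python
from datetime import date

def confirmed_vf(measurements, threshold=1000, min_days=90, max_days=180):
    """Return True if any VL > threshold is confirmed by a second
    VL > threshold taken 3-6 months (here 90-180 days) later.

    `measurements` is a list of (date, copies_per_ml) tuples,
    assumed sorted chronologically."""
    highs = [(d, vl) for d, vl in measurements if vl > threshold]
    for i, (d1, _) in enumerate(highs):
        for d2, _ in highs[i + 1:]:
            if min_days <= (d2 - d1).days <= max_days:
                return True
    return False
```

For example, two VLs of 5000 and 2000 copies/mL taken four months apart meet the definition, while a single elevated VL or a confirmatory test outside the window does not.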
RESULTS: Of 1378 patients, 74% were male and 70% acquired HIV through heterosexual exposure. At second-line switch, median [interquartile range (IQR)] age was 37 (32-42) years and median (IQR) CD4 count was 103 (43.5-229.5) cells/µL; 93% received regimens with boosted protease inhibitors (PIs). Median duration on second line was 3 years. Among the 101 patients (7%) with VF, CD4 count > 200 cells/µL at switch [odds ratio (OR) = 0.36, 95% confidence interval (CI): 0.17-0.77 vs. CD4 ≤ 50] and HIV exposure through male-male sex (OR = 0.32, 95% CI: 0.17-0.64 vs. heterosexual) or injecting drug use (OR = 0.24, 95% CI: 0.12-0.49 vs. heterosexual) were associated with reduced VF. Among the 41 (41%) patients with resistance data, 80% had at least one RAM to nonnucleoside reverse transcriptase inhibitors (NNRTIs), 63% to NRTIs and 35% to PIs. Of those with PI RAMs, 71% had two or more.
CONCLUSIONS: The proportions of patients with VF and with significant RAMs in our cohort were low, reflecting the durability of current second-line regimens.
METHODS: Treatment modification was defined as a change of two antiretrovirals, a drug class change or treatment interruption (TI), all for >14 days. We assessed factors associated with CD4 changes and undetectable viral load (UVL <1,000 copies/ml) at 1 year after second-line failure using linear and logistic regression, respectively. Survival time was analysed using competing risk regression.
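The treatment-modification definition above can be made concrete with a small sketch. This is illustrative only: the drug-to-class mapping, example drug names and the classification rules are simplified assumptions, not the study's actual algorithm.

```python
def classify_modification(old, new, gap_days):
    """Classify a regimen change per the definitions above. `old` and
    `new` map drug name -> drug class (illustrative); an empty `new`
    regimen lasting > 14 days is a treatment interruption (TI)."""
    if not new:
        return "TI" if gap_days > 14 else "no modification"
    if set(old.values()) != set(new.values()):
        return "modification (class change)"
    if len(set(old) - set(new)) >= 2:
        return "modification (two drugs changed)"
    return "no modification"
```

A single within-class drug swap is not counted as a modification under this definition, whereas swapping two drugs or changing drug class is.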
RESULTS: Of the 328 patients who failed second-line ART in our cohorts, 208 (63%) had a subsequent treatment modification. Compared with those who continued the failing regimen, the average CD4 cell increase was higher in patients who had a modification without TI (difference = 77.5, 95% CI 35.3, 119.7), while no difference was observed among those with TI (difference = -5.3, 95% CI -67.3, 56.8). Compared with those who continued the failing regimen, the odds of achieving UVL were lower in patients with TI (OR = 0.18, 95% CI 0.06, 0.60) and similar among those who had a modification without TI (OR = 1.97, 95% CI 0.95, 4.10); the proportions achieving UVL were 60%, 22% and 75%, respectively. Survival time was not affected by treatment modifications.
CONCLUSIONS: CD4 cell improvements were observed in those who had treatment modification without TI compared with those on the failing regimen. When no other options are available, maintaining the same failing ART combination provided better VL control than interrupting treatment.
METHODS: The study population consisted of HIV-infected patients enrolled in the TREAT Asia HIV Observational Database (TAHOD). Individuals were included in this analysis if they started combination antiretroviral treatment (cART) after 2002, were treated at a centre that documented a median rate of viral load monitoring ≥0.8 tests/patient/year among TAHOD enrolees, and experienced a minor or major treatment substitution while on virally suppressive cART. The primary endpoint was clinical or virological failure (VF). Clinical failure was defined as death or an AIDS diagnosis. VF was defined as confirmed viral load measurements ≥400 copies/mL followed by an ART class change within six months. Minor regimen substitutions were defined as within-class changes, and major regimen substitutions as changes of drug class. The patterns of substitutions and rates of clinical or VF after substitutions were analyzed.
RESULTS: Of 3994 adults who started ART after 2002, 3119 (78.1%) had at least one period of virological suppression. Among these, 1170 (37.5%) underwent a minor regimen substitution and 296 (9.5%) underwent a major regimen substitution during suppression. The rates of clinical or VF were 1.48/100 person-years (95% CI 1.14 to 1.91) in the minor substitution group, 2.85/100 person-years (95% CI 1.88 to 4.33) in the major substitution group and 2.53/100 person-years (95% CI 2.20 to 2.92) among patients who did not undergo a treatment substitution.
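Rates of this form (events per 100 person-years with a 95% CI) can be computed as below. This sketch uses a standard log-normal approximation for the CI; it is not necessarily the interval method used in the study.

```python
import math

def rate_per_100py(events, person_years, z=1.96):
    """Crude incidence rate per 100 person-years with an approximate
    95% CI computed on the log scale: rate * exp(+/- z / sqrt(events))."""
    rate = 100.0 * events / person_years
    half = z / math.sqrt(events)
    return rate, rate * math.exp(-half), rate * math.exp(half)
```

For example, 50 events over 2000 person-years gives 2.5/100 person-years with an approximate CI of about 1.9 to 3.3.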
CONCLUSIONS: The rate of clinical or VF was low in both major and minor substitution groups, showing that regimen substitution is generally effective in non-clinical trial settings in Asian countries.
METHODS: Blips were defined as a detectable VL (≥50 copies/mL) preceded and followed by an undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VL ≥50 copies/mL. Cox proportional hazards models of time to first VF after cohort entry were developed.
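The blip and VF definitions above operate on a chronological series of VL values, and can be sketched as follows (illustrative Python, not the study's code):

```python
def find_blips_and_vf(vls, detect=50):
    """Return (blip indices, VF flag) for a chronological list of VL
    values. A blip is a detectable VL (>= detect) preceded and followed
    by undetectable VLs; VF is two consecutive detectable VLs."""
    blips = [i for i in range(1, len(vls) - 1)
             if vls[i] >= detect and vls[i - 1] < detect and vls[i + 1] < detect]
    vf = any(vls[i] >= detect and vls[i + 1] >= detect
             for i in range(len(vls) - 1))
    return blips, vf
```

Note that a detectable VL followed by another detectable VL counts towards VF, not as a blip, which is why the two definitions are mutually exclusive for a given measurement.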
RESULTS: 5040 patients (AHOD n = 2597 and TAHOD n = 2521) were included; 910 (18%) experienced blips: 744 (21%) of high-income and 166 (11%) of middle/low-income participants. 711 (14%) experienced blips prior to virological failure: 559 (16%) of high-income and 152 (10%) of middle/low-income participants. VL testing occurred at a median frequency of 175 and 91 days in middle/low- and high-income sites, respectively. Longer time to VF occurred in middle/low-income sites compared with high-income sites [adjusted hazard ratio (AHR) 0.41; p < 0.001], adjusted for year of first cART, hepatitis C co-infection, cART regimen and prior blips. Prior blips were not a significant predictor of VF in univariate analysis (AHR 0.97, p = 0.82). Differing magnitudes of blips were also not significant predictors of virological failure in univariate analyses (p = 0.360 for blips 50 to ≤1000, p = 0.309 for blips 50 to ≤400 and p = 0.300 for blips 50 to ≤200 copies/mL). 209 of 866 (24%) patients were switched to an alternative regimen in the setting of a blip.
CONCLUSION: Despite a lower proportion of blips occurring in middle/low-income settings, blips were not significantly associated with virological failure. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of blips.
OBJECTIVE: To assess the socio-economic determinants of TB in HIV-infected patients in Asia.
DESIGN: This was a matched case-control study. HIV-positive, TB-positive cases were matched to HIV-positive, TB-negative controls according to age, sex and CD4 cell count. A socio-economic questionnaire comprising 23 questions, including education level, employment, housing and substance use, was distributed. Socio-economic risk factors for TB were analysed using conditional logistic regression analysis.
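The case-control matching described above can be sketched as a greedy 1:1 matcher on the three criteria. This is a simplified illustration (exact matching on pre-defined strata); the study's actual matching calipers and procedure are not restated here.

```python
def match_controls(cases, controls):
    """Greedily pair each TB case with an unused TB-free control that
    shares the same age band, sex and CD4 stratum. Each subject is a
    dict with 'id', 'age_band', 'sex' and 'cd4_stratum' keys."""
    key = lambda s: (s["age_band"], s["sex"], s["cd4_stratum"])
    pool = {}
    for c in controls:
        pool.setdefault(key(c), []).append(c)
    pairs = []
    for case in cases:
        bucket = pool.get(key(case), [])
        if bucket:
            pairs.append((case["id"], bucket.pop()["id"]))
    return pairs
```

Matched pairs produced this way are then analysed with conditional logistic regression, which conditions on the pair to account for the matching.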
RESULTS: A total of 340 patients (170 matched pairs) were recruited, with 262 (77.1%) matched for all three criteria. Pulmonary TB was the predominant type (n = 115, 67.6%). The main risk factor for TB was not having a university level education (OR 4.45, 95%CI 1.50-13.17, P = 0.007). Burning wood or coal regularly inside the house and living in the same place of origin were weakly associated with TB diagnosis.
CONCLUSIONS: These data suggest that lower socio-economic status is associated with an increased risk of TB in Asia. Integrating clinical and socio-economic factors into HIV treatment may help in the prevention of opportunistic infections and disease progression.
METHODS: Logistic regression analysis was used to distinguish associated current smoking characteristics. Five-year predictive risks of CVD, CHD and MI and the impact of simulated interventions were calculated utilizing the Data Collection on Adverse Effects of Anti-HIV Drugs Study (D:A:D) algorithm.
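A "simulated intervention" of the kind described above recomputes a model-based risk after setting one risk factor to its counterfactual value. The sketch below is only in the spirit of the D:A:D equation: the logistic form is simplified and the coefficients are invented for illustration, NOT the published D:A:D coefficients.

```python
import math

def five_year_risk(age, smoker, on_abacavir, *, b0=-6.0, b_age=0.06,
                   b_smoke=0.7, b_abc=0.4):
    """Illustrative logistic 5-year risk; all coefficients are
    hypothetical placeholders, not published D:A:D values."""
    x = b0 + b_age * age + b_smoke * smoker + b_abc * on_abacavir
    return 1.0 / (1.0 + math.exp(-x))

def risk_reduction_from_cessation(age, on_abacavir):
    """Simulated smoking-cessation intervention: risk with smoking
    minus risk with smoking switched off, all else held fixed."""
    return five_year_risk(age, 1, on_abacavir) - five_year_risk(age, 0, on_abacavir)
```

Switching abacavir to an alternative agent would be simulated analogously, by setting `on_abacavir` to 0 and comparing risks.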
RESULTS: Smoking status data were collected from 4274 participants and 1496 of these had sufficient data for simulated intervention calculations. Current smoking prevalence in these two groups was similar (23.2% vs. 19.9%, respectively). Characteristics associated with current smoking included age > 50 years compared with 30-39 years [odds ratio (OR) 0.65; 95% confidence interval (CI) 0.51-0.83], HIV exposure through injecting drug use compared with heterosexual exposure (OR 3.03; 95% CI 2.25-4.07), and receiving antiretroviral therapy (ART) at study sites in Singapore, South Korea, Malaysia, Japan and Vietnam in comparison to Thailand (all OR > 2). Women were less likely to smoke than men (OR 0.11; 95% CI 0.08-0.14). In simulated interventions, smoking cessation demonstrated the greatest impact in reducing CVD and CHD risk and closely approximated the impact of switching from abacavir to an alternate antiretroviral in the reduction of 5-year MI risk.
CONCLUSIONS: Multiple interventions could reduce CVD, CHD and MI risk in Asian HIV-positive patients, with smoking cessation potentially being the most influential.
METHODS: Long-term LTFU was defined as LTFU occurring after 5 years on ART. LTFU was defined as (1) patients not seen in the previous 12 months; and (2) patients not seen in the previous 6 months. Factors associated with LTFU were analysed using competing risk regression.
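Both LTFU definitions above are "not seen in the previous N months" rules applied at a censoring date, which can be sketched directly. Approximating a month as 30.44 days is an assumption of this sketch, not the study's rule.

```python
from datetime import date

def is_ltfu(last_visit, censor_date, window_months):
    """True if the patient was not seen within `window_months` months
    (approximated as 30.44 days each) before `censor_date`."""
    return (censor_date - last_visit).days > window_months * 30.44
```

The same patient can be LTFU under the 6-month definition but not the 12-month one, which is why the two definitions yield different incidence rates.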
RESULTS: Under the 12-month definition, the LTFU rate was 2.0 per 100 person-years (PY) [95% confidence interval (CI) 1.8-2.2] among the 4889 patients included in the study. LTFU was associated with age > 50 years [sub-hazard ratio (SHR) 1.64; 95% CI 1.17-2.31] compared with 31-40 years, viral load ≥ 1000 copies/mL (SHR 1.86; 95% CI 1.16-2.97) compared with viral load < 1000 copies/mL, and hepatitis C coinfection (SHR 1.48; 95% CI 1.06-2.05). LTFU was less likely to occur in females, in individuals with higher CD4 counts, in those with self-reported adherence ≥ 95%, and in those living in high-income countries. The 6-month LTFU definition produced an incidence rate of 3.2 per 100 PY (95% CI 2.9-3.4) and had similar associations, but with greater risks of LTFU for ART initiation in later years (2006-2009: SHR 2.38; 95% CI 1.93-2.94; and 2010-2011: SHR 4.26; 95% CI 3.17-5.73) compared with 2003-2005.
CONCLUSIONS: The long-term LTFU rate in our cohort was low, with older age being associated with LTFU. The increased risk of LTFU with later years of ART initiation in the 6-month analysis, but not the 12-month analysis, implies a possible move towards longer HIV clinic scheduling intervals in Asia.
METHODS: In a regional HIV observational cohort in the Asia-Pacific region, patients with viral suppression (2 consecutive viral loads <400 copies/mL) and a CD4 count ≥200 cells per microliter who had 6-monthly CD4 testing were analyzed. The main study end points were occurrence of 1 CD4 count <200 cells per microliter (single CD4 <200) and 2 CD4 counts <200 cells per microliter within a 6-month period (confirmed CD4 <200). Time to single and confirmed CD4 <200 under biannual versus annual CD4 assessment was compared by generating a hypothetical group comprising the same patients with annual CD4 testing, created by removing every second CD4 count.
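The biannual-versus-annual comparison can be sketched as follows. This is illustrative Python; with 6-monthly testing the "within a 6-month period" confirmation reduces to two consecutive low counts, a simplification this sketch relies on.

```python
def confirmed_low_cd4(counts, threshold=200):
    """Confirmed event: two consecutive CD4 counts below threshold
    (equivalent to 'within 6 months' under 6-monthly testing)."""
    return any(a < threshold and b < threshold
               for a, b in zip(counts, counts[1:]))

def annualised(counts):
    """Hypothetical annual-testing series: keep every second count."""
    return counts[::2]
```

Applying both functions to the same series shows how thinning the testing schedule can change whether an event is detected as confirmed.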
RESULTS: Among 1538 patients, the rate of single CD4 <200 was 3.45/100 patient-years and of confirmed CD4 <200 was 0.77/100 patient-years. During 5 years of viral suppression, patients with baseline CD4 200-249 cells per microliter were significantly more likely to experience confirmed CD4 <200 than patients with higher baseline CD4 [hazard ratio 55.47 (95% confidence interval: 7.36 to 418.20), P < 0.001 versus baseline CD4 ≥500 cells/μL]. Cumulative probabilities of confirmed CD4 <200 were also higher in patients with baseline CD4 200-249 cells per microliter compared with patients with higher baseline CD4. There was no significant difference in time to confirmed CD4 <200 between biannual and annual CD4 measurement (P = 0.336).
CONCLUSIONS: Annual CD4 monitoring in virally suppressed HIV patients with a baseline CD4 ≥250 cells per microliter may be sufficient for clinical management.
METHODS: This cross-sectional study recruited adult PWH during routine follow-up at five HIV clinical sites in the Asia-Pacific region. Participants were screened for depression using the Patient Health Questionnaire-9 (PHQ-9) and for SU using the Alcohol, Smoking, and Substance Involvement Screening Test (ASSIST). Quality of life (QoL) was assessed with the WHOQOL-HIV BREF and functional ability with the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0). Factors associated with mean QoL and disability scores were analysed using linear regression.
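PHQ-9 screening sums nine items each scored 0-3, giving a 0-27 total with standard severity bands. The sketch below shows that scoring; the study's specific screening cut-off is not restated here, so the bands are the instrument's standard ones rather than study-specific choices.

```python
def phq9_score(items):
    """Sum nine PHQ-9 items (each 0-3) and map the total to the
    standard severity bands (cut points 5/10/15/20)."""
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    total = sum(items)
    bands = [(20, "severe"), (15, "moderately severe"),
             (10, "moderate"), (5, "mild"), (0, "minimal")]
    severity = next(label for cut, label in bands if total >= cut)
    return total, severity
```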
RESULTS: Of 864 PWH enrolled, 753 screened positive for depression or SU. The median (interquartile range, IQR) age was 38 (31-47) years and 97% were on ART. Overall mean WHOQOL-HIV BREF and WHODAS scores indicated greater impairment with increasing depressive symptom severity and SU risk. In multivariate analysis, PWH reporting previous trauma/stress (difference = 2.7, 95% confidence interval [CI] 1.5-3.9, P
METHODS: HIV-infected adults enrolled in the TREAT Asia HIV Observational Database were eligible if they had an HIV RNA measurement documented at the time of ART initiation. The dataset was randomly split into a derivation data set (75% of patients) and a validation data set (25%). Factors associated with pre-treatment HIV RNA <100,000 copies/mL were evaluated by logistic regression adjusted for study site. A prediction model and prediction scores were created.
RESULTS: A total of 2592 patients were included in the analysis. Median [interquartile range (IQR)] age was 35.8 (29.9-42.5) years, CD4 count was 147 (50-248) cells/mm3 and pre-treatment HIV RNA was 100,000 (34,045-301,075) copies/mL. Factors associated with pre-treatment HIV RNA <100,000 copies/mL were age <30 years [OR 1.40 vs. 41-50 years; 95% confidence interval (CI) 1.10-1.80, p = 0.01], body mass index >30 kg/m2 (OR 2.4 vs. <18.5 kg/m2; 95% CI 1.1-5.1, p = 0.02), anemia (OR 1.70; 95% CI 1.40-2.10), CD4 count >350 cells/mm3 (OR 3.9 vs. <100 cells/mm3; 95% CI 2.0-4.1) and total lymphocyte count >2000 cells/mm3 (OR 1.7 vs. <1000 cells/mm3; 95% CI 1.3-2.3). A prediction score >25 yielded a sensitivity of 46.7%, specificity of 79.1%, positive predictive value of 67.7% and negative predictive value of 61.2% for predicting pre-treatment HIV RNA <100,000 copies/mL among derivation patients.
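The four metrics reported for the score cut-off all come from a 2x2 table of predicted versus observed status, computed as below (a generic sketch; the cell counts in the example are made up, not the study's data).

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table
    (tp = true positives, fp = false positives, etc.)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```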
CONCLUSION: The prediction model for pre-treatment HIV RNA <100,000 copies/mL produced an area under the ROC curve of 0.70. A larger sample size for prediction model development, as well as for model validation, is warranted.