METHODS: Patients initiating cART between 2006 and 2013 were included. TI was defined as stopping cART for >1 day. Treatment failure was defined as confirmed virological, immunological or clinical failure. Time to treatment failure during cART was analysed using Cox regression, excluding periods off treatment. Covariables with P < 0.10 in univariable analyses were included in multivariable analyses, where P < 0.05 was considered statistically significant.
RESULTS: Of 4549 patients from 13 countries in Asia, 3176 (69.8%) were male and the median age was 34 years. A total of 111 (2.4%) had TIs due to AEs and 135 (3.0%) had TIs for other reasons. Median interruption times were 22 days for AE and 148 days for non-AE TIs. In multivariable analyses, interruptions >30 days were associated with failure (31-180 days: HR = 2.66, 95% CI 1.70-4.16; 181-365 days: HR = 6.22, 95% CI 3.26-11.86; >365 days: HR = 9.10, 95% CI 4.27-19.38; all P < 0.001, compared to 0-14 days). Reasons for previous TI were not statistically significant (P = 0.158).
CONCLUSIONS: An interruption duration of more than 30 days was the key factor associated with large increases in the subsequent risk of treatment failure. If TI is unavoidable, its duration should be minimised to reduce the risk of failure after treatment resumption.
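As a sanity check on reported estimates of this kind, note that Wald confidence intervals for hazard ratios are symmetric on the log scale, so the point estimate should be approximately the geometric mean of the CI bounds. A minimal sketch (purely illustrative; the `hr_from_ci` helper is hypothetical and not part of the study's analysis code):

```python
import math

def hr_from_ci(lo, hi):
    """Recover the approximate HR point estimate from a Wald 95% CI.

    Wald CIs for hazard ratios are symmetric on the log scale, so the
    point estimate is roughly the geometric mean of the two bounds."""
    return math.exp((math.log(lo) + math.log(hi)) / 2)

# The three duration categories reported in the multivariable analysis above
for hr, lo, hi in [(2.66, 1.70, 4.16), (6.22, 3.26, 11.86), (9.10, 4.27, 19.38)]:
    assert abs(hr_from_ci(lo, hi) - hr) < 0.05  # consistent to rounding
```

All three reported intervals pass this check, which is what one would expect from standard Cox regression output.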
OBJECTIVE: To assess the socio-economic determinants of TB in HIV-infected patients in Asia.
DESIGN: This was a matched case-control study. HIV-positive, TB-positive cases were matched to HIV-positive, TB-negative controls according to age, sex and CD4 cell count. A socio-economic questionnaire comprising 23 questions, including education level, employment, housing and substance use, was distributed. Socio-economic risk factors for TB were analysed using conditional logistic regression analysis.
RESULTS: A total of 340 patients (170 matched pairs) were recruited, with 262 (77.1%) matched for all three criteria. Pulmonary TB was the predominant type (n = 115, 67.6%). The main risk factor for TB was not having a university level education (OR 4.45, 95%CI 1.50-13.17, P = 0.007). Burning wood or coal regularly inside the house and living in the same place of origin were weakly associated with TB diagnosis.
CONCLUSIONS: These data suggest that lower socio-economic status is associated with an increased risk of TB in Asia. Integrating clinical and socio-economic factors into HIV treatment may help in the prevention of opportunistic infections and disease progression.
METHODS: Blips were defined as a detectable VL (≥50 copies/mL) preceded and followed by an undetectable VL (<50 copies/mL). Virological failure (VF) was defined as two consecutive VL ≥50 copies/mL. Cox proportional hazards models of time to first VF after entry were developed.
RESULTS: 5040 patients (AHOD n = 2597 and TAHOD n = 2521) were included; 910 (18%) experienced blips. Blips were ever experienced by 744 (21%) and 166 (11%) of high- and middle/low-income participants, respectively. 711 (14%) experienced blips prior to virological failure: 559 (16%) of high-income and 152 (10%) of middle/low-income participants. VL testing occurred at a median frequency of 175 and 91 days in middle/low- and high-income sites, respectively. Longer time to VF occurred in middle/low-income sites compared with high-income sites (adjusted hazard ratio (AHR) 0.41; p<0.001), adjusted for year of first cART, hepatitis C co-infection, cART regimen, and prior blips. Prior blips were not a significant predictor of VF in univariate analysis (AHR 0.97, p = 0.82). Differing magnitudes of blips were also not significant univariate predictors of VF (p = 0.360 for blips 50-≤1000, p = 0.309 for blips 50-≤400 and p = 0.300 for blips 50-≤200 copies/mL). 209 of 866 (24%) patients were switched to an alternate regimen in the setting of a blip.
CONCLUSION: Although blips occurred in a lower proportion of participants in middle/low-income settings, they did not significantly predict virological failure in either setting. Nonetheless, a substantial number of participants were switched to alternative regimens in the setting of blips.
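The blip and VF definitions in the methods above can be made concrete as a small classifier over a sequence of viral-load measurements. This is an illustrative sketch only (the `classify` function is hypothetical, not the cohort's analysis code), and it uses the ≥50 copies/mL threshold stated above:

```python
def classify(vls, threshold=50):
    """Classify a chronological sequence of viral loads (copies/mL).

    Blip: one detectable VL (>= threshold) preceded AND followed by an
    undetectable VL. Virological failure (VF): two consecutive
    detectable VLs. Returns (blip_indices, first_vf_index_or_None)."""
    blips, vf_index = [], None
    for i, vl in enumerate(vls):
        detectable = vl >= threshold
        # VF: this measurement and the next are both detectable
        if detectable and i + 1 < len(vls) and vls[i + 1] >= threshold:
            vf_index = i
            break
        # Blip: detectable, sandwiched between undetectable measurements
        if (detectable and i > 0 and vls[i - 1] < threshold
                and i + 1 < len(vls) and vls[i + 1] < threshold):
            blips.append(i)
    return blips, vf_index

# A blip at index 2, no failure
print(classify([20, 20, 120, 20, 20]))   # → ([2], None)
# Two consecutive detectable VLs: VF starting at index 1
print(classify([20, 300, 400, 20]))      # → ([], 1)
```

The distinction matters clinically: per the results above, roughly a quarter of patients were switched to an alternate regimen in the setting of a blip even though blips did not predict failure.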
Methods: Sixteen LMIC sites included in the International Epidemiology Databases to Evaluate AIDS - Asia-Pacific network were surveyed.
Results: Sites were mostly (81%) based in urban public referral hospitals. Half had protocols to assess tobacco and alcohol use. Protocols for assessing physical inactivity and obesity were in place at 31% and 38% of sites, respectively. Most sites provided educational material on ASCVD risk factors (between 56% and 75%, depending on the risk factor). A total of 94% reported performing routine screening for hypertension, 100% for hyperlipidaemia and 88% for diabetes. Routine ASCVD risk assessment was reported by 94% of sites. Protocols for the management of hypertension, hyperlipidaemia, diabetes, high ASCVD risk and chronic ischaemic stroke were in place at 50%, 69%, 56%, 19% and 38% of sites, respectively. Blood pressure monitoring was free for patients at 69% of sites; however, most required patients to pay some or all of the costs for other ASCVD-related procedures. Medications available in the clinic or within the same facility included angiotensin-converting enzyme inhibitors (81%), statins (94%) and sulphonylureas (94%).
Conclusion: The consistent availability of clinical screening, diagnostic testing and procedures and the availability of ASCVD medications in the Asian LMIC clinics surveyed are strengths that should be leveraged to improve the implementation of cardiovascular care protocols.
METHODS: The study population consisted of HIV-infected patients enrolled in the TREAT Asia HIV Observational Database (TAHOD). Individuals were included in this analysis if they started combination antiretroviral treatment (cART) after 2002, were being treated at a centre that documented a median rate of viral load monitoring ≥0.8 tests/patient/year among TAHOD enrolees, and experienced a minor or major treatment substitution while on virally suppressive cART. The primary endpoint to evaluate outcomes was clinical or virological failure (VF), followed by an ART class change. Clinical failure was defined as death or an AIDS diagnosis. VF was defined as confirmed viral load measurements ≥400 copies/mL followed by an ART class change within six months. Minor regimen substitutions were defined as within-class changes, and major regimen substitutions as changes to a different drug class. The patterns of substitutions and rates of clinical or VF after substitutions were analyzed.
RESULTS: Of 3994 adults who started ART after 2002, 3119 (78.1%) had at least one period of virological suppression. Among these, 1170 (37.5%) underwent a minor regimen substitution, and 296 (9.5%) underwent a major regimen substitution during suppression. The rates of clinical or VF were 1.48/100 person-years (95% CI 1.14 to 1.91) in the minor substitution group, 2.85/100 person-years (95% CI 1.88 to 4.33) in the major substitution group and 2.53/100 person-years (95% CI 2.20 to 2.92) among patients who did not undergo a treatment substitution.
CONCLUSIONS: The rate of clinical or VF was low in both major and minor substitution groups, showing that regimen substitution is generally effective in non-clinical trial settings in Asian countries.
METHODS: Prospectively collected longitudinal data from patients in Thailand, Hong Kong, Malaysia, Japan, Taiwan, and South Korea were provided for analysis. Covariates included demographics, hepatitis B and C coinfections, baseline CD4 T lymphocyte count, and plasma HIV-1 RNA levels. Clinical deterioration (a new diagnosis of Centers for Disease Control and Prevention category B/AIDS-defining illness or death) was assessed by proportional hazards models. Surrogate endpoints were 12-month change in CD4 cell count and virologic suppression post therapy, evaluated by linear and logistic regression, respectively.
RESULTS: Of 1105 patients, 1036 (93.8%) infected with CRF01_AE or subtype B were eligible for inclusion in clinical deterioration analyses and contributed 1546.7 person-years of follow-up (median: 413 days, interquartile range: 169-672 days). Patients aged >40 years demonstrated smaller immunological increases (P = 0.002) and higher risk of clinical deterioration (hazard ratio = 2.17; P = 0.008). Patients with baseline CD4 cell counts >200 cells per microliter had lower risk of clinical deterioration (hazard ratio = 0.373; P = 0.003). A total of 532 patients (48.1% of eligible) had CD4 counts available at baseline and 12 months post therapy for inclusion in immunologic analyses. Patients infected with subtype B had larger increases in CD4 counts at 12 months (P = 0.024). A total of 530 patients (48.0% of eligible) were included in virological analyses, with no differences in response found between genotypes.
CONCLUSIONS: Results suggest that patients infected with CRF01_AE have reduced immunologic response to therapy at 12 months, compared with subtype B-infected counterparts. Clinical deterioration was associated with low baseline CD4 counts and older age. The lack of differences in virologic outcomes suggests that patients of both genotypes have similar opportunities for virological suppression.
METHODS: In a regional HIV observational cohort in the Asia-Pacific region, patients with viral suppression (2 consecutive viral loads <400 copies/mL) and a CD4 count ≥200 cells per microliter who had CD4 testing 6 monthly were analyzed. Main study end points were occurrence of 1 CD4 count <200 cells per microliter (single CD4 <200) and 2 CD4 counts <200 cells per microliter within a 6-month period (confirmed CD4 <200). Time to single and confirmed CD4 <200 under biannual versus annual CD4 assessment was compared by generating a hypothetical group comprising the same patients under annual CD4 testing, created by removing every second CD4 count.
RESULTS: Among 1538 patients, the rate of single CD4 <200 was 3.45/100 patient-years and of confirmed CD4 <200 was 0.77/100 patient-years. During 5 years of viral suppression, patients with baseline CD4 200-249 cells per microliter were significantly more likely to experience confirmed CD4 <200 compared with patients with higher baseline CD4 [hazard ratio, 55.47 (95% confidence interval: 7.36 to 418.20), P < 0.001 versus baseline CD4 ≥500 cells/μL]. Cumulative probabilities of confirmed CD4 <200 were also higher in patients with baseline CD4 200-249 cells per microliter compared with patients with higher baseline CD4. There was no significant difference in time to confirmed CD4 <200 between biannual and annual CD4 measurement (P = 0.336).
CONCLUSIONS: Annual CD4 monitoring in virally suppressed HIV patients with a baseline CD4 ≥250 cells per microliter may be sufficient for clinical management.
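The hypothetical-group construction described in the methods above (annual testing simulated by dropping every second 6-monthly measurement) and the confirmed-CD4<200 end point can be sketched in a few lines. These helper functions are hypothetical illustrations, not the study's analysis code:

```python
def annual_series(biannual):
    """Simulate annual CD4 testing from 6-monthly data by keeping
    every second measurement, as in the hypothetical group above."""
    return biannual[::2]

def first_confirmed_low(counts, threshold=200):
    """Index of the first of two consecutive counts below threshold,
    or None if no confirmed low occurs. With 6-monthly testing,
    consecutive counts fall within the study's 6-month confirmation
    window; under simulated annual testing the window is necessarily
    wider, a simplification of the published comparison."""
    for i in range(len(counts) - 1):
        if counts[i] < threshold and counts[i + 1] < threshold:
            return i
    return None

biannual = [500, 480, 300, 250, 190, 180]
print(annual_series(biannual))                 # → [500, 300, 190]
print(first_confirmed_low(biannual))           # → 4 (confirmed under 6-monthly testing)
print(first_confirmed_low(annual_series(biannual)))  # → None (missed under annual testing)
```

The sketch makes the trade-off visible: thinning the series can delay or miss detection of a confirmed low count, which is why the study's finding of no significant difference in time to confirmed CD4 <200 (P = 0.336) supports annual monitoring only in the higher-baseline-CD4 group.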
METHODS: Data linkages with the national death registry or national HIV database were conducted in 2020 on all PLHIV who met LTFU criteria while enrolled in care at participating HIV clinical sites. LTFU was defined as having no documented clinical contact in the previous year, excluding transfers and deaths. Survival time was analyzed using Cox regression, stratified by site.
RESULTS: Data linkages were performed for 489 PLHIV who had been LTFU at sites in Malaysia (n = 2) and Thailand (n = 4). There were 151 (31%) deaths after being LTFU; the mortality rate was 4.89 per 100 person-years. Risk factors for mortality after being LTFU were older age [41-50 years: hazard ratio (HR) = 1.99, 95% confidence interval (CI): 1.08 to 3.68; and older than 50 years: HR = 4.93, 95% CI: 2.63 to 9.22; vs. age 30 years or younger]; receiving NRTI + PI (HR = 1.87, 95% CI: 1.22 to 2.85 vs. NRTI + NNRTI); positive hepatitis C antibody (HR = 2.25, 95% CI: 1.40 to 3.62); and having previous AIDS illness (HR = 1.45, 95% CI: 1.03 to 2.05). An improved survival was seen with a higher CD4 count (CD4 351-500 cells/µL: HR = 0.40, 95% CI: 0.21 to 0.76; and CD4 >500 cells/µL: HR = 0.43, 95% CI: 0.25 to 0.75; vs. CD4 ≤200 cells/µL).
CONCLUSIONS: Almost one-third of PLHIV who were LTFU in this cohort had died while out of care, emphasizing the importance of efforts to reengage PLHIV after they have been LTFU and ensure they have access to ongoing ART.
METHODS: We did a cohort analysis of TB cases in SECOND-LINE. TB cases included any clinical or laboratory-confirmed diagnoses and/or commencement of treatment for TB after randomization. Baseline factors associated with TB were analyzed using Cox regression stratified by site.
RESULTS: TB cases occurred at sites in Argentina, India, Malaysia, Nigeria, South Africa, and Thailand; the analysis cohort comprised 355 of the 541 SECOND-LINE participants. Overall, 20 cases of TB occurred, an incidence rate of 3.4 per 100 person-years (95% CI: 2.1 to 5.1). Increased TB risk was associated with a low CD4+ cell count (≤200 cells/μL), high viral load (>200 copies/mL), low platelet count (<150 × 10⁹/L), and low total serum cholesterol (≤4.5 mmol/L) at baseline. An increased risk of death was associated with TB, adjusted for CD4 count, platelets, and cholesterol. A low CD4+ cell count was significantly associated with incident TB, mortality, other AIDS diagnoses, and virologic failure.
DISCUSSION: The risk of TB remains elevated in PLHIV in the setting of second-line HIV therapy in TB-endemic regions. TB was associated with a greater risk of death. The finding that a low CD4+ T-cell count was significantly associated with poor outcomes in this population supports the value of CD4+ monitoring in HIV clinical management.
METHODS: HIV-positive patients enrolled in the TREAT Asia HIV Observational Database who had used second-line ART for ≥6 months were included. ART use and rates and predictors of second-line treatment failure were evaluated.
RESULTS: There were 302 eligible patients. Most were male (76.5%) and exposed to HIV via heterosexual contact (71.5%). Median age at second-line initiation was 39.2 years, median CD4 cell count was 146 cells per cubic millimeter, and median HIV viral load was 16,224 copies per milliliter. Patients started second-line ART before 2007 (n = 105), in 2007-2010 (n = 147) and after 2010 (n = 50). Ritonavir-boosted lopinavir and atazanavir accounted for the majority of protease inhibitor use after 2006. Median follow-up time on second-line therapy was 2.3 years. The rates of treatment failure and mortality per 100 patient-years were 8.8 (95% confidence interval: 7.1 to 10.9) and 1.1 (95% confidence interval: 0.6 to 1.9), respectively. Older age, high baseline viral load, and use of a protease inhibitor other than lopinavir or atazanavir were associated with a significantly shorter time to second-line failure.
CONCLUSIONS: Increased access to viral load monitoring to facilitate early detection of first-line ART failure and subsequent treatment switch is important for maximizing the durability of second-line therapy in Asia. Although second-line ART is highly effective in the region, the reported rate of failure emphasizes the need for third-line ART in a small proportion of patients.