METHODS: We investigated serum creatinine (S-Cr) monitoring rates before and during ART and the incidence and prevalence of renal dysfunction after starting TDF by using data from a regional cohort of HIV-infected individuals in the Asia-Pacific. Time to renal dysfunction was defined as time from TDF initiation to the decline in estimated glomerular filtration rate (eGFR) to <60 ml/min/1.73m2 with >30% reduction from baseline using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation or the decision to stop TDF for reported TDF-nephrotoxicity. Predictors of S-Cr monitoring rates were assessed by Poisson regression and risk factors for developing renal dysfunction were assessed by Cox regression.
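As an illustration of the end point definition, the sketch below computes eGFR with the 2009 CKD-EPI creatinine equation and flags the renal dysfunction event (eGFR <60 ml/min/1.73m2 with >30% decline from the pre-TDF baseline). It is not the study's code; the function names, creatinine unit (mg/dL), and the single-measurement check are illustrative assumptions.

```python
# Illustrative sketch (not the study's code): 2009 CKD-EPI creatinine equation
# and the renal dysfunction end point described in the abstract.

def ckd_epi_egfr(scr_mg_dl, age_years, female, black=False):
    """Estimated GFR (ml/min/1.73 m^2) from serum creatinine in mg/dL."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def renal_dysfunction(egfr_baseline, egfr_current):
    """Event definition: eGFR <60 with >30% decline from the pre-TDF baseline."""
    return egfr_current < 60 and egfr_current < 0.7 * egfr_baseline

# Example: baseline eGFR 95, on-TDF creatinine 1.6 mg/dL in a 55-year-old man
print(renal_dysfunction(95, ckd_epi_egfr(1.6, 55, female=False)))  # True
```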
RESULTS: Among 2,425 patients who received TDF, S-Cr monitoring rates increased from 1.01 to 1.84 per person per year after starting TDF (incidence rate ratio 1.68, 95%CI 1.62-1.74, p <0.001). Renal dysfunction on TDF occurred in 103 patients over 5,368 person-years of TDF use (4.2%; incidence 1.75 per 100 person-years). Risk factors for developing renal dysfunction included older age (>50 vs. ≤30 years, hazard ratio [HR] 5.39, 95%CI 2.52-11.50, p <0.001) and use of a PI-based regimen (HR 1.93, 95%CI 1.22-3.07, p = 0.005). Having an eGFR prior to TDF (pre-TDF eGFR) of ≥60 ml/min/1.73m2 showed a protective effect (HR 0.38, 95%CI 0.17-0.85, p = 0.018).
CONCLUSIONS: Renal dysfunction after commencing TDF was uncommon; however, older age, lower baseline eGFR, and PI-based ART were associated with a higher risk of renal dysfunction during TDF use in adult HIV-infected individuals in the Asia-Pacific region.
METHODS: To create a retrospective cohort of all adults with HIV released from jails and prisons in Connecticut, USA (2007-14), we linked administrative custody and pharmacy databases with mandatory HIV/AIDS surveillance monitoring and case management data. We examined time to LTC (defined as the first viral load measurement after release) and viral suppression at LTC. We used generalised estimating equations to identify predictors of LTC within 14 days and 30 days of release.
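A minimal sketch of this analytic approach (not the study's code) is shown below: a logistic GEE with an exchangeable working correlation to account for repeated incarceration periods within individuals. The data file and column names are hypothetical.

```python
# Hedged sketch: logistic GEE for linkage to care within 14 days of release,
# clustering repeated incarceration periods within the same person.
# The file and column names (ltc_14d, person_id, etc.) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("incarceration_periods.csv")  # hypothetical input, one row per period

model = smf.gee(
    "ltc_14d ~ C(duration_cat) + case_management + art_during_incarceration "
    "+ comorbidities_2plus + reincarcerated + conditional_release",
    groups="person_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
# Odds ratios with 95% CIs
print(np.exp(result.params))
print(np.exp(result.conf_int()))
```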
FINDINGS: Among 3302 incarceration periods for 1350 individuals between 2007 and 2014, 672 (21%) of 3181 periods had LTC within 14 days of release, 1042 (34%) of 3064 had LTC within 30 days of release, and 301 (29%) of 1042 had detectable viral loads at LTC. Factors positively associated with LTC within 14 days of release were intermediate (31-364 days) incarceration duration (adjusted odds ratio 1·52; 95% CI 1·19-1·95), transitional case management (1·65; 1·36-1·99), receipt of antiretroviral therapy during incarceration (1·39; 1·11-1·74), and two or more medical comorbidities (1·86; 1·48-2·36). Reincarceration (0·70; 0·56-0·88) and conditional release (0·62; 0·50-0·78) were negatively associated with LTC within 14 days. Hispanic ethnicity, bonded release, and psychiatric comorbidity were also associated with LTC within 30 days, but reincarceration was not.
INTERPRETATION: LTC after release is suboptimal but improves when inmates' medical, psychiatric, and case management needs are identified and addressed before release. People who are rapidly cycling through jail facilities are particularly vulnerable to missed linkage opportunities. The use of integrated programmes to align justice and health-care goals has great potential to improve long-term HIV treatment outcomes.
FUNDING: US National Institutes of Health.
METHODS: Nevirapine population pharmacokinetics were modelled with Pmetrics. A total of 708 observations from 112 patients were included in the model building and validation analysis. Model evaluation was based on visual inspection of observed versus predicted (population and individual) concentrations and of plots of weighted residuals versus concentrations. Accuracy and robustness of the model were evaluated by visual predictive check (VPC). The median parameter estimates obtained from the final model were used to predict individual nevirapine plasma area under the curve (AUC) in the validation dataset. A Bland-Altman plot was used to compare the model-predicted AUC with the trapezoidal AUC.
RESULTS: The median nevirapine clearance was 2.92 L/h, the median absorption rate constant was 2.55/h, and the volume of distribution was 78.23 L. Nevirapine pharmacokinetics were best described by a one-compartment model with first-order absorption and a lag time. Weighted residuals for the selected model were homogeneously distributed over the concentration and time range. The developed model adequately estimated AUC.
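Using the reported median estimates (CL 2.92 L/h, ka 2.55/h, V 78.23 L), a minimal simulation of the structural model with a trapezoidal AUC is sketched below. The dose, dosing interval, lag time, and complete bioavailability are illustrative assumptions, not study values.

```python
# Hedged sketch: one-compartment model with first-order absorption and lag time,
# simulated with the reported median parameters. Dose, lag time and F = 1 are
# illustrative assumptions, not values from the study.
import numpy as np

CL, KA, V = 2.92, 2.55, 78.23    # L/h, 1/h, L (median estimates from the model)
KE = CL / V                      # elimination rate constant (1/h)
DOSE, F, TLAG = 200.0, 1.0, 0.5  # mg, bioavailability, h (assumed)

def conc(t):
    """Plasma concentration (mg/L) after a single oral dose."""
    t = np.asarray(t, dtype=float)
    ts = np.clip(t - TLAG, 0.0, None)  # shift by the lag time
    return np.where(
        t > TLAG,
        (F * DOSE * KA) / (V * (KA - KE)) * (np.exp(-KE * ts) - np.exp(-KA * ts)),
        0.0,
    )

times = np.linspace(0, 12, 121)          # 0-12 h window
auc_0_12 = np.trapz(conc(times), times)  # trapezoidal AUC, mg*h/L
print(f"AUC0-12 ~ {auc_0_12:.1f} mg*h/L")
```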
CONCLUSIONS: A population pharmacokinetic model for nevirapine was developed that adequately describes its pharmacokinetics in HIV-infected patients in Malaysia.
DESIGN: We analyzed data from a community-recruited prospective cohort in Vancouver, Canada (n = 623), from 2014 to 2017.
METHODS: We used multivariable generalized mixed-effects analyses to estimate longitudinal factors associated with mean material security score. We then estimated the association between achieving at least 95% adherence to ART and overall mean material security score, as well as the mean score for each of three factors derived from a factor analysis: factor 1 (basic needs), factor 2 (housing-related variables), and factor 3 (economic resources).
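A simplified sketch of the longitudinal step is shown below, using a linear mixed-effects model with a random intercept per participant as a stand-in for the generalized mixed-effects analyses described; the file and column names are hypothetical.

```python
# Hedged sketch: linear mixed-effects model for material security score with a
# random intercept per participant across follow-up visits. This is a simplified
# stand-in for the study's generalized mixed-effects analyses; column names are
# hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort_visits.csv")  # hypothetical long-format visit data

model = smf.mixedlm(
    "material_score ~ recent_incarceration + unmet_health_needs "
    "+ unmet_social_service_needs + social_service_access",
    data=df,
    groups=df["participant_id"],
)
result = model.fit()
print(result.summary())  # beta coefficients with 95% CIs
```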
RESULTS: Recent incarceration (β-coefficient [β] = -0.176, 95% confidence interval [95% CI]: -0.288 to -0.063), unmet health needs (β = -0.110, 95% CI: -0.178 to -0.042), unmet social service needs (β = -0.264, 95% CI: -0.336 to -0.193), and having access to social services (β = -0.102, 95% CI: -0.1586 to -0.0465) were among the factors associated with lower material security scores. Contrary to expectations that low levels of material security in this population would lead to poor ART adherence, we did not observe a significant relationship between adherence and the overall material security score or any of the three factor scores.
CONCLUSION: Our findings highlight the potentially important role of no-cost, universal access to HIV prevention and treatment in mitigating the impact of socioeconomic disadvantage on ART adherence.
Methods: Study end points were as follows: (1) a CD4 count <200 cells/mm3 followed by a CD4 count ≥200 cells/mm3 (transient CD4 <200); (2) CD4 count <200 cells/mm3 confirmed within 6 months (confirmed CD4 <200); and (3) a new or recurrent World Health Organization (WHO) stage 3 or 4 illness (clinical failure). Kaplan-Meier curves and Cox regression were used to evaluate rates and predictors of transient CD4 <200, confirmed CD4 <200, and clinical failure among virally suppressed children aged 5-15 years who were enrolled in the TREAT Asia Pediatric HIV Observational Database.
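A minimal sketch of the time-to-event analysis using the lifelines library is shown below; it illustrates the general approach (Kaplan-Meier estimation and Cox regression), not the study's code, and the file and column names are hypothetical.

```python
# Hedged sketch: Kaplan-Meier curve and Cox regression for time to a confirmed
# CD4 <200 event among virally suppressed children. Column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("pediatric_cohort.csv")  # hypothetical analysis dataset

# Kaplan-Meier estimate of remaining free of confirmed CD4 <200
kmf = KaplanMeierFitter()
kmf.fit(df["years_from_suppression"], event_observed=df["confirmed_cd4_lt200"])
print(kmf.survival_function_.tail())

# Cox regression for predictors of the event
cph = CoxPHFitter()
cph.fit(
    df[["years_from_suppression", "confirmed_cd4_lt200",
        "cd4_at_suppression", "age", "female"]],
    duration_col="years_from_suppression",
    event_col="confirmed_cd4_lt200",
)
cph.print_summary()  # hazard ratios with 95% CIs
```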
Results: Data from 967 children were included in the analysis. At the time of confirmed viral suppression, median age was 10.2 years, 50.4% of children were female, and 95.4% were perinatally infected with HIV. Median CD4 cell count was 837 cells/mm3, and 54.8% of children were classified as having WHO stage 3 or 4 disease. In total, 18 transient CD4 <200 events, 2 confirmed CD4 <200 events, and 10 clinical failures occurred, at rates of 0.73 (95% confidence interval [95% CI], 0.46-1.16), 0.08 (95% CI, 0.02-0.32), and 0.40 (95% CI, 0.22-0.75) events per 100 patient-years, respectively. CD4 <500 cells/mm3 at the time of viral suppression confirmation was associated with higher rates of both CD4 outcomes.
Conclusions: Regular CD4 testing may be unnecessary for virally suppressed children aged 5-15 years with CD4 ≥500 cells/mm3.
METHODS: We adapted a dynamic model of HIV transmission among MSM/TW in Lima to incorporate stimulant use and its associated increases in HIV risk and in suicide and CVD mortality. For the 6% to 24% of MSM/TW who use stimulants (mostly cocaine), we modelled an increased risk of unprotected anal sex (RR = 1.35 [95%CI: 1.17 to 1.57]), obtained from local data, and increased risks of suicide (SMR = 6.26 [95%CI: 2.84 to 13.80]) and CVD (SMR = 1.83 [95%CI: 0.39 to 8.57]) mortality associated with cocaine use, based on a global systematic review. We estimated the proportion of health harms occurring among MSM/TW who use stimulants over the next year (January 2020 to January 2021). We also investigated the 10-year impact (January 2020 to January 2030) of: (1) PrEP prioritization for stimulant-using MSM/TW compared with random allocation, and (2) integrating PrEP with a theoretical intervention halving stimulant-associated risk.
RESULTS: MSM/TW in Lima will experience high HIV incidence, suicide mortality, and CVD mortality (1.6, 0.018, and 0.13 per 100 person-years, respectively) in 2020. Despite stimulant-using MSM/TW comprising an estimated 9.5% (95%CI: 7.8 to 11.5) of all MSM/TW, in the next year 11% (95%CI [i.e. 2.5th to 97.5th percentiles]: 10% to 13%) of new HIV infections, 39% (95%CI: 18% to 60%) of suicides, and 15% (95%CI: 3% to 44%) of CVD deaths could occur among this group. Scaling up PrEP among all stimulant-using MSM/TW could prevent 19% (95%CI: 11% to 31%) more HIV infections over 10 years compared with random allocation. Integrating PrEP with an intervention to halve stimulant-associated risks could reduce new HIV infections by 20% (95%CI: 10% to 37%), suicide deaths by 14% (95%CI: 5% to 27%), and CVD deaths by 3% (95%CI: 0% to 16%) over a decade.
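The suicide and CVD shares can be roughly reproduced outside the dynamic model with a simple static excess-risk calculation, sketched below using the point estimates quoted above (the HIV share depends on transmission dynamics and cannot be reproduced this way).

```python
# Hedged sketch: approximate share of deaths occurring among stimulant-using
# MSM/TW under a static excess-risk calculation, using the point estimates
# quoted in the abstract (prevalence 9.5%, suicide SMR 6.26, CVD SMR 1.83).
def share_among_exposed(prevalence, relative_risk):
    """Fraction of events expected in the exposed group."""
    exposed = prevalence * relative_risk
    unexposed = 1 - prevalence
    return exposed / (exposed + unexposed)

p = 0.095  # proportion of MSM/TW who use stimulants
print(f"Suicide: {share_among_exposed(p, 6.26):.0%}")  # ~40%, close to the modelled 39%
print(f"CVD:     {share_among_exposed(p, 1.83):.0%}")  # ~16%, close to the modelled 15%
```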
CONCLUSIONS: MSM/TW who use stimulants experience a disproportionate burden of health harms. Prioritizing PrEP based on stimulant use, in addition to sexual behaviour/gender identity criteria, could increase its impact. Integrated substance use, harm reduction, mental health, and HIV care for MSM/TW is needed.
METHODS: In a regional HIV observational cohort in the Asia-Pacific region, patients with viral suppression (2 consecutive viral loads <400 copies/mL) and a CD4 count ≥200 cells per microliter who had 6-monthly CD4 testing were analyzed. The main study end points were the occurrence of 1 CD4 count <200 cells per microliter (single CD4 <200) and of 2 CD4 counts <200 cells per microliter within a 6-month period (confirmed CD4 <200). Time to single and confirmed CD4 <200 under biannual versus annual CD4 assessment was compared by generating a hypothetical group comprising the same patients under annual CD4 testing, created by removing every second CD4 count.
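The construction of the hypothetical annual-testing group can be sketched as below: within each patient, CD4 measurements are ordered chronologically and every second measurement is dropped, halving the six-monthly testing frequency. The file and column names are hypothetical.

```python
# Hedged sketch: derive the hypothetical annual-testing group by removing every
# second CD4 measurement per patient. Column names are hypothetical.
import pandas as pd

cd4 = pd.read_csv("cd4_measurements.csv")  # one row per CD4 test
cd4 = cd4.sort_values(["patient_id", "test_date"])

# Keep the 1st, 3rd, 5th, ... measurement within each patient
keep = cd4.groupby("patient_id").cumcount() % 2 == 0
cd4_annual = cd4[keep].copy()

# Flag a single CD4 <200 in the thinned (annual) dataset
cd4_annual["single_lt200"] = cd4_annual["cd4_count"] < 200
```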
RESULTS: Among 1538 patients, the rate of single CD4 <200 was 3.45/100 patient-years and of confirmed CD4 <200 was 0.77/100 patient-years. During 5 years of viral suppression, patients with baseline CD4 200-249 cells per microliter were significantly more likely to experience confirmed CD4 <200 compared with patients with higher baseline CD4 [hazard ratio, 55.47 (95% confidence interval: 7.36 to 418.20), P < 0.001 versus baseline CD4 ≥500 cells/μL]. Cumulative probabilities of confirmed CD4 <200 were also higher in patients with baseline CD4 200-249 cells per microliter compared with patients with higher baseline CD4. There was no significant difference in time to confirmed CD4 <200 between biannual and annual CD4 measurement (P = 0.336).
CONCLUSIONS: Annual CD4 monitoring in virally suppressed HIV patients with a baseline CD4 ≥250 cells per microliter may be sufficient for clinical management.
METHODS: Data on children with perinatally acquired HIV aged <18 years on first-line, non-nucleoside reverse transcriptase inhibitor-based cART with viral suppression (two consecutive plasma viral loads [pVL] <400 copies/mL over a six-month period) were included from a regional cohort study; those exposed to prior mono- or dual antiretroviral treatment were excluded. Frequency of pVL monitoring was determined at the site level based on the median rate of pVL measurement: annual, 0.75 to 1.5 tests/patient/year; semi-annual, >1.5 tests/patient/year. Treatment failure was defined as virologic failure (two consecutive pVL >1000 copies/mL), change of antiretroviral drug class, or death. Baseline was the date of the second consecutive pVL <400 copies/mL. Competing risk regression models were used to identify predictors of treatment failure.
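The site-level monitoring frequency classification and the virologic failure flag can be operationalized roughly as below; the file and column names are hypothetical, and the competing-risk regression itself is not reproduced here.

```python
# Hedged sketch: classify sites by their median pVL testing rate and flag
# virologic failure (two consecutive pVL >1000 copies/mL). Column names are
# hypothetical.
import pandas as pd

pvl = pd.read_csv("pvl_tests.csv").sort_values(["patient_id", "test_date"])

# Site-level monitoring frequency from the median per-patient testing rate
rates = pvl.groupby(["site_id", "patient_id"])["tests_per_year"].first()
site_freq = rates.groupby(level="site_id").median().apply(
    lambda r: "semi-annual" if r > 1.5 else "annual"  # annual: 0.75-1.5 tests/patient/year
)

# Virologic failure: two consecutive pVL >1000 copies/mL within a patient
high = pvl["viral_load"] > 1000
prev_high = high.groupby(pvl["patient_id"]).shift(fill_value=False)
pvl["virologic_failure"] = high & prev_high
```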
RESULTS: Between January 2008 and March 2015, there were 1220 eligible children from 10 sites that performed at least annual pVL monitoring; 1042 (85%) were from sites performing annual pVL monitoring (n = 6) and 178 (15%) from sites performing semi-annual pVL monitoring (n = 4). Pre-cART, 675 children (55%) had World Health Organization clinical stage 3 or 4 disease, the median nadir CD4 percentage was 9%, and the median pVL was 5.2 log10 copies/mL. At baseline, the median age was 9.2 years, 64% were on nevirapine-based regimens, the median cART duration was 1.6 years, and the median CD4 percentage was 26%. Over the follow-up period, 258 (25%) children with annual and 40 (23%) with semi-annual pVL monitoring developed treatment failure, corresponding to incidence rates of 5.4 (95% CI: 4.8 to 6.1) and 4.3 (95% CI: 3.1 to 5.8) per 100 patient-years of follow-up, respectively (p = 0.27). In multivariable analyses, the frequency of pVL monitoring was not associated with treatment failure (adjusted hazard ratio: 1.12; 95% CI: 0.80 to 1.59).
CONCLUSIONS: Annual compared to semi-annual pVL monitoring was not associated with an increased risk of treatment failure in our cohort of virally suppressed children with perinatally acquired HIV on first-line NNRTI-based cART.
METHODS: CLHIV aged <18 years, who were on first-line cART for ≥12 months and had virological suppression (two consecutive plasma viral loads [pVL] <50 copies/mL), were included. Those who started treatment with mono/dual antiretroviral therapy, had a history of treatment interruption >14 days, or received treatment and care at sites with a pVL lower limit of detection >50 copies/mL were excluded. LLV was defined as a pVL of 50 to 1000 copies/mL, and VF as a single pVL >1000 copies/mL. Baseline was the time of the second pVL <50 copies/mL.
METHODS: The HIV-CAUSAL Collaboration consisted of 12 cohorts from the United States and Europe of HIV-positive, ART-naive, AIDS-free individuals aged ≥18 years with baseline CD4 cell count and HIV RNA levels followed up from 1996 through 2007. We estimated hazard ratios (HRs) for cART versus no cART, adjusted for time-varying CD4 cell count and HIV RNA level via inverse probability weighting.
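A heavily simplified sketch of the weighting step is shown below. In the study, weights are based on time-updated CD4 and HIV RNA; this sketch instead uses baseline covariates only and hypothetical column names, illustrating stabilized inverse probability of treatment weights feeding a weighted Cox model rather than reproducing the marginal structural model itself.

```python
# Hedged, simplified sketch: stabilized inverse probability of treatment weights
# (baseline covariates only; the study used time-varying covariates and weights)
# followed by a weighted Cox model. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("pooled_cohorts.csv")  # hypothetical pooled dataset

# Probability of starting cART given baseline CD4 and HIV RNA
ps = smf.logit("cart ~ baseline_cd4 + baseline_log_rna + age + sex", data=df).fit()
p_treat = ps.predict(df)
p_marginal = df["cart"].mean()

# Stabilized weights
df["sw"] = (df["cart"] * (p_marginal / p_treat)
            + (1 - df["cart"]) * ((1 - p_marginal) / (1 - p_treat)))

# Weighted Cox model for time to tuberculosis
cph = CoxPHFitter()
cph.fit(df[["time_to_tb", "tb_event", "cart", "sw"]],
        duration_col="time_to_tb", event_col="tb_event",
        weights_col="sw", robust=True)
cph.print_summary()  # HR for cART versus no cART
```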
RESULTS: Of 65 121 individuals, 712 developed tuberculosis over 28 months of median follow-up (incidence, 3.0 cases per 1000 person-years). The HR for tuberculosis for cART versus no cART was 0.56 (95% confidence interval [CI], 0.44-0.72) overall, 1.04 (95% CI, 0.64-1.68) for individuals aged >50 years, and 1.46 (95% CI, 0.70-3.04) for people with a CD4 cell count of <50 cells/μL. Compared with people who had not started cART, HRs differed by time since cART initiation: 1.36 (95% CI, 0.98-1.89) for initiation <3 months ago and 0.44 (95% CI, 0.34-0.58) for initiation ≥3 months ago. Compared with people who had not initiated cART, HRs <3 months after cART initiation were 0.67 (95% CI, 0.38-1.18), 1.51 (95% CI, 0.98-2.31), and 3.20 (95% CI, 1.34-7.60) for people <35, 35-50, and >50 years old, respectively, and 2.30 (95% CI, 1.03-5.14) for people with a CD4 cell count of <50 cells/μL.
CONCLUSIONS: Tuberculosis incidence decreased after cART initiation, but not among people >50 years old or those with CD4 cell counts of <50 cells/μL. Despite the overall decrease in tuberculosis incidence, the increased rate during the first 3 months after cART initiation suggests unmasking immune reconstitution inflammatory syndrome (IRIS).