METHODS AND RESULTS: The 'COlchicine for the Prevention of Perioperative Atrial Fibrillation' (COP-AF) trial is an international, blinded, randomized trial that compares colchicine to placebo in patients aged 55 years or older undergoing major noncardiac thoracic surgery with general anesthesia. Exclusion criteria include a history of atrial fibrillation (AF) and a contraindication to colchicine (eg, severe renal dysfunction). Oral colchicine at a dose of 0.5 mg or matching placebo is given within 4 hours before surgery. Thereafter, patients receive colchicine 0.5 mg or placebo twice daily for a total of 10 days. The 2 independent co-primary outcomes are clinically important perioperative AF (including atrial flutter) and myocardial injury after noncardiac surgery (MINS) during 14 days of follow-up. The main safety outcomes are sepsis or infection and non-infectious diarrhea. We aim to enroll 3,200 patients from approximately 40 sites across 11 countries to have at least 80% power for the independent evaluation of the 2 co-primary outcomes. The COP-AF main results are expected in 2023.
CONCLUSIONS: COP-AF is a large, blinded, randomized trial designed to determine whether colchicine reduces the risk of perioperative AF or MINS in patients undergoing major noncardiac thoracic surgery.
OBJECTIVE: This study addressed 2 multiclass relation extraction tasks from the Biomedical Natural Language Processing Workshop 2019 Open Shared Tasks: the Bacteria-Biotope relation extraction (BB-rel) task and the plant seed development binary relation extraction (SeeDev-binary) task. In essence, both tasks aim to extract the relations between annotated entity pairs in biomedical texts, which is a challenging problem.
METHODS: Traditional approaches to these tasks adopted feature- or kernel-based methods and achieved good performance. We instead propose a deep learning model that combines several distributed features, namely domain-specific word embedding, part-of-speech embedding, entity-type embedding, distance embedding, and position embedding. A multi-head attention mechanism is used to extract the global semantic features of the entire sentence. In addition, we introduce a dependency-type feature and the shortest dependency path connecting the 2 candidate entities in the syntactic dependency graph to enrich the feature representation.
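As a rough illustration of the feature-concatenation and multi-head attention steps described above, the following NumPy sketch runs self-attention over tokens whose representations are concatenations of several embedding streams. All dimensions, vocabulary sizes, and the random embedding tables are hypothetical toy values (the abstract does not give the model's hyperparameters), and the dependency-path feature is omitted for brevity; this is a minimal sketch of the mechanism, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes (illustrative only; not from the paper).
SEQ_LEN, NUM_HEADS = 6, 4
DIMS = {"word": 16, "pos_tag": 4, "ent_type": 4, "distance": 4, "position": 4}
D_MODEL = sum(DIMS.values())  # 32, divisible by NUM_HEADS

# Random tables stand in for learned embedding matrices (vocab size 20 each).
tables = {name: rng.standard_normal((20, d)) * 0.1 for name, d in DIMS.items()}

def token_features(token_ids):
    """Concatenate the distributed features (word, POS, entity type, ...) per token."""
    return np.concatenate([tables[n][ids] for n, ids in token_ids.items()], axis=-1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads=NUM_HEADS):
    """Scaled dot-product self-attention computed in num_heads subspaces."""
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    split = lambda M: M.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    Q, K, V = split(X @ Wq), split(X @ Wk), split(X @ Wv)
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_head))   # (heads, seq, seq)
    out = (attn @ V).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo, attn

# Toy sentence: one integer id per token for each feature stream.
token_ids = {name: rng.integers(0, 20, SEQ_LEN) for name in DIMS}
X = token_features(token_ids)                       # (SEQ_LEN, D_MODEL)

Wq, Wk, Wv, Wo = (rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(4))
H, attn = multi_head_attention(X, Wq, Wk, Wv, Wo)

# Mean-pool the contextualized tokens and score NUM_CLASSES relation types.
NUM_CLASSES = 3
W_cls = rng.standard_normal((D_MODEL, NUM_CLASSES)) * 0.1
probs = softmax(H.mean(axis=0) @ W_cls)
print(H.shape, attn.shape)
```

Each attention head attends over the full sentence in its own subspace, which is what lets the model capture global semantic features from different positions at once; the pooled output feeds a multiclass relation classifier.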
RESULTS: Experiments show that our proposed model performs well on biomedical relation extraction, achieving F1 scores of 65.56% and 38.04% on the test sets of the BB-rel and SeeDev-binary tasks, respectively. On the SeeDev-binary task in particular, our model's F1 score surpasses those of existing models, achieving state-of-the-art performance.
CONCLUSIONS: We demonstrated that the multi-head attention mechanism can learn relevant syntactic and semantic features in different representation subspaces and at different positions, yielding a comprehensive feature representation. Moreover, syntactic dependency features can improve model performance by capturing the dependency relations between entities in biomedical texts.