METHODS: ARCHERY is a non-randomised prospective study to evaluate the quality and economic impact of AI-based automated radiotherapy treatment planning for cervical, head and neck, and prostate cancers, which are endemic in LMICs, and for which radiotherapy is the primary curative treatment modality. The sample size of 990 patients (330 for each cancer type) has been calculated based on an estimated 95% treatment plan acceptability rate. Time and cost savings will be analysed as secondary outcome measures using the time-driven activity-based costing model. The 48-month study will take place in six public sector cancer hospitals in India (n=2), Jordan (n=1), Malaysia (n=1) and South Africa (n=2) to support implementation of the software in LMICs.
ETHICS AND DISSEMINATION: The study has received ethical approval from University College London (UCL) and each of the six study sites. If the study objectives are met, the AI-based software will be offered as a not-for-profit web service to public sector state hospitals in LMICs to support expansion of high quality radiotherapy capacity, improving access to and affordability of this key modality of cancer cure and control. Public and policy engagement plans will involve patients as key partners.
PATIENTS AND METHODS: A total of 7476 patients with routine health check-up data who underwent prostate biopsies from January 2008 to December 2021 in eight referral centres in Asia were screened. After data pre-processing and cleaning, 5037 patients and 117 features were analysed. Seven AI-based algorithms were tested for feature selection and seven for classification, with the best combination applied for model construction. The APCA score was established in the CH cohort and validated in a multi-centre cohort and in each individual validation cohort to evaluate its generalisability across different Asian regions. Model performance was evaluated using the area under the receiver operating characteristic (ROC) curve, calibration plots, and decision curve analyses.
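The feature-selection-plus-classification workflow described above can be sketched as follows. This is a hypothetical illustration on synthetic data: the actual APCA features, the seven candidate algorithms, and the cohorts are not reproduced here, and the chosen selector/classifier pair is only one plausible combination.

```python
# Hypothetical sketch of a feature-selection + classification pipeline,
# scored by AUC on a held-out set; synthetic stand-in for the 117 features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic data standing in for 117 candidate features, 18 informative.
X, y = make_classification(n_samples=1000, n_features=117,
                           n_informative=18, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

# One candidate combination: univariate selection of 18 features,
# then a classifier, evaluated by AUC on the validation split.
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=18)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
auc = roc_auc_score(y_val, pipe.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.2f}")
```

In the study, each of the seven selectors would be crossed with each of the seven classifiers and the best-performing combination retained.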
RESULTS: The APCA score for predicting HGPCa comprised 18 features, some of which had not previously been used in prostate cancer diagnosis. The area under the curve (AUC) was 0.76 (95% CI: 0.74-0.78) in the multi-centre validation cohort, and the increment in AUC (APCA vs. PSA) was 0.16 (95% CI: 0.13-0.20). The calibration plots showed a high degree of coherence, and the decision curve analysis showed a higher net clinical benefit. Applying the APCA score could reduce unnecessary biopsies by 20.2% and 38.4% in the multi-centre validation cohort, at the risk of missing 5.0% and 10.0% of HGPCa cases, respectively.
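The trade-off reported above (biopsies avoided versus cancers missed) comes from choosing a score threshold. A minimal sketch of that arithmetic, on entirely hypothetical score distributions rather than the study's data:

```python
# Illustrative threshold arithmetic (hypothetical data, not the study's):
# pick a score cut-off that misses at most a fixed share of HGPCa cases,
# then count how many biopsies would be avoided among benign cases.
import numpy as np

rng = np.random.default_rng(0)
scores_cancer = rng.normal(0.7, 0.15, 200)   # hypothetical HGPCa scores
scores_benign = rng.normal(0.4, 0.15, 800)   # hypothetical benign scores

for miss_rate in (0.05, 0.10):
    # Threshold below which `miss_rate` of cancers would score.
    thr = np.quantile(scores_cancer, miss_rate)
    # Fraction of benign cases scoring below the threshold: biopsies avoided.
    avoided = np.mean(scores_benign < thr)
    print(f"missing {miss_rate:.0%} of HGPCa -> "
          f"{avoided:.1%} of benign biopsies avoided (threshold {thr:.2f})")
```

Accepting a higher miss rate lowers the threshold's quantile, which generally avoids more biopsies, mirroring the paired 20.2%/5.0% and 38.4%/10.0% figures.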
CONCLUSIONS: The APCA score based on routine health check-ups could reduce unnecessary prostate biopsies without additional examinations in Asian populations. Further prospective population-based studies are warranted to confirm these results.
METHODS: Eighteen students with prior experience in traditional PDPBL processes participated in the study. They were divided into three groups to perform PDPBL sessions with various triggers from the pharmaceutical chemistry, pharmaceutics, and clinical pharmacy fields, while utilizing the ChatGPT chatbot to assist with data searching and problem-solving. Questionnaires were used to collect data on the impact of ChatGPT on students' satisfaction, engagement, participation, and learning experience during the PBL sessions.
RESULTS: The survey revealed that ChatGPT improved group collaboration and engagement during PDPBL, while increasing motivation and encouraging more questions. Nevertheless, some students encountered difficulties understanding ChatGPT's information and questioned its reliability and credibility. Despite these challenges, most students saw ChatGPT's potential to eventually replace traditional information-seeking methods.
CONCLUSIONS: The study suggests that ChatGPT has the potential to enhance PDPBL in pharmacy education. However, further research is needed to examine the validity and reliability of the information provided by ChatGPT, and its impact on a larger sample size.
METHODS: We investigated the existing body of evidence and applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to search records in the IEEE, Google Scholar, and PubMed databases. We identified 65 papers published from 2013 to 2022, covering 67 different studies. The review process was structured according to the medical data used for disease detection. We identified six main categories, namely air flow, genetic, imaging, signals, medical records, and miscellaneous. For each of these categories, we report both the disease detection methods and their performance.
RESULTS: We found that medical imaging was used as data for automated obstructive airway disease detection in 14 of the reviewed studies. Genetics and physiological signals were used in 13 studies. Medical records and air flow were used in 9 and 7 studies, respectively. Most papers were published in 2020, and we found three times as much work on Machine Learning (ML) as on Deep Learning (DL). Statistical analysis shows that DL techniques achieve higher Accuracy (ACC) than ML. The Convolutional Neural Network (CNN) is the most common DL classifier and the Support Vector Machine (SVM) is the most widely used ML classifier. During our review, we discovered only two publicly available asthma and COPD datasets; most studies used private clinical datasets, so data size and data composition are inconsistent.
CONCLUSIONS: Our review results indicate that Artificial Intelligence (AI) can improve both the decision quality and the efficiency of health professionals during COPD and asthma diagnosis. However, we found several limitations in this review, such as inconsistent datasets, limited dataset sizes, and insufficient exploration of remote monitoring. We appeal to society to accept and trust computer-aided diagnosis of obstructive airway diseases, and we encourage health professionals to work closely with AI scientists to promote automated detection in clinical practice and hospital settings.
OBJECTIVE: In this study, we investigated whether and how artificial intelligence chatbots facilitate the expression of user emotions, specifically sadness and depression. We also examined cultural differences in the expression of depressive moods among users in Western and Eastern countries.
METHODS: This study used SimSimi, a global open-domain social chatbot, to analyze 152,783 conversation utterances containing the terms "depress" and "sad" in 3 Western countries (Canada, the United Kingdom, and the United States) and 5 Eastern countries (Indonesia, India, Malaysia, the Philippines, and Thailand). Study 1 reports new findings on the cultural differences in how people talk about depression and sadness to chatbots based on Linguistic Inquiry and Word Count and n-gram analyses. In study 2, we classified chat conversations into predefined topics using semisupervised classification techniques to better understand the types of depressive moods prevalent in chats. We then identified the distinguishing features of chat-based depressive discourse data and the disparity between Eastern and Western users.
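The n-gram analysis mentioned above can be sketched minimally. This is a toy illustration on invented utterances, assuming nothing about the actual SimSimi data, the LIWC categories, or the semisupervised classifier used in the study:

```python
# Toy sketch of word n-gram counting over chat utterances, the kind of
# analysis used to compare how two groups talk about sadness/depression.
from collections import Counter

# Hypothetical utterances (not from the study's dataset).
group_a = ["i feel so sad today", "i am depressed about school"]
group_b = ["feeling sad and tired", "so depressed about work"]

def ngram_counts(texts, n=2):
    """Count word n-grams across a list of utterances."""
    counts = Counter()
    for text in texts:
        words = text.split()
        counts.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return counts

# The most frequent bigrams in each group hint at distinctive phrasings.
print(ngram_counts(group_a).most_common(3))
print(ngram_counts(group_b).most_common(3))
```

In practice such counts would be normalised by corpus size and compared statistically before claiming a cross-cultural difference.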
RESULTS: Our data revealed intriguing cultural differences. Chatbot users in Eastern countries expressed stronger emotions about depression than users in Western countries.
METHODS: Frontal view intraoral photographs fulfilling the selection criteria were collected. Along the gingival margin, the gingival condition of individual sites was labelled as healthy, diseased, or questionable. Photographs were randomly assigned to training or validation datasets. The training datasets were input into a novel artificial intelligence system, and its accuracy in detecting gingivitis, including sensitivity, specificity, and mean intersection-over-union, was analysed using the validation dataset. Accuracy was reported according to the STARD-2015 statement.
RESULTS: A total of 567 intraoral photographs were collected and labelled, of which 80% were used for training and 20% for validation. The training datasets contained a total of 113,745,208 pixels, of which 9,270,413, 5,711,027, and 4,596,612 pixels were labelled as healthy, diseased, and questionable, respectively. The validation datasets contained 28,319,607 pixels, of which 1,732,031, 1,866,104, and 1,116,493 pixels were labelled as healthy, diseased, and questionable, respectively. The AI correctly predicted 1,114,623 healthy and 1,183,718 diseased pixels, with a sensitivity of 0.92 and a specificity of 0.94. The mean intersection-over-union of the system was 0.60, above the commonly accepted threshold of 0.50.
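The pixel-level metrics reported above have standard definitions. The sketch below shows those formulae on hypothetical confusion-matrix counts, not the study's figures:

```python
# Pixel-level metric definitions, illustrated with hypothetical counts
# (tp/fp/fn/tn are NOT the study's numbers) for the "diseased" class.
tp, fp, fn, tn = 900, 60, 100, 940   # hypothetical pixel counts

sensitivity = tp / (tp + fn)         # diseased pixels correctly flagged
specificity = tn / (tn + fp)         # healthy pixels correctly cleared
iou = tp / (tp + fp + fn)            # intersection-over-union for "diseased"

print(f"sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}, IoU={iou:.2f}")
```

Note that IoU penalises both false positives and false negatives in one ratio, which is why a system can report high sensitivity and specificity while its mean IoU sits lower, as in the 0.60 figure above.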
CONCLUSIONS: Artificial intelligence could identify specific sites with and without gingival inflammation, with high sensitivity and high specificity on par with visual examination by human dentists. The system may be used to monitor the effectiveness of patients' plaque control.