METHODS: AI-based chatbots (i.e., ChatGPT-3.5, ChatGPT-4, Microsoft Bing AI, and Google Bard) were compared for their ability to detect clinically relevant DDIs for 255 drug pairs. Performance metrics, including specificity, sensitivity, accuracy, negative predictive value (NPV), and positive predictive value (PPV), were calculated for each tool.
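For illustration, the sketch below (not part of the study's analysis) shows how these metrics are conventionally derived from confusion-matrix counts, treating a chatbot's verdicts as predictions and the reference DDI tool as ground truth; the counts used are hypothetical.

```python
# Minimal sketch, assuming hypothetical counts: deriving the performance
# metrics named above (sensitivity, specificity, accuracy, PPV, NPV) from a
# 2x2 confusion matrix of chatbot verdicts vs. a reference DDI source.
def performance_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return sensitivity, specificity, accuracy, PPV, and NPV."""
    return {
        "sensitivity": tp / (tp + fn),               # detected among reference-positive pairs
        "specificity": tn / (tn + fp),               # cleared among reference-negative pairs
        "accuracy": (tp + tn) / (tp + fp + tn + fn), # overall agreement with the reference
        "ppv": tp / (tp + fp),                       # positive predictive value
        "npv": tn / (tn + fn),                       # negative predictive value
    }

# Hypothetical counts for one chatbot against one reference tool (illustration only).
print(performance_metrics(tp=120, fp=30, tn=70, fn=35))
```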
RESULTS: When a subscription tool was used as the reference, specificity ranged from a low of 0.372 (ChatGPT-3.5) to a high of 0.769 (Microsoft Bing AI). Microsoft Bing AI also achieved the highest accuracy (0.788), whereas ChatGPT-3.5 had the lowest (0.469). All programs performed better when the reference was switched to a free DDI source, but ChatGPT-3.5 still had the lowest specificity (0.392) and accuracy (0.525), while Microsoft Bing AI again demonstrated the highest specificity (0.892) and accuracy (0.890). When the consistency of accuracy was assessed across two different drug classes, ChatGPT-3.5 and ChatGPT-4 showed the greatest variability. In addition, ChatGPT-3.5, ChatGPT-4, and Bard exhibited the largest fluctuations in specificity when analyzing two medications belonging to the same drug class.
CONCLUSION: Microsoft Bing AI had the highest accuracy and specificity, outperforming Google Bard, ChatGPT-3.5, and ChatGPT-4. The findings highlight the significant potential these AI tools hold in transforming patient care. While the AI platforms evaluated here are not without limitations, their ability to quickly analyze potentially significant interactions with good sensitivity suggests a promising step towards improved patient safety.
RESULTS: A total of 221 community pharmacists participated in the current study (a response rate was not calculated because opt-in recruitment strategies were used). Nearly half of the pharmacists (n= 107, 48.4%) indicated a willingness to incorporate ChatGPT into their pharmacy practice. A similar proportion (n= 105, 47.5%) demonstrated a high perceived-benefit score for ChatGPT, while around 37% of pharmacists (n= 81) expressed a high concern score about ChatGPT. More than 70% of pharmacists believed that ChatGPT lacked the ability to apply human judgment and make complicated ethical judgments in its responses (n= 168). Finally, logistic regression analysis showed that pharmacists with previous experience using ChatGPT were more willing to integrate it into their pharmacy practice than those without such experience (OR= 2.312, p= 0.035).
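As a hedged illustration of how such an odds ratio is obtained (this is not the study's code or dataset; the variable names and simulated responses are assumptions), a logistic regression of willingness on prior ChatGPT experience can be fit and its coefficient exponentiated:

```python
# Minimal sketch with simulated data: an odds ratio such as OR = 2.312 comes from
# exp(beta) in a logistic regression of willingness (0/1) on prior experience (0/1).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({"previous_experience": rng.integers(0, 2, 221)})
# Simulated outcome: prior users are made more likely to be willing (illustration only).
logit_p = -0.3 + 0.8 * df["previous_experience"]
df["willing"] = rng.random(221) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(df[["previous_experience"]].astype(float))
fit = sm.Logit(df["willing"].astype(int), X).fit(disp=0)
print(np.exp(fit.params["previous_experience"]))  # odds ratio for prior experience
print(fit.pvalues["previous_experience"])         # corresponding p-value
```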
CONCLUSION: While pharmacists show a willingness to incorporate ChatGPT into their practice, especially those with prior experience, there are significant concerns. These concerns mainly revolve around the tool's limited capacity for human-like judgment and ethical decision-making. These findings are crucial for the future development and integration of AI tools in pharmacy practice.
METHODS: This is a survey-based cross-sectional study involving the general public of Jordan. The study took place in various Jordanian cities from May 2 to June 1, 2023. The questionnaire, built with Google Forms, was shared through various social media channels (such as Facebook and WhatsApp).
RESULTS: The questionnaire received responses from 800 participants. A sizable portion of the respondents were unaware of telepharmacy (n= 343, 42.9%), and only a small proportion had previously utilized it (n= 131, 16.4%). Participants viewed the main advantages of telepharmacy as minimizing unnecessary trips to pharmacies (n= 668, 83.5%) and reducing travel time and expenses (n= 632, 79.0%). However, the primary concern was the mental effort required to use the service (n= 465, 58.1%). Of the respondents, 61.3% (n= 490) indicated a willingness to adopt telepharmacy services in the future. Regression analysis indicated that men were more willing to use the service than women (OR= 1.947, p<0.001), and that people living in northern and southern Jordan exhibited a greater willingness than those in the central region (OR= 2.168, p<0.001).
CONCLUSION: The results reveal a positive attitude towards telepharmacy and a significant readiness to embrace it among the Jordanian population. However, for broader acceptance and utilization, apprehensions regarding the service need to be addressed. Doing so could improve access to pharmaceutical care, particularly for patients living in remote areas of Jordan.
METHODS: A cross-sectional study was conducted between April and May 2023 to assess PharmD students' perceptions, concerns, and experiences regarding the integration of ChatGPT into clinical pharmacy education. The study used convenience sampling through online platforms and a questionnaire with sections on demographics, perceived benefits, concerns, and experience with ChatGPT. Statistical analysis was performed using SPSS, including descriptive and inferential analyses.
RESULTS: The findings of the study, involving 211 PharmD students, revealed that the majority of participants were male (77.3%) and had prior experience with artificial intelligence (68.2%). Over two-thirds were aware of ChatGPT. Most students (n= 139, 65.9%) perceived potential benefits in using ChatGPT for various clinical tasks, with concerns centering on over-reliance, accuracy, and ethical considerations. Adoption of ChatGPT in clinical training varied: some students did not use it at all, while others used it for tasks such as evaluating drug-drug interactions and developing care plans. Previous users tended to report higher perceived benefits and lower concerns, but the differences were not statistically significant.
CONCLUSION: Utilizing ChatGPT in clinical training offers opportunities, but students' lack of trust in it for clinical decisions highlights the need for collaborative human-ChatGPT decision-making. It should complement healthcare professionals' expertise and be used strategically to compensate for human limitations. Further research is essential to optimize ChatGPT's effective integration.