METHODS: A cross-sectional study was conducted in Amman, the capital and largest city of Jordan, using a validated questionnaire. The questionnaire was distributed to pharmacists working in community and hospital pharmacies using a convenience sampling technique. Descriptive and inferential statistical analyses were performed.
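As an illustrative sketch only (not the authors' actual analysis code), the descriptive and inferential steps described above might look like the following. The column names (knowledge_score, practice_setting), the input file name, and the choice of a Mann-Whitney U test for the hospital-versus-community comparison are assumptions; the test is suggested only because scores are summarised as medians with interquartile ranges.

```python
# Minimal sketch of the descriptive / inferential analysis described above.
# Column names and the input file are hypothetical, not from the study's dataset.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("pharmacist_survey.csv")  # hypothetical file name

# Descriptive statistics: median and interquartile range of knowledge scores
median_score = df["knowledge_score"].median()
q1, q3 = df["knowledge_score"].quantile([0.25, 0.75])
print(f"Median (IQR): {median_score} ({q1}-{q3})")

# Inferential statistics: compare hospital vs community pharmacists
hospital = df.loc[df["practice_setting"] == "hospital", "knowledge_score"]
community = df.loc[df["practice_setting"] == "community", "knowledge_score"]
stat, p_value = mannwhitneyu(hospital, community, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, P = {p_value:.3f}")
```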
RESULTS: Of 340 questionnaires distributed, 300 (88%) pharmacists responded. Over 50% of pharmacists claimed to have sufficient knowledge regarding FDIs. The overall median (interquartile range) knowledge score was 18 (15-21), approximately 60%. The highest knowledge score was for the alcohol-drug interactions section (66.6%), followed by the common food-drug interactions and the timing of drug intake relative to food consumption sections (58.3% each), reflecting suboptimal knowledge of FDIs among the pharmacists.
CONCLUSION: Pharmacists had unsatisfactory knowledge of common FDIs, with no significant difference between hospital and community pharmacists. Therefore, more attention and effort should be directed toward improving awareness of potential food-drug interactions.
OBJECTIVES: To evaluate how medical students perceive ChatGPT for educational purposes and to assess its perceived advantages and disadvantages.
METHODS: A cross-sectional study was carried out using a questionnaire with five main domains to explore Jordanian medical students' perceptions, practices, and concerns regarding ChatGPT. The study was conducted from May to July 2023, and the data were collected using a convenience sampling technique through Google Forms shared within medical students' Facebook groups. Descriptive statistics summarised participant demographics, while logistic regression identified factors influencing ChatGPT usage. Variables with a P-value ≤ 0.05 in the multiple regression were considered statistically significant.
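As a non-authoritative sketch of the multiple logistic regression step described above, factors associated with ChatGPT usage could be modelled roughly as follows; the outcome and predictor names (used_chatgpt, age, proficiency, gender), the file name, and the model specification are assumptions for illustration, not reported details of the study.

```python
# Sketch of a multiple logistic regression on survey responses.
# Variable names and the input file are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_survey.csv")  # hypothetical file name

# Binary outcome: 1 if the student reports using ChatGPT, 0 otherwise
model = smf.logit("used_chatgpt ~ age + proficiency + C(gender)", data=df).fit()

# Odds ratios with 95% confidence intervals and P-values
results = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_lower": np.exp(model.conf_int()[0]),
    "CI_upper": np.exp(model.conf_int()[1]),
    "P": model.pvalues,
})
print(results)

# Predictors with P <= 0.05 are treated as statistically significant,
# matching the threshold stated in the Methods.
print(results[results["P"] <= 0.05])
```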
RESULTS: Nearly two-thirds (N = 136, 61.5%) claimed to have knowledge of AI but not in clinical settings. Most participants (88.5%, N = 216) were aware of ChatGPT, with 86.9% (N = 212) agreeing that 'Medical students can benefit from using ChatGPT.' Additionally, 83.2% (N = 203) felt that 'ChatGPT helps students quickly and easily summarize complex information.' Conversely, 78.3% (N = 191) expressed concerns about ChatGPT's potential inaccuracies, with accuracy and reliability cited as primary concerns. Multiple logistic regression showed that younger students (OR = 0.902, P = 0.025) and those with lower proficiency (OR = 0.487, P = 0.007) used ChatGPT more frequently than others.
CONCLUSION: Although ChatGPT could be beneficial in helping students develop medical knowledge, evidence-based academic regulations should guide its use. Future research should examine the enablers of and barriers to ChatGPT use in medical education.