METHODS: A cross-sectional study using the 'Hospital Survey on Patient Safety Culture (HSOPSC)' questionnaire was carried out at SGH from March to April 2018. Random sampling was used to select a wide range of staff, and a self-administered questionnaire was distributed to 500 hospital staff comprising doctors, nurses, pharmacists and other clinical and non-clinical staff. A total of 407 respondents completed the questionnaire, giving a final response rate of 81.4%. Statistical analysis of the survey data was performed using SPSS 22.0 for Windows and the Microsoft Excel-based Hospital Data Entry and Analysis Tool developed by the US Agency for Healthcare Research and Quality (AHRQ).
RESULTS: The majority of respondents graded overall patient safety as acceptable (63.1%), while only 3.4% graded it as excellent. The overall patient safety score was 50.1%, and most dimension scores were lower than the benchmark scores (64.8%). In general, the mean positive response rates for all dimensions were lower than the AHRQ composite data, except for "Organizational Learning - Continuous Improvement", which also had the highest positive response rate (80%), exceeding the AHRQ figure (73%). This result suggests that SGH is well placed to improve over time as it gains experience and accumulates knowledge. Conversely, the lowest positive response rate was for "Non-punitive Response to Error" (18%), indicating that most staff perceived that they would be punished for medical errors.
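The dimension scores above follow the AHRQ percent-positive convention, in which a composite score is the average of its items' percent-positive response rates. A minimal sketch of that calculation, using illustrative item counts rather than the survey's raw data:

```python
def percent_positive(item_counts):
    """AHRQ-style composite score: average of per-item percent-positive rates.

    item_counts: list of (positive_responses, total_responses) per item.
    """
    return sum(p / t for p, t in item_counts) / len(item_counts) * 100

# Hypothetical dimension with three items (numbers are illustrative only):
score = percent_positive([(320, 400), (280, 400), (300, 400)])
print(f"composite score: {score:.0f}%")
```

A composite such as "Organizational Learning - Continuous Improvement" would be computed this way across its constituent survey items.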
CONCLUSIONS: The level of patient safety culture in SGH is acceptable, although most dimension scores were lower than the benchmark scores. As a learning organisation, SGH should address staffing issues, improve handoffs and transitions, and develop a non-punitive culture in response to error.
Methods: A retrospective analysis of 2,076 laparoscopic cholecystectomy procedures performed at the Department of Surgical Gastroenterology at a tertiary referral centre in Northern India was conducted, and incidental malignancy in gallbladder polyps was analysed. The American Joint Committee on Cancer (AJCC) 8th edition tumour-node-metastasis (TNM) staging of gallbladder carcinoma was used.
Results: Of 54 patients with gallbladder polyps, 53 had benign histology and one had malignant cells in the lamina propria, suggestive of T1a adenocarcinoma. The patient with the malignant polyp was older (57 years) than the patients in the non-cancer group, whose mean age was 45 years (P = 0.039). The malignant polyp was approximately 4 mm in size, significantly smaller than the mean size of 7.9 mm for the benign polyps (P = 0.031).
Conclusion: Cholecystectomy needs to be considered early in the management of small-sized gallbladder polyps, particularly in areas endemic for gallbladder carcinoma.
Methods: A cross-sectional study was used. In total, 427 unrelated healthy Thai-Muslim blood donors (aged 17-65 years) living in the three southern border provinces were selected via simple random sampling; donors found to be positive for infectious markers were excluded. All samples were analysed for the JK*A and JK*B alleles using PCR with sequence-specific primers (PCR-SSP). Pearson's chi-squared and Fisher's exact tests were used to compare the JK frequencies of the southern Thai-Muslims with those of other populations reported previously.
Results: A total of 427 donors (315 males and 112 females; median age 29 years, interquartile range 18 years) were analysed. The JK*A/JK*B genotype was the most common, and the JK*A and JK*B allele frequencies among the southern Thai-Muslims were 55.2% and 44.8%, respectively. These frequencies differed significantly from those of the central Thai, Korean, Japanese, Brazilian-Japanese, Chinese, Filipino, African and Native American populations (P < 0.05). Predicted JK phenotypes were compared with different groups of Malaysians; the Jk(a+b+) phenotype frequency among southern Thai-Muslims was significantly higher than that of Malaysian Malays and Indians (P < 0.05).
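The population comparisons above rest on a Pearson chi-squared test applied to a 2x2 table of allele counts. A minimal sketch of that test follows; the southern Thai-Muslim counts are back-calculated from the reported proportions (55.2%/44.8% of 854 alleles), while the comparison population's counts are hypothetical, not taken from any of the cited populations:

```python
import math

def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-squared test (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (chi2, p) with 1 degree of freedom."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = n * (a * d - b * c) ** 2 / (row1 * row2 * col1 * col2)
    # For 1 degree of freedom, the chi-squared survival function
    # reduces to erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# JK*A and JK*B allele counts: ~471 and ~383 of 854 alleles (from the
# reported 55.2%/44.8%), versus a hypothetical comparison population.
chi2, p = pearson_chi2_2x2(471, 383, 300, 500)
print(f"chi2 = {chi2:.2f}, p = {p:.3g}")
```

A P value below 0.05 would indicate, as in the abstract, that the two populations' allele frequencies differ significantly.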
Conclusions: The JK*A and JK*B allele frequencies in a southern Thai-Muslim population were determined; these can be applied not only to solving problems in transfusion medicine but also as tools for genetic anthropology and population studies.
AIM: To compare the quality of CT brain images produced by a fixed CT scanner and a portable CT scanner (CereTom).
METHODS: This work was a single-centre retrospective study of CT brain images from 112 neurosurgical patients. Hounsfield units (HUs) of the images from CereTom were measured for air, water and bone. Three assessors independently evaluated the images from the fixed CT scanner and CereTom. Streak artefacts, visualisation of lesions and grey-white matter differentiation were evaluated at three different levels (centrum semiovale, basal ganglia and middle cerebellar peduncles). Each evaluation was scored as 1 (poor), 2 (average) or 3 (good), and the three assessors' scores were summed to form an ordinal score of 3 to 9.
RESULTS: HUs for air, water and bone from CereTom were within the values recommended by the American College of Radiology (ACR). Streak artefact evaluation scores for the fixed CT scanner versus CereTom were 8.54 versus 7.46 (Z = -5.67) at the centrum semiovale, 8.38 (SD = 1.12) versus 7.32 (SD = 1.63) at the basal ganglia and 8.21 (SD = 1.30) versus 6.97 (SD = 2.77) at the middle cerebellar peduncles. Grey-white matter differentiation scores were 8.27 (SD = 1.04) versus 7.21 (SD = 1.41) at the centrum semiovale, 8.26 (SD = 1.07) versus 7.00 (SD = 1.47) at the basal ganglia and 8.38 (SD = 1.11) versus 6.74 (SD = 1.55) at the middle cerebellar peduncles. Visualisation-of-lesion scores were 8.86 versus 8.21 (Z = -4.24) at the centrum semiovale, 8.93 versus 8.18 (Z = -5.32) at the basal ganglia and 8.79 versus 8.06 (Z = -4.93) at the middle cerebellar peduncles. All differences were significant (P < 0.01).
CONCLUSIONS: The study showed a significant difference in the image quality produced by the fixed CT scanner and CereTom, with the latter inferior to the former. However, the HUs of the images produced by CereTom fulfilled the ACR's recommendations.
Methods: Sixty-four patients aged 18-60 years of American Society of Anesthesiologists (ASA) class I-II who underwent elective surgery were randomised to either a Marsh group (n = 32) or a Schnider group (n = 32). All patients received a 1 μg/kg loading dose of dexmedetomidine, followed by target-controlled infusion (TCI) of remifentanil at 2 ng/mL. After the effect-site concentration (Ce) of remifentanil reached 2 ng/mL, propofol TCI induction was started: the Marsh group at a target plasma concentration (Cpt) of 2 μg/mL and the Schnider group at a target effect-site concentration (Cet) of 2 μg/mL. If induction had not occurred by 3 min, the target concentration (Ct) was increased by 0.5 μg/mL every 30 sec until successful induction. The Ct at successful induction, induction time, Ce at successful induction and haemodynamic parameters were recorded.
Results: The Ct for successful induction in the Schnider group was significantly lower than in the Marsh group (3.48 [0.90] versus 4.02 [0.67] μg/mL; P = 0.01). The induction time was also shorter in the Schnider group compared with the Marsh group (134.96 [50.91] versus 161.59 [39.64] sec; P = 0.02). There were no significant differences in haemodynamic parameters or Ce at successful induction.
Conclusion: In this between-group comparison, dexmedetomidine reduced the Ct required for induction and shortened the induction time in the Schnider group. Including baseline groups without dexmedetomidine in a four-arm comparison of the two models would strengthen the validity of the findings.
METHODS: Five graph models were fit using data from 1574 people who inject drugs in Hartford, CT, USA. We used a degree-corrected stochastic block model, based on goodness-of-fit, to model networks of injection drug users. We simulated transmission of HCV and HIV through this network with varying levels of HCV treatment coverage (0%, 3%, 6%, 12%, or 24%) and varying baseline HCV prevalence in people who inject drugs (30%, 60%, 75%, or 85%). We compared the effectiveness of seven treatment-as-prevention strategies on reducing HCV prevalence over 10 years and 20 years versus no treatment. The strategies consisted of treatment assigned to either a randomly chosen individual who injects drugs or to an individual with the highest number of injection partners. Additional strategies explored the effects of treating either none, half, or all of the injection partners of the selected individual, as well as a strategy based on respondent-driven recruitment into treatment.
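The core of the simulation described above is repeated rounds of transmission over an injection network followed by curative treatment of a subset of infected individuals. The sketch below illustrates one such round in miniature; the random network, the per-partner transmission probability `BETA` and the random-assignment treatment rule are illustrative assumptions, not the authors' fitted degree-corrected stochastic block model:

```python
import random

random.seed(0)

N = 1000  # hypothetical population of people who inject drugs

# Hypothetical random injection network (not the fitted model from the study):
partners = {i: set() for i in range(N)}
for _ in range(3 * N):
    a, b = random.randrange(N), random.randrange(N)
    if a != b:
        partners[a].add(b)
        partners[b].add(a)

infected = set(random.sample(range(N), int(0.60 * N)))  # 60% baseline prevalence
BETA = 0.01             # assumed per-partner, per-year transmission probability
TREATED_PER_YEAR = 120  # 12% coverage per 1000 PWID per year (from the abstract)

for year in range(10):
    # Transmission: each infected person may pass HCV to each partner
    # (re-infecting an already infected partner is a no-op via set union).
    newly_infected = {p for i in infected for p in partners[i]
                      if random.random() < BETA}
    infected |= newly_infected
    # Treatment-as-prevention: cure randomly chosen infected individuals.
    cured = random.sample(sorted(infected),
                          min(TREATED_PER_YEAR, len(infected)))
    infected -= set(cured)

print(f"prevalence after 10 years: {len(infected) / N:.1%}")
```

Varying `TREATED_PER_YEAR`, the baseline prevalence and the rule for choosing whom to treat (random versus highest-degree, with or without treating partners) reproduces the kind of strategy comparison the study performs at scale.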
FINDINGS: Our model estimates show that at the highest baseline HCV prevalence in people who inject drugs (85%), expansion of treatment coverage does not substantially reduce HCV prevalence for any treatment-as-prevention strategy. However, when baseline HCV prevalence is 60% or lower, treating more than 120 (12%) individuals per 1000 people who inject drugs per year would probably eliminate HCV within 10 years. On average, assigning treatment randomly to individuals who inject drugs is better than targeting individuals with the most injection partners. Treatment-as-prevention strategies that treat additional network members are among the best performing strategies and can enhance less effective strategies that target the degree (ie, the highest number of injection partners) within the network.
INTERPRETATION: Successful HCV treatment as prevention should incorporate the baseline HCV prevalence and will achieve the greatest benefit when coverage is sufficiently expanded.
FUNDING: National Institute on Drug Abuse.