Displaying publications 41 - 60 of 1702 in total

  1. Coromilas EJ, Kochav S, Goldenthal I, Biviano A, Garan H, Goldbarg S, et al.
    Circ Arrhythm Electrophysiol, 2021 03;14(3):e009458.
    PMID: 33554620 DOI: 10.1161/CIRCEP.120.009458
    [Figure: see text].
    Matched MeSH terms: Time Factors
  2. Lim HM, Teo CH, Ng CJ, Chiew TK, Ng WL, Abdullah A, et al.
    JMIR Med Inform, 2021 Feb 26;9(2):e23427.
    PMID: 33600345 DOI: 10.2196/23427
    BACKGROUND: During the COVID-19 pandemic, there was an urgent need to develop an automated COVID-19 symptom monitoring system to reduce the burden on the health care system and to provide better self-monitoring at home.

    OBJECTIVE: This paper aimed to describe the development process of the COVID-19 Symptom Monitoring System (CoSMoS), which consists of a self-monitoring, algorithm-based Telegram bot and a teleconsultation system. We describe all the essential steps from the clinical perspective and our technical approach in designing, developing, and integrating the system into clinical practice during the COVID-19 pandemic as well as lessons learned from this development process.

    METHODS: CoSMoS was developed in three phases: (1) requirement formation to identify clinical problems and to draft the clinical algorithm, (2) development testing iteration using the agile software development method, and (3) integration into clinical practice to design an effective clinical workflow using repeated simulations and role-playing.

    RESULTS: We completed the development of CoSMoS in 19 days. In Phase 1 (ie, requirement formation), we identified three main functions: a daily automated reminder system for patients to self-check their symptoms, a safe patient risk assessment to guide patients in clinical decision making, and an active telemonitoring system with real-time phone consultations. The system architecture of CoSMoS involved five components: Telegram instant messaging, a clinician dashboard, system administration (ie, back end), a database, and development and operations infrastructure. The integration of CoSMoS into clinical practice involved the consideration of COVID-19 infectivity and patient safety.

    CONCLUSIONS: This study demonstrated that developing a COVID-19 symptom monitoring system within a short time during a pandemic is feasible using the agile development method. Time factors and communication between the technical and clinical teams were the main challenges in the development process. The development process and lessons learned from this study can guide the future development of digital monitoring systems during the next pandemic, especially in developing countries.

    Matched MeSH terms: Time Factors
  3. Mohd-Ali B, Chen LY
    Cont Lens Anterior Eye, 2021 02;44(1):72-75.
    PMID: 32624364 DOI: 10.1016/j.clae.2020.06.007
    PURPOSE: To analyse and compare the alterations in corneal endothelium morphology induced by different materials and durations of wearing soft contact lenses (CL) among young adults living in Kuala Lumpur.

    METHODS: Healthy soft CL wearers were invited to participate in this study. Visual acuity (VA) was measured using the Snellen chart, and subjective refraction was performed using the cross-cylinder technique. Standard ocular assessments were conducted using a slit lamp biomicroscope, and the morphology of corneal endothelial cells (endothelial cell density, ECD; coefficient of variation, COV; hexagonality, HEX; and central corneal thickness, CCT) was evaluated using a non-contact specular microscope. Statistical analysis was conducted using ANOVA, and data from the right eye only were included.

    RESULTS: A total of 72 subjects (32 SiHy and 40 HCL wearers) and 24 non-CL wearers (control) participated in this study. The gender distribution of the study subjects was 13 males and 59 females, with a mean age of 22.15 ± 1.84 years. The mean refraction was -1.86 ± 1.25DS. The duration of CL wear ranged from 1 to 9 years. Subjects were divided into 2 groups according to duration of CL wear for analysis purposes: Group 1 (<5 years) and Group 2 (≥5 years). Statistical analysis showed significant alterations in ECD, COV and HEX of CL wearers (p 

    Matched MeSH terms: Time Factors
  4. Nair RS, Billa N, Leong CO, Morris AP
    Pharm Dev Technol, 2021 Feb;26(2):243-251.
    PMID: 33274672 DOI: 10.1080/10837450.2020.1860087
    Tocotrienol (TRF) ethosomes were developed and evaluated in vitro for potential transdermal delivery against melanoma. The optimised TRF ethosomal size ranged from 64.9 ± 2.2 nm to 79.6 ± 3.9 nm and the zeta potential (ZP) from -53.3 mV to -62.0 ± 2.6 mV. Characterisation of the ethosomes by ATR-FTIR indicated the successful formation of TRF ethosomes. Scanning electron microscopy (SEM) images demonstrated the spherical shape of the ethosomes, and the entrapment efficiencies of all the formulations were above 66%. In vitro permeation studies using full-thickness human skin showed that the permeation of gamma-T3 from the TRF ethosomal formulations was significantly higher (p 
    Matched MeSH terms: Time Factors
  5. Wong LP, Alias H
    J Behav Med, 2021 Feb;44(1):18-28.
    PMID: 32757088 DOI: 10.1007/s10865-020-00172-z
    Monitoring public psychological and behavioural responses during the early phase of the coronavirus disease 2019 (COVID-19) outbreak is important for the management and control of infection. This study aims to investigate the temporal trend in (1) avoidance and protective behaviours, (2) fear, (3) socio-economic impact, and (4) anxiety levels during the early phase of the COVID-19 pandemic. As a high level of anxiety may have a detrimental impact during an infectious disease outbreak, factors associated with anxiety were also explored. The survey was carried out over 10 weeks and the responses were divided into three periods of around 3 weeks each: 25 January-21 February, 22 February-17 March and 18 March-3 April (the period during which the Malaysian Government issued the Movement Control Order). Findings revealed that most of the psychobehavioural variables showed small increases during the first (25 January-21 February) and second (22 February-17 March) periods, and high psychobehavioural responses were reported during the third period. A total of 72.1% (95%CI = 69.2-75.0) reported moderate to severe anxiety as measured by the State-Trait Anxiety Inventory. Factors associated with moderate to severe anxiety were a high perception of severity (OR = 2.09; 95%CI = 1.48-2.94), high perceived susceptibility (OR = 1.71; 95%CI = 1.17-2.50), a high impact score (OR = 1.63; 95%CI = 1.17-2.26) and a high fear score (OR = 1.47; 95%CI = 1.01-2.14). In conclusion, psychological and behavioural responses were found to increase with the progression of the outbreak. The high anxiety levels found in this study warrant the provision of mental health interventions during the early phase of a COVID-19 outbreak.
    Matched MeSH terms: Time Factors
  6. Rosenthal VD, Bat-Erdene I, Gupta D, Rajhans P, Myatra SN, Muralidharan S, et al.
    J Vasc Access, 2021 Jan;22(1):34-41.
    PMID: 32406328 DOI: 10.1177/1129729820917259
    BACKGROUND: Short-term peripheral venous catheter-associated bloodstream infection rates have not been systematically studied in Asian countries, and data on peripheral venous catheter-associated bloodstream infections incidence by number of short-term peripheral venous catheter days are not available.

    METHODS: Prospective surveillance study of peripheral venous catheter-associated bloodstream infections conducted from 1 September 2013 to 31 May 2019 in 262 intensive care units, members of the International Nosocomial Infection Control Consortium (INICC), from 78 hospitals in 32 cities of 8 countries in the South-East Asia Region: China, India, Malaysia, Mongolia, Nepal, Philippines, Thailand, and Vietnam. For this research, we applied the definitions and criteria of the CDC NHSN, the methodology of the INICC, and the INICC Surveillance Online System software.

    RESULTS: We followed 83,295 intensive care unit patients for 369,371 bed-days and 376,492 peripheral venous catheter-days. We identified 999 peripheral venous catheter-associated bloodstream infections, amounting to a rate of 2.65/1000 peripheral venous catheter-days. Mortality in patients with peripheral venous catheter but without peripheral venous catheter-associated bloodstream infections was 4.53% and 12.21% in patients with peripheral venous catheter-associated bloodstream infections. The mean length of stay in patients with peripheral venous catheter but without peripheral venous catheter-associated bloodstream infections was 4.40 days and 7.11 days in patients with peripheral venous catheter and peripheral venous catheter-associated bloodstream infections. The microorganism profile showed 67.1% were Gram-negative bacteria: Escherichia coli (22.9%), Klebsiella spp (10.7%), Pseudomonas aeruginosa (5.3%), Enterobacter spp. (4.5%), and others (23.7%). The predominant Gram-positive bacteria were Staphylococcus aureus (11.4%).
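    The headline rate in this abstract is a device-day incidence: infections divided by catheter-days, scaled to 1000. A minimal sketch reproducing the reported figure (the function name is illustrative, not from the paper):

```python
def rate_per_1000_device_days(events: int, device_days: int) -> float:
    """Incidence rate per 1000 device-days."""
    return 1000.0 * events / device_days

# 999 PVC-associated bloodstream infections over 376,492 peripheral venous catheter-days
print(round(rate_per_1000_device_days(999, 376_492), 2))  # 2.65
```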

    CONCLUSIONS: Infection prevention programs must be implemented to reduce the incidence of peripheral venous catheter-associated bloodstream infections.

    Matched MeSH terms: Time Factors
  7. Tan-Loh J, Cheong BMK
    Med J Malaysia, 2021 01;76(1):24-28.
    PMID: 33510104
    INTRODUCTION: COVID-19 is a highly transmissible respiratory virus that has affected millions of people worldwide in the span of months. The burden of disease among healthcare workers (HCW) has not been well studied despite reports of infectivity and transmission around the world. Two HCW in Hospital Teluk Intan (HTI) contracted COVID-19 while attending a social event. They were in close proximity with colleagues upon returning to work, resulting in the spread of infection among other HCW in HTI.

    OBJECTIVE: The objectives of this paper are to gain a better understanding of the key presenting symptoms of COVID-19 in HCWs in a district specialist hospital, to establish the proportion of symptomatic COVID-19 cases among HCWs and its severity and to determine the time taken from onset of symptoms or perceived exposure to diagnostic testing.

    METHODOLOGY: This is a retrospective descriptive analysis of clinical characteristics of subjects infected with COVID-19 among HCW in HTI. Their demography and clinical characteristics were recorded.

    RESULTS: There were 47 HCW in HTI who tested positive for COVID-19. The mean age of the patients was 37.5 years. Seven patients (15.2%) had at least one comorbidity. The average duration from perceived close contact to onset of symptoms was 4.5 days, while the mean duration from symptoms to the first positive RT-PCR result was 3.4 days. Six patients (13.0%) were asymptomatic throughout, whereas 40 (87.0%) had at least one symptom prior to hospitalization. The most commonly reported symptoms were fever (65.2%), sore throat (39.1%) and cough (37.0%). In terms of severity, the majority of patients experienced mild symptoms (Group 2, 52.2%). Two patients (4.3%) with multiple comorbidities had severe disease requiring ICU admission and mechanical ventilation. There were no mortalities, and the longest-staying patient was hospitalized for 18 days. The high rate of infection among HCW in HTI can be attributed to working in close proximity while in the asymptomatic incubation phase; notably, no HCW directly involved in the care of COVID-19-positive patients tested positive.

    CONCLUSION: We report that HCW share similar clinical characteristics of COVID-19 infection as those of non HCW patients in earlier studies. The infection can spread rapidly within healthcare settings via close contacts among infected HCWs. As such, we advocate distancing when working and usage of personal protective equipment when treating patients with respiratory illness to reduce transmission of COVID-19.

    Matched MeSH terms: Time Factors
  8. Siow SL, Mahendran HA, Najmi WD, Lim SY, Hashimah AR, Voon K, et al.
    Asian J Surg, 2021 Jan;44(1):158-163.
    PMID: 32423838 DOI: 10.1016/j.asjsur.2020.04.007
    BACKGROUND: To evaluate the clinical outcomes and satisfaction of patients following laparoscopic Heller myotomy for achalasia cardia in four tertiary centers.

    METHODS: Fifty-five patients with achalasia cardia who underwent laparoscopic Heller myotomy between 2010 and 2019 were enrolled. The adverse events and clinical outcomes were analyzed. Overall patient satisfaction was also reviewed.

    RESULTS: The mean operative time was 144.1 ± 38.33 min, with no conversions to open surgery in this series. Intraoperative adverse events occurred in 7 (12.7%) patients, including oesophageal mucosal perforation (n = 4), superficial liver injury (n = 1), minor bleeding from the gastro-oesophageal fat pad (n = 1) and aspiration during induction requiring bronchoscopy (n = 1). Mean time to normal diet intake was 3.2 ± 2.20 days. Mean postoperative stay was 4.9 ± 4.30 days, and the majority of patients (n = 46; 83.6%) returned to normal daily activities within 2 weeks after surgery. The mean follow-up duration was 18.8 ± 13.56 months. Overall, clinical success (Eckardt ≤ 3) was achieved in all 55 (100%) patients, with significant improvements observed in all elements of the Eckardt score. Thirty-seven (67.3%) patients had complete resolution of dysphagia, while the remaining 18 (32.7%) had occasional dysphagia that was tolerable and did not require re-intervention. Nevertheless, all patients reported being either very satisfied or satisfied and would recommend the procedure to others.

    CONCLUSIONS: Laparoscopic Heller myotomy with anterior Dor fundoplication is safe and effective as a definitive treatment for achalasia cardia. It carries a low rate of oesophageal perforation and overall achieves a high degree of patient satisfaction with minimal complications.

    Matched MeSH terms: Time Factors
  9. Chiam PTL, Hayashida K, Watanabe Y, Yin WH, Kao HL, Lee MKY, et al.
    Open Heart, 2021 01;8(1).
    PMID: 33419935 DOI: 10.1136/openhrt-2020-001541
    OBJECTIVES: Transcatheter aortic valve replacement (TAVR) is increasingly performed. Physically small Asians have smaller aortic root and peripheral vessel anatomy. The influence of gender of Asian patients undergoing TAVR is unknown and may affect outcomes. The aim of this study was to assess sex differences in Asian patients undergoing TAVR.

    METHODS: Patients undergoing TAVR from eight countries were enrolled. In this retrospective analysis, we examined differences in characteristics, 30-day clinical outcomes and 1-year survival between female and male Asian patients.

    RESULTS: Eight hundred and seventy-three patients (54.4% women) were included. Women were older, smaller and had less coronary artery and lung disease but tended to have higher logistic EuroSCOREs. Smaller prostheses were used more often in women. Major vascular complications occurred more frequently in women (5.5% vs 1.8%, p<0.01); however, 30-day stroke and mortality (women vs men: 1.5% vs 1.6%, p=0.95; and 4.3% vs 3.4%, p=0.48) were similar. Functional status improvement was significant and comparable between the sexes. Conduction disturbance and permanent pacemaker requirements (11.2% vs 9.0%, p=0.52) were also similar, as was 1-year survival (women vs men: 85.6% vs 88.2%, p=0.25). The only predictors of 30-day mortality were major vascular injury in women and age in men.

    CONCLUSIONS: Asian women had significantly smaller stature and anatomy with some differences in clinical profiles. Despite more frequent major vascular complications, women had similar 30-day stroke or mortality rates. Functional status improvement was significant and comparable between the sexes. Conduction disturbance and permanent pacemaker requirements were similar as was 1-year survival.

    Matched MeSH terms: Time Factors
  10. Albahri AS, Hamid RA, Albahri OS, Zaidan AA
    Artif Intell Med, 2021 Jan;111:101983.
    PMID: 33461683 DOI: 10.1016/j.artmed.2020.101983
    CONTEXT AND BACKGROUND: Coronavirus disease (COVID-19) has rapidly gained a foothold and caused a global pandemic. Specialists are trying their best to tackle this global crisis. New challenges outlined from various medical perspectives may require a novel design solution. Asymptomatic COVID-19 carriers show different health conditions and no symptoms; hence, a differentiation process is required to avert the risk of chronic virus carriers.

    OBJECTIVES: Laboratory criteria and patient dataset are compulsory in constructing a new framework. Prioritisation is a popular topic and a complex issue for patients with COVID-19, especially for asymptomatic carriers due to multi-laboratory criteria, criterion importance and trade-off amongst these criteria. This study presents new integrated decision-making framework that handles the prioritisation of patients with COVID-19 and can detect the health conditions of asymptomatic carriers.

    METHODS: The methodology includes four phases. Firstly, eight important laboratory criteria are chosen using two feature selection approaches. Real and simulation datasets from various medical perspectives are integrated to produce a new dataset involving 56 patients with different health conditions and can be used to check asymptomatic cases that can be detected within the prioritisation configuration. The first phase aims to develop a new decision matrix depending on the intersection between 'multi-laboratory criteria' and 'COVID-19 patient list'. In the second phase, entropy is utilised to set the objective weight, and TOPSIS is adapted to prioritise patients in the third phase. Finally, objective validation is performed.
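    The entropy-weighting and TOPSIS steps described above follow the standard formulations; a self-contained sketch of that pipeline (function names and the toy data are illustrative, not the authors' code or dataset):

```python
import math

def entropy_weights(X):
    """Objective criterion weights via Shannon entropy:
    criteria with more dispersion across patients get more weight."""
    m, n = len(X), len(X[0])
    d = []
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        s = sum(col)
        p = [v / s for v in col]
        e = -sum(v * math.log(v) for v in p if v > 0) / math.log(m)
        d.append(1 - e)  # degree of diversification
    return [v / sum(d) for v in d]

def topsis(X, w, benefit):
    """Closeness coefficient to the ideal solution; rank alternatives
    by descending score. benefit[j] is True if higher is better."""
    m, n = len(X), len(X[0])
    norm = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[w[j] * X[i][j] / norm[j] for j in range(n)] for i in range(m)]
    best = [(max if benefit[j] else min)(V[i][j] for i in range(m)) for j in range(n)]
    worst = [(min if benefit[j] else max)(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        dp = math.dist(V[i], best)   # distance to ideal best
        dm = math.dist(V[i], worst)  # distance to ideal worst
        scores.append(dm / (dp + dm))
    return scores

# Toy decision matrix: 3 patients x 2 laboratory criteria (illustrative values)
X = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
w = entropy_weights(X)
print(topsis(X, w, benefit=[True, True]))  # third patient dominates -> score 1.0
```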

    RESULTS: The patients are prioritised based on the selected criteria in descending order of health situation starting from the worst to the best. The proposed framework can discriminate among mild, serious and critical conditions and put patients in a queue while considering asymptomatic carriers. Validation findings revealed that the patients are classified into four equal groups and showed significant differences in their scores, indicating the validity of ranking.

    CONCLUSIONS: This study implies and discusses the numerous benefits of the suggested framework in detecting/recognising the health condition of patients prior to discharge, supporting the hospitalisation characteristics, managing patient care and optimising clinical prediction rule.

    Matched MeSH terms: Time Factors
  11. Qamruddin I, Alam MK, Mahroof V, Karim M, Fida M, Khamis MF, et al.
    Pain Res Manag, 2021;2021:6624723.
    PMID: 34035871 DOI: 10.1155/2021/6624723
    Objective: Low-intensity pulsed ultrasound (LIPUS) is a noninvasive modality to stimulate bone remodeling (BR) and the healing of hard and soft tissues. This research evaluates the biostimulatory effect of LIPUS on the rate of orthodontic tooth movement (OTM) and associated pain, when applied at 3-week intervals.

    Methods: Twenty-two patients (11 males and 11 females; mean age 19.18 ± 2.00 years) with Angle's Class II division 1 malocclusion requiring bilateral extraction of the maxillary first bicuspids were recruited for this split-mouth randomized clinical trial. After the initial stage of alignment and leveling with contemporary 22-mil edgewise MBT (McLaughlin-Bennett-Trevisi) prescription brackets (Ortho Organizers, Carlsbad, Calif) and bilateral extraction of the premolars, a 6 mm nickel-titanium spring was used to retract each canine separately by applying 150 g of force on 0.019 × 0.025-in stainless steel working archwires. LIPUS (1.1 MHz frequency, 30 mW/cm2 output intensity) was applied extraorally for 20 minutes over the root of the maxillary canine on the experimental side and reapplied after 3 weeks for 2 more successive visits, whereas the other side served as placebo. A numerical rating scale- (NRS-) based questionnaire was given to the patients at each visit to record their weekly pain experience. Impressions were also made at each visit before the application of LIPUS (T1, T2, and T3). Models were scanned with a CAD/CAM scanner (Planmeca, Helsinki, Finland). The Mann-Whitney U test was applied for comparison of canine movement and pain intensity between the two groups.

    Results: No significant difference in the rate of canine movement was found between the experimental (0.90 mm ± 0.33 mm) and placebo (0.81 mm ± 0.32 mm) groups. There was also no difference in pain reduction between the experimental and placebo groups (p > 0.05).

    Conclusion: Single-dose application of LIPUS at 3-week intervals is ineffective in stimulating the OTM and reducing associated treatment pain.

    Matched MeSH terms: Time Factors
  12. Atia A, Alrawaiq NS, Abdullah A
    Curr Pharm Biotechnol, 2021;22(8):1085-1098.
    PMID: 32988349 DOI: 10.2174/1389201021666200928095950
    BACKGROUND: The most common preparation of tocotrienols is the Tocotrienol-Rich Fraction (TRF). This study aimed to investigate whether TRF induced liver Nrf2 nuclear translocation and influenced the expression of Nrf2-regulated genes.

    METHODS: In the Nrf2 induction study, mice were divided into control, 2000 mg/kg TRF and diethyl maleate treated groups. After acute treatment, mice were sacrificed at specific time points. Liver nuclear extracts were prepared and Nrf2 nuclear translocation was detected through Western blotting. To determine the effect of increasing doses of TRF on the extent of liver nuclear Nrf2 translocation and its implication on the expression levels of several Nrf2-regulated genes, mice were divided into 5 groups (control, 200, 500 and 1000 mg/kg TRF, and butylated hydroxyanisole-treated groups). After 14 days, mice were sacrificed and liver RNA was extracted for qPCR assay.

    RESULTS: 2000 mg/kg TRF administration initiated Nrf2 nuclear translocation within 30 min; levels reached a maximum at around 1 h and dropped to half-maximal levels by 24 h. Incremental doses of TRF resulted in dose-dependent increases in liver nuclear Nrf2 levels, along with concomitant dose-dependent increases in the expression of Nrf2-regulated genes.

    CONCLUSION: TRF activated the liver Nrf2 pathway resulting in increased expression of Nrf2-regulated cytoprotective genes.

    Matched MeSH terms: Time Factors
  13. Alakbari FS, Mohyaldinn ME, Ayoub MA, Muhsan AS, Hussein IA
    PLoS One, 2021;16(4):e0250466.
    PMID: 33901240 DOI: 10.1371/journal.pone.0250466
    Sand management is essential for enhancing production in oil and gas reservoirs. The critical total drawdown (CTD) is used as a reliable indicator of the onset of sand production; hence, its accurate prediction is very important. Many CTD prediction correlations have been published in the literature; however, the accuracy of most of these models is questionable. Therefore, further improvement in CTD prediction is needed for more effective and successful sand control. This article presents a robust and accurate fuzzy logic (FL) model for predicting the CTD. Literature data from 23 wells in the North Adriatic Sea were used to develop the model. The data were split into a 70% training set and a 30% testing set. Trend analysis was conducted to verify that the developed model follows the correct physical behaviour trends of the input parameters. Statistical analyses were performed to check the model's reliability and accuracy against the published correlations. The results demonstrated that the proposed FL model substantially outperforms the published correlations, with higher prediction accuracy. This was verified by the highest correlation coefficient, the lowest average absolute percent relative error (AAPRE), the lowest maximum error (max. AAPRE), the lowest standard deviation (SD), and the lowest root mean square error (RMSE). The lowest AAPRE was 8.6%, whereas the highest correlation coefficient was 0.9947. An AAPRE below 10% indicates that the FL model predicts the CTD more accurately than other published models (>20% AAPRE). Moreover, further analysis indicated the robustness of the FL model, as it follows the trends of all physical parameters affecting the CTD.
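    The error metrics quoted in this abstract are standard; for reference, AAPRE and RMSE can be computed as follows (the numbers below are toy values, not the paper's well data):

```python
def aapre(actual, predicted):
    """Average absolute percent relative error, in %."""
    return 100.0 / len(actual) * sum(abs((a - p) / a) for a, p in zip(actual, predicted))

def rmse(actual, predicted):
    """Root mean square error."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

# Hypothetical measured vs model-predicted CTD values
measured = [10.0, 20.0, 40.0]
predicted = [11.0, 19.0, 38.0]
print(round(aapre(measured, predicted), 2))  # 6.67
print(round(rmse(measured, predicted), 3))   # 1.414
```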
    Matched MeSH terms: Time Factors
  14. Aziz F, Malek S, Ibrahim KS, Raja Shariff RE, Wan Ahmad WA, Ali RM, et al.
    PLoS One, 2021;16(8):e0254894.
    PMID: 34339432 DOI: 10.1371/journal.pone.0254894
    BACKGROUND: Conventional risk score for predicting short and long-term mortality following an ST-segment elevation myocardial infarction (STEMI) is often not population specific.

    OBJECTIVE: Apply machine learning for the prediction and identification of factors associated with short and long-term mortality in Asian STEMI patients and compare with a conventional risk score.

    METHODS: The National Cardiovascular Disease Database for Malaysia registry, covering a multi-ethnic, heterogeneous Asian population, was used for in-hospital (6299 patients), 30-day (3130 patients), and 1-year (2939 patients) model development. Fifty variables were considered. Mortality prediction was analysed using feature selection methods with machine learning algorithms and compared to the Thrombolysis in Myocardial Infarction (TIMI) score. Invasive management of varying degrees was selected among the important variables that improved mortality prediction.

    RESULTS: Model performance using complete and reduced variable sets produced an area under the receiver operating characteristic curve (AUC) from 0.73 to 0.90. The best machine learning models for in-hospital, 30-day, and 1-year mortality outperformed the TIMI risk score (AUC = 0.88, 95% CI: 0.846-0.910 vs AUC = 0.81, 95% CI: 0.772-0.845; AUC = 0.90, 95% CI: 0.870-0.935 vs AUC = 0.80, 95% CI: 0.746-0.838; and AUC = 0.84, 95% CI: 0.798-0.872 vs AUC = 0.76, 95% CI: 0.715-0.802; p < 0.0001 for all). The TIMI score underestimates patients' risk of mortality: 90% of non-surviving patients were classified as high risk (>50%) by the machine learning algorithm, compared with 10-30% by TIMI. Common predictors identified for short- and long-term mortality were age, heart rate, Killip class, fasting blood glucose, prior primary PCI or pharmaco-invasive therapy, and diuretics. The final algorithm was converted into an online tool with a database for continuous data archiving and algorithm validation.
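    The AUC figures above compare discrimination between models. AUC equals the probability that a randomly chosen positive case (non-survivor) is scored higher than a randomly chosen negative case, which gives a compact way to compute it; a minimal sketch with made-up labels and scores:

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outranks a randomly chosen negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = died, 0 = survived; scores are hypothetical model risk estimates
labels = [0, 0, 1, 1]
print(auc(labels, [0.1, 0.4, 0.35, 0.8]))  # 0.75
print(auc(labels, [0.2, 0.3, 0.6, 0.9]))   # 1.0  (perfect ranking)
```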

    CONCLUSIONS: In a multi-ethnic population, patients with STEMI were better classified using the machine learning method compared to TIMI scoring. Machine learning allows for the identification of distinct factors in individual Asian populations for better mortality prediction. Ongoing continuous testing and validation will allow for better risk stratification and potentially alter management and outcomes in the future.

    Matched MeSH terms: Time Factors
  15. Prodhan AHMSU, Cavestro C, Kamal MA, Islam MA
    CNS Neurol Disord Drug Targets, 2021;20(8):736-754.
    PMID: 34348635 DOI: 10.2174/1871527320666210804155617
    Alzheimer’s disease (AD) is a progressive neurodegenerative disorder characterized by sleep, behavioral, memory, and cognitive deteriorations. Sleep disturbance (SD) is a major disease burden in AD, which has a reciprocal relationship with AD pathophysiology. It aggravates memory, behavioral, and cognitive complications in AD. Different studies have found that melatonin hormone levels reduce even in the pre-clinical stages of AD. Melatonin is the primary sleep-regulating hormone and a potent antioxidant with neuroprotective roles. The decrease in melatonin levels can thus promote SD and AD neuropathology. Exogenous melatonin has the potential to alleviate neuropathology and SD in AD by different mechanisms. Various studies have been conducted to assess the efficacy of exogenous melatonin to treat SD in AD. Though most of the studies suggest that melatonin is useful to ameliorate SD in AD, the remaining studies show opposite results. The timing, dosage, and duration of melatonin administration along with disease condition, genetic, environmental, and some other factors can be responsible for the discrepancies between the studies. More extensive trials with longer durations and higher dosage forms and studies including bright light therapy and melatonin agonists (ramelteon, agomelatine, and tasimelteon) should be performed to determine the efficacy of melatonin to treat SD in AD.
    Matched MeSH terms: Time Factors
  16. Mosavat M, Mirsanjari M, Lwaleed BA, Kamarudin M, Omar SZ
    J Diabetes Res, 2021;2021:5533802.
    PMID: 34007846 DOI: 10.1155/2021/5533802
    BACKGROUND: Adipocytokines participate in regulating the inflammatory response in glucose homeostasis and type 2 diabetes. However, among these peptides, the role of adipocyte-specific fatty-acid-binding protein (AFABP), chemerin, and secreted protein acidic and rich in cysteine (SPARC) in gestational diabetes (GDM) has not been fully investigated.

    METHOD: Maternal fasting levels of adipocytokines in 53 subjects with GDM and 43 normal pregnant women (NGDM) were measured using multiplex immunoassay at 24-28 weeks, before delivery, in the immediate postpartum period, and at 2-6 months postpuerperium.

    RESULTS: Higher levels of AFABP were associated with a 3.7-fold higher risk of GDM. Low chemerin levels were associated with a 3.6-fold higher risk of GDM. Interleukin-10 (IL-10) was inversely associated with the risk of GDM. SPARC had no association with GDM. AFABP was directly correlated to interleukin-6 (r = 0.50), insulin resistance index (r = 0.26), and body mass index (r = 0.28) and inversely correlated to C-reactive protein (r = -0.27). Chemerin levels were directly and strongly correlated with IL-10 (r = 0.41) and interleukin-4 (r = 0.50) and inversely correlated to insulin resistance index (r = -0.23) in GDM but not NGDM. In the longitudinal assessment, there were no significant differences in AFABP and chemerin concentrations of both studied groups.

    CONCLUSION: AFABP and chemerin were associated with a higher risk of GDM. These adipocytokines were related to insulin resistance, body mass index, and inflammation in pregnant women diagnosed with GDM.

    Matched MeSH terms: Time Factors
  17. Ling HS, Chung BK, Chua PF, Gan KX, Ho WL, Ong EYL, et al.
    BMC Cardiovasc Disord, 2020 12 07;20(1):511.
    PMID: 33287705 DOI: 10.1186/s12872-020-01793-7
    BACKGROUND: Data on clinical characteristics of acute decompensated heart failure (ADHF) in Malaysia especially in East Malaysia is lacking.

    METHODS: This is a prospective observational study in Sarawak General Hospital, Medical Department, from October 2017 to September 2018. Patients with primary admission diagnosis of ADHF were recruited and followed up for 90 days. Data on patient's characteristics, precipitating factors, medications and short-term clinical outcomes were recorded.

    RESULTS: The majority of the patients were in the lower socioeconomic group, and the mean age was 59 years. Hypertension, diabetes mellitus and dyslipidaemia were the common underlying comorbidities. Heart failure with ischemic aetiology was the commonest precipitating factor for ADHF admission. 48.6% of patients had HF with preserved ejection fraction, and the median NT-ProBNP level was 4230 pg/mL. The prescription rate of evidence-based heart failure medications was low. In-patient mortality and the average length of hospital stay were 7.5% and 5 days, respectively. 43% of patients required either ICU care or advanced cardiopulmonary support. The 30-day and 90-day mortality and readmission rates were 13.1%, 11.2%, 16.8% and 14%, respectively.

    CONCLUSION: Compared with HF data from the West and the Asia Pacific region, short-term mortality and readmission rates were high among the ADHF patients in our study cohort. Poor uptake of evidence-based HF prescribing and the higher prevalence of cardiovascular risk factors in younger patients were among the issues to be addressed to improve HF outcomes in regions with a similar socioeconomic background.

    Matched MeSH terms: Time Factors
  18. Tamin NSI, Razalli KA, Sallahuddin SN, Chan HK, Hassan MRA
    Cancer Epidemiol, 2020 12;69:101829.
    PMID: 32998070 DOI: 10.1016/j.canep.2020.101829
    INTRODUCTION: The immunochemical fecal occult blood test (iFOBT) has been widely used for opportunistic colorectal cancer (CRC) screening in average-risk individuals seeking care from public health clinics in Malaysia. This study provides a 5-year outcome evaluation of such a practice.

    METHODS: Findings for several outcome indicators, ranging from iFOBT uptake to the CRC and polyp detection rates, were generated from data contributed by 583 public health clinics between 2014 and 2018. Trends in their changes were also evaluated.

    RESULTS: The iFOBT uptake increased steadily over the years (p < 0.001), totaling 2.29% (n = 127,957) as of 2018. Nearly 10% (n = 11,872) of the individuals screened had a positive test result. Of those who underwent colonoscopy (n = 6,491), 4.04% (n = 262) and 13.93% (n = 904) were found to have CRC and polyps, respectively.
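
    The detection rates above follow directly from the reported counts; a minimal sketch (using only figures stated in the abstract) recomputes them:

```python
# Recompute the colonoscopy detection rates reported in the abstract.
colonoscopies = 6491   # individuals who underwent colonoscopy
crc_cases = 262        # colorectal cancer detected
polyp_cases = 904      # polyps detected

crc_rate = 100 * crc_cases / colonoscopies      # percentage with CRC
polyp_rate = 100 * polyp_cases / colonoscopies  # percentage with polyps

print(f"CRC detection rate: {crc_rate:.2f}%")      # 4.04%
print(f"Polyp detection rate: {polyp_rate:.2f}%")  # 13.93%
```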

    CONCLUSION: An uptrend in the CRC screening uptake was witnessed following the introduction of the iFOBT in public health clinics.

    Matched MeSH terms: Time Factors
  19. Said MA, Musarudin M, Zulkaffli NF
    Ann Nucl Med, 2020 Dec;34(12):884-891.
    PMID: 33141408 DOI: 10.1007/s12149-020-01543-x
    OBJECTIVE: 18F is the most extensively used radioisotope in current clinical PET imaging. This selection is based on several criteria: it is a pure PET radioisotope with an optimum half-life and a low positron energy, which contributes to a smaller positron range. In addition to 18F, radioisotopes such as 68Ga and 124I have recently gained much attention with the increasing interest in new PET tracers entering clinical trials. This study aims to determine the minimal scan time per bed position (Tmin) for 124I and 68Ga based on quantitative differences in PET imaging of 68Ga and 124I relative to 18F.

    METHODS: The European Association of Nuclear Medicine (EANM) procedure guidelines version 2.0 for FDG-PET tumor imaging were followed for this purpose. A NEMA2012/IEC2008 phantom was filled with a tumor-to-background ratio of 10:1, with activity concentrations of 30 kBq/ml ± 10% and 3 kBq/ml ± 10% for each radioisotope. The phantom was scanned using different acquisition times per bed position (1, 5, 7, 10, and 15 min) to determine Tmin, defined using an image coefficient of variation (COV) of 15%.

    RESULTS: The Tmin obtained for 18F, 68Ga, and 124I was 3.08, 3.24, and 32.93 min, respectively. Quantitative analyses of the 18F, 68Ga, and 124I images were performed; signal-to-noise ratio (SNR), contrast recovery coefficient (CRC), and visibility (VH) were the image quality parameters analysed. In general, 68Ga and 18F gave better image quality than 124I for all parameters studied.

    CONCLUSION: We have defined Tmin for 18F, 68Ga, and 124I PET/CT imaging based on NEMA2012/IEC2008 phantom imaging. Despite the long scan time suggested by Tmin, improved image quality is achieved, especially for 124I. In clinical practice, however, a long acquisition time may cause patient discomfort and motion artifacts.

    Matched MeSH terms: Time Factors
  20. Ghoreishi A, Arsang-Jang S, Sabaa-Ayoun Z, Yassi N, Sylaja PN, Akbari Y, et al.
    J Stroke Cerebrovasc Dis, 2020 Dec;29(12):105321.
    PMID: 33069086 DOI: 10.1016/j.jstrokecerebrovasdis.2020.105321
    BACKGROUND: The emergence of the COVID-19 pandemic has significantly impacted global healthcare systems, and this may affect stroke care and outcomes. This study examines the changes in stroke epidemiology and care during the COVID-19 pandemic in Zanjan Province, Iran.

    METHODS: This study is part of the CASCADE international initiative. From February 18, 2019, to July 18, 2020, we followed ischemic and hemorrhagic stroke hospitalization rates and outcomes in Valiasr Hospital, Zanjan, Iran. We used a Bayesian hierarchical model and an interrupted time series (ITS) analysis to identify changes in the stroke hospitalization rate, baseline stroke severity [measured by the National Institutes of Health Stroke Scale (NIHSS)], disability [measured by the modified Rankin Scale (mRS)], presentation time (last seen normal to hospital presentation), thrombolytic therapy rate, median door-to-needle time, length of hospital stay, and in-hospital mortality. We compared in-hospital mortality between study periods using a Cox regression model.

    RESULTS: During the study period, 1,026 stroke patients were hospitalized. Stroke hospitalization rates per 100,000 population decreased from 68.09 before the pandemic to 44.50 during the pandemic, with a significant decline in both the Bayesian [Beta: -1.034; Standard Error (SE): 0.22; 95% CrI: -1.48, -0.59] and ITS analyses (estimate: -1.03, SE = 0.24). Presentation time and door-to-needle time did not change during the pandemic, but a lower proportion of patients received thrombolysis (-10.1%; p = 0.004). We did not see significant changes in the stroke unit admission rate or in-hospital mortality; however, disability at discharge increased.
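
    The ITS level-change estimate reported above can be sketched with a minimal segmented-regression model. The data below are synthetic and the coefficients illustrative (not the study's): the model y_t = b0 + b1*t + b2*post_t fits a time trend plus a step term that is 1 after the interruption, and b2 captures the level change:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination. Stdlib only."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                       # forward elimination
        for row in range(col + 1, k):
            f = A[row][col] / A[col][col]
            A[row] = [a - f * c for a, c in zip(A[row], A[col])]
            b[row] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for row in range(k - 1, -1, -1):
        beta[row] = (b[row] - sum(A[row][c] * beta[c]
                                  for c in range(row + 1, k))) / A[row][row]
    return beta

interruption = 12  # month index of the interruption (e.g., pandemic onset)
# Synthetic monthly rates: gentle downward trend plus a -1.0 level drop.
y = [68.0 - 0.1 * t - (1.0 if t >= interruption else 0.0) for t in range(24)]
X = [[1.0, float(t), 1.0 if t >= interruption else 0.0] for t in range(24)]

b0, b1, b2 = ols(X, y)
print(f"Estimated level change: {b2:.2f}")  # recovers -1.00 on this noise-free data
```

    In practice the same model is usually fitted with a statistics package that also reports standard errors and p-values; the sketch only shows where the level-change coefficient comes from.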

    Matched MeSH terms: Time Factors