Electronic health records (EHRs) are an increasingly important source of information for healthcare professionals and researchers. However, EHRs are often fragmented, unstructured, and difficult to analyze due to the heterogeneity of the data sources and the sheer volume of information. Knowledge graphs have emerged as a powerful tool for capturing and representing complex relationships within large datasets. In this study, we explore the use of knowledge graphs to capture and represent complex relationships within EHRs. Specifically, we address the following research question: can a knowledge graph built from the MIMIC-III dataset with GraphDB effectively capture semantic relationships within EHRs and enable more efficient and accurate data analysis? We map the MIMIC-III dataset to an ontology using text refinement and Protégé; we then create a knowledge graph in GraphDB and use SPARQL queries to retrieve and analyze information from the graph. Our results demonstrate that knowledge graphs can effectively capture semantic relationships within EHRs, enabling more efficient and accurate data analysis, and we provide examples of how our implementation can be used to analyze patient outcomes and identify potential risk factors. Our implementation provides valuable insights into patient outcomes and potential risk factors, contributing to the growing body of literature on the use of knowledge graphs in healthcare. In particular, our study highlights the potential of knowledge graphs to support decision-making and improve patient outcomes by enabling a more comprehensive and holistic analysis of EHR data. Overall, our research contributes to a better understanding of the value of knowledge graphs in healthcare and lays the foundation for further research in this area.
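The kind of SPARQL-style retrieval described above can be sketched, in miniature, as pattern matching over subject–predicate–object triples. The sketch below is purely illustrative: the entity and relation names are hypothetical and do not reflect the actual MIMIC-III ontology or GraphDB configuration used in the study.

```python
# Minimal sketch of SPARQL-style triple-pattern matching over an
# in-memory set of (subject, predicate, object) triples.
# All names below are hypothetical, not the study's actual schema.

triples = {
    ("patient:001", "hasDiagnosis", "dx:sepsis"),
    ("patient:001", "hasAdmission", "adm:1001"),
    ("patient:002", "hasDiagnosis", "dx:sepsis"),
    ("adm:1001", "admittedTo", "unit:ICU"),
}

def query(pattern):
    """Match a (s, p, o) pattern; None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which patients have a sepsis diagnosis? Analogous to:
#   SELECT ?patient WHERE { ?patient :hasDiagnosis dx:sepsis }
sepsis_patients = sorted(t[0] for t in query((None, "hasDiagnosis", "dx:sepsis")))
```

A production system like GraphDB adds indexing, reasoning, and the full SPARQL grammar on top of this basic pattern-matching idea.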
Cell counting in fluorescence microscopy is an essential task in biomedical research for analyzing cellular dynamics and studying disease progression. Traditional methods for cell counting involve manual counting or threshold-based segmentation, which are time-consuming and prone to human error. Recently, deep learning-based object detection methods have shown promising results in automating cell counting tasks. However, the existing methods mainly focus on segmentation-based techniques that require a large amount of labeled data and extensive computational resources. In this paper, we propose a novel approach to detect and count cells of multiple sizes in a fluorescence image slide using You Only Look Once version 5 (YOLOv5) with a feature pyramid network (FPN). Our proposed method can efficiently detect cells of different sizes in a single image, eliminating the need for pixel-level segmentation. We show that our method outperforms state-of-the-art segmentation-based approaches in terms of accuracy and computational efficiency. The experimental results on publicly available datasets demonstrate that our proposed approach achieves an average precision of 0.8 and a processing time of 43.9 ms per image. Our approach addresses the research gap in the literature by providing a more efficient and accurate method for cell counting in fluorescence microscopy that requires fewer computational resources and less labeled data.
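Detection-based counting and metrics such as average precision rest on matching predicted bounding boxes to ground-truth boxes by intersection-over-union (IoU). The following is a minimal sketch of that matching step, not the paper's YOLOv5 pipeline; the box coordinates and the 0.5 threshold are illustrative assumptions.

```python
# Illustrative IoU computation and greedy box matching, as used when
# evaluating detection-based cell counting. Boxes are (x1, y1, x2, y2);
# the example values are made up, not from the study's datasets.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def count_matches(preds, gts, thr=0.5):
    """Greedily match each prediction to an unused ground-truth box at an IoU threshold."""
    matched, used = 0, set()
    for p in preds:
        for i, g in enumerate(gts):
            if i not in used and iou(p, g) >= thr:
                used.add(i)
                matched += 1
                break
    return matched
```

Matched predictions count as true positives; unmatched predictions and unmatched ground-truth boxes become the false positives and false negatives from which precision–recall curves, and hence average precision, are computed.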
Atrial fibrillation (AF) is a prevalent cardiac arrhythmia that poses significant health risks to patients. The use of non-invasive methods for AF detection, such as electrocardiography (ECG) and photoplethysmography (PPG), has gained attention due to their accessibility and ease of use. However, there are challenges associated with ECG-based AF detection, and the significance of PPG signals in this context has been increasingly recognized. Taking into account the limitations of ECG and the untapped potential of PPG, this work attempts to classify AF and non-AF using PPG time series data and deep learning. We employed a hybrid deep neural network comprising a 1D CNN and a BiLSTM for the task of AF classification. We addressed the under-researched area of applying deep learning methods to transmissive PPG signals by proposing a novel approach that integrates ECG and PPG signals as multi-featured time series data and trains deep learning models for AF classification. Our hybrid 1D CNN and BiLSTM model achieved an accuracy of 95% on test data in identifying atrial fibrillation, showcasing its strong performance and reliable predictive capabilities. We also evaluated the model with additional metrics: the precision of our classification model was 0.88, indicating its ability to correctly identify true positive AF cases; the recall (sensitivity) was 0.85, illustrating the model's capacity to detect a high proportion of actual AF cases; and the F1 score, which combines precision and recall, was 0.84, highlighting the overall effectiveness of our model in classifying AF and non-AF cases.
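The precision, recall, and F1 metrics reported above are derived from confusion-matrix counts. As a minimal sketch (the counts below are made-up illustrations, not the study's actual test results):

```python
# Precision, recall, and F1 from confusion-matrix counts.
# The example counts are illustrative, not the study's data.

def classification_metrics(tp, fp, fn):
    """Compute precision, recall, and F1 from true positives,
    false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical example: 8 true positives, 2 false positives, 2 false negatives.
precision, recall, f1 = classification_metrics(8, 2, 2)
```

F1 is the harmonic mean of precision and recall, so it penalizes a model whose two scores are imbalanced more heavily than a simple average would.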