OBJECTIVE: The aim was to present a model of CT-MRI registration for diagnosing liver cancer, specifically to improve the quality of liver images and provide the information required for earlier tumor detection. The method should concurrently address the issues of imaging procedures for liver cancer to speed up tumor detection from both modalities.
METHODS: In this work, a registration scheme for fusing CT and MRI liver images is studied. A feature point-based method with normalized cross-correlation is used to aid in the diagnosis of liver cancer and provide multimodal information to physicians. Data on ten patients were obtained from an online database. For each dataset, three planar views from both modalities were interpolated and registered using feature point-based methods. The registration algorithms were implemented in MATLAB (vR2019b, MathWorks, Natick, USA) on an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz computer. The accuracy of the registered images was validated qualitatively and quantitatively.
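The abstract does not give implementation details of the feature point-based matching, so the following is only an illustrative sketch: for a feature point in the fixed image, the best-matching location in the moving image is found by maximizing normalized cross-correlation over a small search window. Patch and search sizes are assumptions, not values from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_point(fixed, moving, pt, patch=3, search=5):
    """For a feature point `pt` (row, col) in `fixed`, find the
    best-matching location in `moving` within a +/-`search` pixel
    window by maximizing NCC over (2*patch+1)^2 patches."""
    y, x = pt
    ref = fixed[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_ncc = pt, -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = moving[yy - patch:yy + patch + 1,
                          xx - patch:xx + patch + 1]
            if cand.shape != ref.shape:
                continue  # window fell off the image border
            s = ncc(ref, cand)
            if s > best_ncc:
                best_ncc, best = s, (yy, xx)
    return best, best_ncc
```

Matching several such points between the CT and MRI slices would then yield correspondences from which a translation (or a richer transform) can be estimated.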
RESULTS: The results show that accurate registration was obtained with minimal distance errors; the registered CT and MRI images were confirmed as accurate by expert validation. The RMSE for translation ranges from 0.02 to 1.01, equivalent in magnitude to approximately 0 to 5 pixels at the resolution of the CT and registered images.
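The quantitative validation reported above can be computed as the root-mean-square error over corresponding landmark pairs; a minimal sketch (the landmark sets themselves are assumed inputs, not data from the paper):

```python
import numpy as np

def rmse(pts_fixed, pts_registered):
    """Root-mean-square error over corresponding landmark pairs,
    in the same units as the coordinates (e.g. pixels)."""
    d = np.asarray(pts_fixed, float) - np.asarray(pts_registered, float)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))
```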
CONCLUSION: The CT-MRI registration scheme can provide complementary information on liver cancer to physicians, thus improving the diagnosis and treatment planning process.
INTRODUCTION: Magnetic resonance imaging is a useful technique for visualizing soft tissues within the knee joint. Cartilage delineation in magnetic resonance (MR) images helps in understanding disease progression. Convolutional neural networks (CNNs) have shown promising results in computer vision tasks, and various encoder-decoder-based segmentation networks have been introduced in the last few years. However, the performance of such networks in the context of cartilage delineation is unknown.
METHODS: This study trained and compared 10 encoder-decoder-based CNNs on cartilage delineation from knee MR images. The knee MR images were obtained from the Osteoarthritis Initiative (OAI). The benchmarking process compared the CNNs on physical specifications and segmentation performance.
RESULTS: LadderNet has the fewest trainable parameters, with a model size of 5 MB. UNetVanilla achieved the best performance, scoring 0.8369, 0.9108, and 0.9097 on the Jaccard similarity coefficient (JSC), Dice similarity coefficient (DSC), and Matthews correlation coefficient (MCC), respectively.
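The three metrics reported here (JSC, DSC, MCC) can all be derived from the confusion counts of a binary segmentation mask; a minimal sketch, assuming masks with values in {0, 1}:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Jaccard (JSC), Dice (DSC), and Matthews correlation (MCC)
    for binary segmentation masks."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    jsc = tp / (tp + fp + fn)
    dsc = 2 * tp / (2 * tp + fp + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float(jsc), float(dsc), float(mcc)
```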
CONCLUSION: UNetVanilla can serve as a benchmark for cartilage delineation in knee MR images, while LadderNet serves as an alternative when hardware is limited during production.
MATERIALS AND METHODS: We propose a mixed-method study of mental health assessment that combines psychological questionnaires with facial emotion analysis to comprehensively evaluate the mental health of students on a large scale. The Depression Anxiety and Stress Scale-21 (DASS-21) is used as the psychological questionnaire. The facial emotion recognition model is implemented by transfer learning based on neural networks and is pre-trained using the FER2013 and CFEE datasets. The FER2013 dataset consists of 35,887 48 × 48-pixel grayscale face images; the CFEE dataset contains 950,000 facial images with annotated action units (AUs). Using a random sampling strategy, we sent online questionnaires to 400 college students and received 374 responses, a response rate of 93.5%. After pre-processing, 350 results were available, including 187 male and 153 female students. First, the facial emotion data of students were collected in an online questionnaire test. Then, a pre-trained model was used for emotion recognition. Finally, the online psychological questionnaire scores and the facial emotion recognition model scores were collated to give a comprehensive psychological evaluation score.
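The abstract does not say how the questionnaire and emotion-model scores are collated, so the following is only a hypothetical sketch of the final step: a weighted combination of a normalized DASS-21 score and a facial emotion model score. The weight and the normalization of both inputs to [0, 1] are illustrative assumptions, not details from the paper.

```python
def comprehensive_score(dass21_score, emotion_score, w_questionnaire=0.6):
    """Hypothetical comprehensive psychological evaluation score:
    a weighted mean of a normalized DASS-21 score and a facial
    emotion model score, both assumed to lie in [0, 1]."""
    w = w_questionnaire
    return w * dass21_score + (1 - w) * emotion_score
```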
RESULTS: The experimental results of the proposed facial emotion recognition model show that its classification results are broadly consistent with the mental health survey results. The model can be used to improve screening efficiency. In particular, the accuracy of the facial emotion recognition model proposed in this paper is higher than that of the general mental health model, which uses only a traditional single questionnaire. Furthermore, the absolute errors of this study for the symptoms of depression, anxiety, and stress are lower than those of other mental health surveys, at only 0.8%, 8.1%, 3.5%, and 1.8%.
CONCLUSION: The mixed method, combining intelligent methods and scales for mental health assessment, has high recognition accuracy. It can therefore support efficient large-scale screening of students' psychological problems.
AIMS: A variation of anisotropic diffusion is proposed that can reduce speckle noise without compromising the image quality of edges and other important details.
METHODS: In this technique, four gradient thresholds were adopted instead of one. A new diffusivity function that preserves the edges of the resultant image is also proposed. To automatically terminate the iterative procedure, the mean absolute error (MAE) was implemented as the stopping criterion.
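The four-threshold scheme and the new diffusivity function are not specified in the abstract; as an illustration of the general approach, here is the classical Perona-Malik diffusion with the same MAE stopping rule. All parameter values, and the exponential diffusivity, are assumptions for the sketch, not the paper's method.

```python
import numpy as np

def anisotropic_diffusion(img, kappa=30.0, lam=0.2, mae_tol=1e-3,
                          max_iter=100):
    """Classical Perona-Malik diffusion with an MAE stopping rule:
    iterate until the mean absolute change between successive images
    falls below `mae_tol`. Uses periodic borders via np.roll for
    brevity; lam <= 0.25 keeps the 4-neighbor scheme stable."""
    u = np.asarray(img, float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-preserving diffusivity
    for _ in range(max_iter):
        # finite differences toward the four cardinal neighbors
        n = np.roll(u, -1, axis=0) - u
        s = np.roll(u, 1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u, 1, axis=1) - u
        u_new = u + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
        mae = np.mean(np.abs(u_new - u))
        u = u_new
        if mae < mae_tol:
            break  # automatic termination per the MAE criterion
    return u
```

The proposed variant would replace the single `kappa` with four gradient thresholds and substitute its own diffusivity for `g`.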
RESULTS: Numerical results obtained by simulation consistently indicate that the proposed method outperforms conventional speckle reduction techniques. Nevertheless, this preliminary study was conducted on a small number of asymptomatic subjects.
CONCLUSION: Future work must investigate the feasibility of this method in a large cohort and its clinical validity through testing subjects with a symptomatic cartilage injury.
METHODS: Two detection architectures, the Single Shot Multibox Detector (SSD) and the Faster Region-based Convolutional Neural Network (Faster R-CNN), with various feature extractors, were trained on echocardiography images from 33 patients. The models were then tested on 10 echocardiography videos.
RESULTS: Faster R-CNN Inception v2 showed the highest accuracy (98.6%), followed closely by SSD MobileNet v2. In terms of speed, SSD MobileNet v2 incurred a 46.81% loss in frames per second (fps) during real-time detection but still performed better than the other neural network models. Additionally, SSD MobileNet v2 used the least GPU resources, while CPU usage was relatively similar across all models.
CONCLUSION: Our findings provide a foundation for applying convolutional detection systems to echocardiography for medical purposes.