METHODS: This paper presents two hybrid methodologies that combine optimal control theory with multi-objective swarm and evolutionary algorithms, and compares their performance with that of multi-objective swarm intelligence and evolutionary algorithms such as MOEAD, MODE, MOPSO, and M-MOPSO. The hybrid and conventional methodologies are compared by addressing constrained multi-objective optimization problems (CMOOPs).
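To make the constrained multi-objective setting concrete, the following minimal Python sketch (not the authors' code; the dose variables, toy objectives, and toxicity constraint are illustrative assumptions) evaluates a bi-objective chemotherapy-scheduling problem, penalises constraint violations, and extracts the non-dominated (Pareto-optimal) schedules that algorithms such as MOEAD, MODE, MOPSO, and M-MOPSO search for.

```python
# Minimal sketch (not the authors' code): Pareto-front extraction for a
# constrained bi-objective chemotherapy-scheduling problem with a penalty
# for constraint violation. Objective and constraint forms are illustrative.
import numpy as np

rng = np.random.default_rng(0)
doses = rng.uniform(0.0, 1.0, size=(500, 10))       # candidate dose schedules

def objectives(d):
    tumour_burden = np.exp(-d.sum(axis=1))           # toy surrogate: more drug, less tumour
    total_drug = d.sum(axis=1)                        # toxicity proxy to be minimised
    return np.column_stack([tumour_burden, total_drug])

def constraint_violation(d):
    return np.maximum(d.max(axis=1) - 0.8, 0.0)       # toy per-dose toxicity limit

F = objectives(doses) + 1e3 * constraint_violation(doses)[:, None]  # penalised objectives

def pareto_mask(F):
    """True for points not dominated by any other point (minimisation)."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

front = doses[pareto_mask(F)]
print(f"{len(front)} non-dominated schedules out of {len(doses)}")
```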
RESULTS: The minimized tumor and drug concentration results obtained by the hybrid methodologies demonstrate that they are not only superior to pure swarm intelligence or evolutionary algorithm methodologies but also consume far less computational time. Further, the Second Order Sufficient Condition (SSC) is used to verify and validate the optimality conditions of the constrained multi-objective problem.
CONCLUSION: The proposed methodologies reduce chemo-medicine administration while maintaining effective tumor killing. This will help oncologists find the optimum chemotherapy dose schedule that reduces the tumor cells while keeping the patient's health at a safe level.
METHOD: The model was formulated by integrating the Caputo fractional derivative into the previous cancer treatment model. Thereafter, the linear-quadratic model with repopulation was coupled into the model to account for the decay of the cell populations due to radiation. The treatment process was then simulated with numerical variables, numerical parameters, and radiation parameters. The numerical parameters, which included the proliferation coefficients of the cells, the competition coefficients of the cells, and the perturbation constant of the normal cells, were obtained from previous literature. The radiation parameters and numerical variables were obtained from reported clinical data of six patients treated with radiotherapy. The patients had tumor volumes of 24.1 cm3, 17.4 cm3, 28.4 cm3, 18.8 cm3, 30.6 cm3, and 12.6 cm3, with fractionated doses of 2 Gy for the first two patients and 1.8 Gy for the other four. The initial tumor volumes were used to obtain the initial cell populations, after which the treatment process was simulated in MATLAB. Subsequently, a global sensitivity analysis was performed to corroborate the model with the clinical data. Finally, 96 radiation protocols were simulated using the biologically effective dose (BED) formula. These protocols were used to obtain a regression equation connecting the value of the Caputo fractional derivative with the fractionated dose.
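For reference, the standard definitions assumed here (the paper's specific coupling of these terms is not reproduced) are the Caputo fractional derivative of order $\alpha \in (0,1)$ and the biologically effective dose for $n$ fractions of dose $d$; note that $\alpha$ denotes the fractional order in the first expression, while $\alpha/\beta$ in the BED formula is the linear-quadratic radiosensitivity ratio of the tissue.

```latex
{}^{C}\!D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau ,
\qquad
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right).
```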
RESULTS: From the simulations, the final tumor volumes were 3.58 cm3, 8.61 cm3, 5.68 cm3, 4.36 cm3, 5.75 cm3, and 6.12 cm3, while the final normal-cell volumes were 23.87 cm3, 17.29 cm3, 28.17 cm3, 18.68 cm3, 30.33 cm3, and 12.55 cm3. The sensitivity analysis showed that the most sensitive model factors were the value of the Caputo fractional derivative and the proliferation coefficient of the cancer cells. Lastly, the obtained regression equation accounted for 99.14% of the variation in the predictions.
CONCLUSION: The model can simulate a cancer treatment process and predict the results of other radiation protocols.
METHODS: In this paper, we propose a novel approach to distinguish colonic polyps by integrating several techniques, including a modified deep residual network, principal component analysis, and AdaBoost ensemble learning. A powerful deep residual network, ResNet-50, was modified to reduce the computational time. To keep interference to a minimum, median filtering, image thresholding, contrast enhancement, and normalisation were applied to the endoscopic images used to train the classification model. Three publicly available datasets, i.e., Kvasir, ETIS-LaribPolypDB, and CVC-ClinicDB, which include images with and without polyps, were merged to train the model.
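A rough sketch of this kind of pipeline is shown below (not the authors' implementation; the filter sizes, CLAHE contrast step, 224x224 input size, and two-class ResNet-50 head are assumptions):

```python
# Illustrative sketch (not the authors' exact pipeline): preprocessing an
# endoscopic frame and attaching a binary (polyp / no-polyp) head to a
# ResNet-50 backbone. Filter sizes and the classifier head are assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def preprocess(bgr_image: np.ndarray) -> torch.Tensor:
    img = cv2.medianBlur(bgr_image, 5)                         # suppress specular noise
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)  # drop dark border regions
    img = cv2.bitwise_and(img, img, mask=mask)
    l, a, b = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2LAB))  # contrast enhancement (CLAHE on L)
    l = cv2.createCLAHE(clipLimit=2.0).apply(l)
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0  # normalisation
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)                  # polyp vs. non-polyp

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)   # synthetic stand-in frame
logits = model(preprocess(frame))
```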
RESULTS: The proposed approach, trained on the combination of the three datasets, achieved a Matthews correlation coefficient (MCC) of 0.9819, with accuracy, sensitivity, precision, and specificity of 99.10%, 98.82%, 99.37%, and 99.38%, respectively.
CONCLUSIONS: These results show that our method can reliably and automatically classify endoscopic images and could be used to develop effective computer-aided diagnostic tools for early CRC detection.
METHODS: Continuous raw PPG waveforms were blindly divided into segments of equal length (5 s), without leveraging any pulse location information, and were normalized using Z-score normalization. A 1-D-CNN was designed to automatically learn the intrinsic features of the PPG waveform and perform the required classification. Several training hyperparameters (initial learning rate and gradient threshold) were varied to investigate their effect on the performance of the network. Subsequently, the proposed network was trained and validated on 30 subjects and then tested on eight subjects from our local dataset. Moreover, two independent datasets downloaded from the PhysioNet MIMIC II database were used to evaluate the robustness of the proposed network.
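The segmentation and normalization steps, together with a reduced stand-in for the 1-D-CNN, might look as follows in Python (the sampling rate, layer count, and kernel sizes below are assumptions, not the paper's 13-layer design):

```python
# Minimal sketch (assumed architecture, not the paper's 13-layer network):
# 5-second PPG segments are Z-score normalised and fed to a small 1-D CNN
# that classifies each segment as clean vs. motion-artifact-corrupted.
import numpy as np
import torch
import torch.nn as nn

FS = 100                                   # assumed sampling rate (Hz)
SEG_LEN = 5 * FS                           # 5-second blind segments

def make_segments(ppg: np.ndarray) -> torch.Tensor:
    n = len(ppg) // SEG_LEN
    segs = ppg[: n * SEG_LEN].reshape(n, SEG_LEN)
    segs = (segs - segs.mean(axis=1, keepdims=True)) / (segs.std(axis=1, keepdims=True) + 1e-8)
    return torch.from_numpy(segs).float().unsqueeze(1)      # shape (n, 1, SEG_LEN)

class PPG1DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = PPG1DCNN()(make_segments(np.random.randn(60 * FS)))  # 60 s of synthetic PPG
```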
RESULTS: A 13-layer 1-D-CNN model was designed. On our local study dataset, the proposed network achieved a testing accuracy of 94.9%. Classification on the two independent datasets also achieved satisfactory accuracies of 93.8% and 86.7%, respectively. Our model achieved performance comparable to most reported works and showed potential for good generalization, as the proposed network was evaluated on multiple cohorts (overall accuracy of 94.5%).
CONCLUSION: This paper demonstrated the feasibility and effectiveness of applying blind signal processing and deep learning techniques to PPG motion artifact detection, whereby manual feature thresholding was avoided while a high generalization ability was still achieved.
METHODS: In this paper, we analyze four widespread deep learning models that output dense predictions for segmenting the three retinal fluid types in the RETOUCH challenge data. We aim to demonstrate how a patch-based approach can push the performance of each method. In addition, we evaluate the methods on the OPTIMA challenge dataset to assess how well the networks generalize. The analysis is divided into two parts: a comparison of the four approaches and an assessment of the significance of patching the images.
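A minimal illustration of the patch-based idea is the following Python helper, which slices a B-scan and its fluid mask into overlapping patches (the patch size and stride are assumed; this is not the challenge pipeline):

```python
# Illustrative patch extraction (assumed patch size/stride): splitting an
# OCT B-scan and its fluid mask into overlapping patches so a segmentation
# network sees a greater variety of shapes than with full-image input.
import numpy as np

def extract_patches(image: np.ndarray, mask: np.ndarray, size: int = 128, stride: int = 64):
    """Return aligned (image_patch, mask_patch) pairs from a 2-D B-scan."""
    patches = []
    h, w = image.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append((image[y:y + size, x:x + size],
                            mask[y:y + size, x:x + size]))
    return patches

bscan = np.random.rand(496, 512)                # synthetic B-scan
fluid_mask = (np.random.rand(496, 512) > 0.95)  # synthetic fluid labels
pairs = extract_patches(bscan, fluid_mask)
print(f"{len(pairs)} patches of size 128x128")
```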
RESULTS: The performance of the networks trained on the RETOUCH dataset exceeds human performance. The analysis further assessed the generalization of the best network by fine-tuning it, achieving a mean Dice similarity coefficient (DSC) of 0.85. Of the three fluid types, intraretinal fluid (IRF) is recognized best, with the highest DSC value of 0.922 achieved on the Spectralis dataset. Additionally, the highest average DSC score of 0.84 is achieved by the PaDeeplabv3+ model on the Cirrus dataset.
CONCLUSIONS: The proposed method segments the three retinal fluids with high DSC values. Fine-tuning the networks trained on the RETOUCH dataset makes them perform better and faster than when trained from scratch. Enriching the network input with a variety of shapes by extracting patches helped segment the fluids better than using full images.
METHODS: This review article discusses the experimental and computational methods used in the study of the human upper airway (HUA). The discussion includes the computational fluid dynamics (CFD) approach and the steps involved in the modeling used to investigate the flow characteristics of the HUA. The PubMed, Embase, Scopus, Cochrane Library, BioMed Central, and Web of Science databases were searched from inception to May 2020 to conduct a thorough investigation of the literature. No restrictions on publication language or study design were applied in the database searches. A total of 117 articles relevant to the topic under investigation were thoroughly and critically reviewed to give clear information about the subject. The article summarizes the review in terms of the methods used to study the HUA, the CFD approach in the HUA, and the application of CFD for predicting HUA obstruction, including the types of commercial CFD software used in this research area.
RESULTS: This review found that the human upper airway has been well studied through the application of computational fluid dynamics, which has considerably enhanced the understanding of flow in the HUA. In addition, it has assisted in making strategic and reasonable decisions regarding the adoption of treatment methods in clinical settings. The literature suggests that most studies of HUA simulation focused considerably on the aspects of fluid dynamics. However, there is a gap in the literature on the effects of fluid-structure interaction (FSI). The application of FSI to the HUA is still limited; as such, this could be a potential area for future research. Furthermore, the majority of researchers present the findings of their work through the mechanism of the airflow, such as velocity, pressure, and shear stress. This includes the use of the Navier-Stokes equations via CFD to help visualize the actual mechanism of the airflow, with the turbulent kinetic energy (TKE) expressed in the results to demonstrate the real behaviour of the airflow. Apart from that, key results such as wall shear stress (WSS) can be revealed via TKE and the turbulent energy dissipation (TED), which can be suggestive of wall injury and tissue collapsibility in the HUA.
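For context, the incompressible Navier-Stokes equations solved in such CFD studies, and the turbulent kinetic energy they report, take the standard forms below (notation assumed here: $\mathbf{u}$ velocity, $p$ pressure, $\rho$ density, $\mu$ dynamic viscosity, $u_i'$ velocity fluctuations):

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu \nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0,
\qquad k = \tfrac{1}{2}\,\overline{u_i' u_i'} .
```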
METHODS: The Casson fluid model was used to describe blood flowing under the influence of a uniformly distributed magnetic field and an oscillating pressure gradient. The governing fractional differential equations were expressed using the Caputo-Fabrizio fractional derivative without a singular kernel.
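The Caputo-Fabrizio derivative referred to here has the standard non-singular exponential kernel, with $M(\alpha)$ a normalization function satisfying $M(0)=M(1)=1$; the specific momentum equations of the model are not reproduced here:

```latex
{}^{CF}\!D_t^{\alpha} f(t)
  = \frac{M(\alpha)}{1-\alpha}\int_0^{t} f'(\tau)\,
    \exp\!\left(-\frac{\alpha\,(t-\tau)}{1-\alpha}\right) d\tau ,
\qquad 0 < \alpha < 1 .
```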
RESULTS: The analytical solutions for the velocities of the non-Newtonian model were then obtained by means of the Laplace and finite Hankel transforms and presented graphically. The results show that the velocity increases with the Reynolds number and the Casson parameter, and decreases as the Hartmann number increases.
CONCLUSIONS: Blood was treated as a non-Newtonian Casson fluid, and the MHD blood flow was accelerated by the pressure gradient. These findings are beneficial for studying atherosclerosis therapy and for the diagnosis and therapeutic treatment of some medical problems.
METHODS: A total of 1447 ultrasound images, including 767 benign masses and 680 malignant masses, were acquired from a tertiary hospital. A semi-supervised GAN model was developed to augment the breast ultrasound images. The synthesized images were subsequently used to classify breast masses using a convolutional neural network (CNN). The model was validated using 5-fold cross-validation.
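The 5-fold cross-validation scheme can be sketched as follows (the GAN and CNN themselves are not reproduced; a logistic-regression placeholder and synthetic arrays stand in for the classifier and the image data):

```python
# Sketch of the 5-fold cross-validation scheme only: a logistic-regression
# placeholder stands in for the breast-mass classifier, and the arrays below
# are synthetic stand-ins for the 1447 ultrasound images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(1447, 32 * 32)          # flattened ultrasound images (synthetic)
y = np.array([0] * 767 + [1] * 680)        # 0 = benign, 1 = malignant

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"5-fold accuracy: {np.mean(scores):.3f} ± {np.std(scores):.3f}")
```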
RESULTS: The proposed GAN architecture generated high-quality breast ultrasound images, as verified by two experienced radiologists. Semi-supervised learning improved the quality of the synthetic data compared with the baseline method. We achieved more accurate breast mass classification (accuracy 90.41%, sensitivity 87.94%, specificity 85.86%) with our synthetic data augmentation than with other state-of-the-art methods.
CONCLUSION: The proposed radiomics model has demonstrated a promising potential to synthesize and classify breast masses on ultrasound in a semi-supervised manner.
METHODS: Eight scientific databases were selected as appropriate sources, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was employed as the basis for conducting this systematic review and meta-analysis. In line with the main objective of this research, inclusion and exclusion criteria were applied to limit the investigation. To achieve a structured meta-analysis, all eligible articles were classified by author, publication year, journal or conference, applied fuzzy methods, main objectives of the research, problems and research gaps, tools used to model the fuzzy system, medical discipline, sample size, the inputs and outputs of the system, findings and results, and, finally, the impact of the applied fuzzy methods on improving diagnosis. We then analyzed the results of these classifications to show the effect of fuzzy methods in reducing the complexity of diagnosis.
RESULTS: The results of this study confirmed the effectiveness of applying different fuzzy methods in the disease diagnosis process and provide new insights for researchers into which diseases have received the most attention. This will help to identify the diagnostic aspects of medical disciplines that are being neglected.
CONCLUSIONS: Overall, this systematic review provides an appropriate platform for further research by identifying the research needs in the domain of disease diagnosis.
METHODS: An initial bibliometric analysis shows that the reviewed papers focused on the Electromyogram (EMG), Electroencephalogram (EEG), Electrocardiogram (ECG), and Electrooculogram (EOG). These four categories were used to structure the subsequent content review.
RESULTS: The content review showed that deep learning performs better than classical analysis and machine classification methods on large and varied datasets, as deep learning algorithms attempt to build the model using all of the available input.
CONCLUSIONS: This review describes the application of the various deep learning algorithms used to date; in the future, deep learning is expected to be applied to more healthcare areas to improve the quality of diagnosis.
METHODS: Retrospective data from 210 patients, of whom 123 had comorbid diabetes mellitus, were obtained from a general hospital in Malaysia for the period May 2014 to June 2015. The performance of the two blood glucose control protocols was compared by fitting the blood glucose data with a physiological model on top of virtual trial simulations, calculating the mean simulation error, and making several graphical comparisons using stochastic modelling.
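As a very rough illustration of what a virtual trial simulation involves (the toy one-compartment glucose-insulin model below and all its parameter values are assumptions, not the clinically validated model used in the study), one could simulate a fixed insulin and nutrition input over 24 h:

```python
# Highly simplified virtual-trial sketch (not the study's physiological model):
# a toy glucose-insulin ODE is simulated under constant insulin and nutrition
# inputs to illustrate how protocol simulations are run in silico.
import numpy as np
from scipy.integrate import solve_ivp

p_G, S_I, n = 0.006, 2e-4, 0.16     # assumed glucose decay, insulin sensitivity, insulin clearance (1/min)
V_G, V_I = 13.3, 4.0                # assumed distribution volumes (L)

def virtual_patient(t, y, insulin_rate, feed_rate):
    G, I = y                                         # glucose (mmol/L), insulin (mU/L)
    dG = -p_G * G - S_I * G * I + feed_rate / V_G
    dI = -n * I + insulin_rate / V_I
    return [dG, dI]

sol = solve_ivp(virtual_patient, (0, 24 * 60), y0=[12.0, 10.0],
                args=(2.0, 0.7), max_step=1.0)       # 24 h with fixed insulin (mU/min) and feed (mmol/min)
print(f"final blood glucose: {sol.y[0, -1]:.1f} mmol/L")
```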
RESULTS: The Stochastic Targeted Blood Glucose Control Protocol reduces hyperglycaemia by 16% in the diabetic cohort and 9% in the nondiabetic cohort. The protocol helps to control blood glucose within the targeted range of 4.0-10.0 mmol/L for 71.8% in the diabetic cohort and 82.7% in the nondiabetic cohort, while reducing treatment hours by up to 71 h for the 123 diabetic patients and 39 h for the 87 nondiabetic patients.
CONCLUSION: It is concluded that the Stochastic Targeted Blood Glucose Control Protocol is more effective at reducing hyperglycaemia than the current blood glucose management protocol used in the Malaysian intensive care unit. Hence, the current Malaysian intensive care unit protocols need to be modified to enhance their performance, especially by integrating insulin and nutrition interventions to decrease the incidence of hyperglycaemia. The Stochastic Targeted Blood Glucose Control Protocol itself must also be improved, in terms of its u_en (endogenous insulin secretion) model, to adapt it to the diabetic cohort.