Mobile robots often have to discover a collision-free path towards a specific goal point in their environment. We resolve this path-planning problem iteratively by means of a numerical technique. It builds on the potential-field method, which relies on Laplace's equation in the mobile robot's configuration space to constrain the generation of a potential function over regions. This paper proposes an iterative approach to solving the robot path-finding problem known as Accelerated Over-Relaxation (AOR). The experiments show that the suggested approach can establish a smooth path between the starting and goal points by employing a finite-difference technique. The simulation results also show that this numerical approach achieves a faster solution with a smoother path than the previous work.
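As an illustration of the scheme described above, the following sketch solves Laplace's equation on a discrete occupancy grid with an AOR-style update and then follows the steepest descent of the harmonic potential to the goal. The grid size, relaxation factors and stopping rule are illustrative assumptions, not the paper's exact formulation.

```python
def aor_laplace(obstacle, goal, omega=1.2, r=1.1, iters=2000):
    """Relax a harmonic potential with an Accelerated Over-Relaxation sweep.
    obstacle: 2-D list of bools (True = wall, fixed at potential 1.0);
    goal: (row, col) cell fixed at potential 0.0 (the global minimum)."""
    h, w = len(obstacle), len(obstacle[0])
    u = [[1.0] * w for _ in range(h)]
    u[goal[0]][goal[1]] = 0.0
    for _ in range(iters):
        old = [row[:] for row in u]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if obstacle[i][j] or (i, j) == goal:
                    continue
                gs = u[i - 1][j] + u[i][j - 1]      # already-updated neighbours
                ja = old[i - 1][j] + old[i][j - 1]  # their previous values
                rest = old[i + 1][j] + old[i][j + 1]
                # AOR update; SOR is the special case r == omega
                u[i][j] = ((1 - omega) * old[i][j]
                           + (omega / 4.0) * (ja + rest)
                           + (r / 4.0) * (gs - ja))
    return u

def descend(u, start, goal, max_steps=200):
    """Follow the steepest 4-neighbour descent of the potential to the goal.
    A harmonic potential has no local minima in free space, so the walk
    cannot get stuck before reaching the goal."""
    path, cell = [start], start
    for _ in range(max_steps):
        if cell == goal:
            break
        i, j = cell
        cell = min(((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)),
                   key=lambda c: u[c[0]][c[1]])
        path.append(cell)
    return path
```

At a converged fixed point the update reduces to each free cell being the average of its four neighbours, i.e. the discrete Laplace equation, which is what guarantees the smooth, minimum-free path the abstract describes.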
The active appearance model (AAM) is one of the most popular model-based approaches and has been used extensively to extract features through highly accurate modeling of human faces under various physical and environmental circumstances. However, fitting the model to an original image is a challenging task. The state of the art shows that optimization methods are applicable to this problem, although applying them introduces difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of the AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, a proprietary 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of recognition accuracy.
Segmentation of blood leucocytes in medical images is regarded as a difficult process due to the variability of blood cells in shape and size and the difficulty of determining the location of the leucocytes. Manual analysis of blood tests to recognize leucocytes is tedious, time-consuming and error-prone because of the varied morphological components of the cells. Segmentation of medical imagery is considered difficult because of the complexity of the images, and because no available leucocyte model entirely captures the probable shapes of each structure while also accounting for cell overlapping, the expansive variety of blood cells in shape and size, the various elements influencing the outer appearance of blood leucocytes, and the low contrast of static microscope images, in addition to issues arising from noise. We suggest a strategy for segmenting blood leucocytes in static microscope images that combines three prevailing computer vision techniques: image enhancement, support vector machine (SVM)-based segmentation, and filtering out of non-ROI (region of interest) regions on the basis of local binary patterns (LBP) and texture features. Each of these strategies is adapted to the blood leucocyte segmentation problem, so the resulting technique is considerably more robust than its individual components. Finally, we assess the framework by comparing its output with manual segmentation. The findings of this study yield a new approach that automatically segments blood leucocytes and identifies them in static microscope images. Initially, the method uses a trainable segmentation procedure and a trained support vector machine classifier to accurately identify the position of the ROI. A filtering step based on histogram analysis is then proposed to discard non-ROI regions and choose the right object.
Finally, the blood leucocyte type is identified using texture features. The performance of the proposed approach was evaluated by comparing the system against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to the manual segmentation method for accurately determining the ROI. We evaluated blood leucocyte identification using the ROI texture (LBP features). The identification accuracy of the technique is about 95.3%, with 100% sensitivity and 91.66% specificity.
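For reference, accuracy, sensitivity and specificity follow directly from confusion-matrix counts. The counts in the usage example below are hypothetical, chosen only to produce figures of the same order as those reported, not the study's actual tallies.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic measures from confusion-matrix counts:
    tp/fn count leucocytes correctly found / missed,
    tn/fp count non-leucocyte regions correctly rejected / false alarms."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# hypothetical counts: 40 hits, 0 misses, 55 correct rejections, 5 false alarms
acc, sens, spec = diagnostic_metrics(tp=40, fp=5, tn=55, fn=0)
```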
Tacit knowledge of health-care experts is an important source of experiential know-how, yet due to various operational and technical reasons, such health-care knowledge is not entirely harnessed and put into professional practice. Emerging knowledge-management (KM) solutions suggest strategies to acquire the seemingly intractable and nonarticulated tacit knowledge of health-care experts. This paper presents a KM methodology, together with its computational implementation, to 1) acquire the tacit knowledge possessed by health-care experts; 2) represent the acquired tacit health-care knowledge in a computational formalism--i.e., clinical scenarios--that allows the reuse of stored knowledge to acquire tacit knowledge; and 3) crystallize the acquired tacit knowledge so that it is validated for health-care decision-support and medical education systems.
Worldwide healthcare delivery trends are undergoing a subtle paradigm shift--patient centered services as opposed to provider centered services and wellness maintenance as opposed to illness management. In this paper we present a Tele-Healthcare project TIDE--Tele-Healthcare Information and Diagnostic Environment. TIDE manifests an 'intelligent' healthcare environment that aims to ensure lifelong coverage of person-specific health maintenance decision-support services--i.e., both wellness maintenance and illness management services--ubiquitously available via the Internet/WWW. Taking on an all-encompassing health maintenance role--spanning from wellness to illness issues--the functionality of TIDE involves the generation and delivery of (a) Personalized, Pro-active, Persistent, Perpetual, and Present wellness maintenance services, and (b) remote diagnostic services for managing noncritical illnesses. Technically, TIDE is an amalgamation of diverse computer technologies--Artificial Intelligence, Internet, Multimedia, Databases, and Medical Informatics--to implement a sophisticated healthcare delivery infostructure.
Electronic patient records (EPR) can be regarded as an implicit source of clinical behaviour and problem-solving knowledge, systematically compiled by clinicians. We present an approach, together with its computational implementation, to pro-actively transform XML-based EPR into specialised Clinical Cases (CC) in the realm of Medical Case Base Systems. The 'correct' transformation of EPR to CC involves structural, terminological and conceptual standardisation, which is achieved by a confluence of techniques and resources, such as XML, UMLS (meta-thesaurus) and medical knowledge ontologies. We present below the functional architecture of a Medical Case-Base Reasoning Info-Structure (MCRIS) that features two distinct, yet related, functionalities: (1) a generic medical case-based reasoning system for decision-support activities; and (2) an EPR-CC transformation system to transform typical EPRs into CCs.
The 21st century promises to usher in an era of Internet based healthcare services--Tele-Healthcare. Such services accord well with the on-going paradigm shift in healthcare delivery patterns, i.e. patient centred services as opposed to provider centred services and wellness maintenance as opposed to illness management. This paper presents a Tele-Healthcare info-structure TIDE--an 'intelligent' wellness-oriented healthcare delivery environment. TIDE incorporates two WWW-based healthcare systems: (1) AIMS (Automated Health Monitoring System) for wellness maintenance and (2) IDEAS (Illness Diagnostic & Advisory System) for illness management. Our proposal comes from an attempt to rethink the sources of possible leverage in improving healthcare; vis-à-vis the provision of a continuum of personalised home-based healthcare services that emphasise the role of the individual in self health maintenance.
Presently, there is a growing demand from the healthcare community to leverage and transform the vast quantities of healthcare data into value-added, 'decision-quality' knowledge, vis-à-vis strategic knowledge services oriented towards healthcare management and planning. To meet this end, we present a Strategic Knowledge Services Info-structure that leverages existing healthcare knowledge/data bases to derive decision-quality knowledge--knowledge that is extracted from healthcare data through services akin to knowledge discovery in databases and data mining.
Tuberculosis (TB) remains one of the most devastating infectious diseases, and its treatment efficiency is largely influenced by the stage at which infection with the TB bacterium is diagnosed. The available methods for TB diagnosis are either time-consuming, costly, or inefficient. This study employs a signal-generation mechanism for biosensing, known as Plasmonic ELISA, together with computational intelligence to facilitate automatic diagnosis of TB. Plasmonic ELISA enables the detection of a few molecules of analyte by incorporating smart nanomaterials for better sensitivity of the developed detection system. The computational system uses k-means clustering and thresholding for image segmentation. This paper presents the classification performance obtained on the Plasmonic ELISA imaging data with various types of classifiers. The five-fold cross-validation results show a high accuracy rate (>97%) in classifying TB images using the entire data set. Future work will focus on developing an intelligent mobile-enabled expert system to diagnose TB in real time. The intelligent system will be clinically validated and tested in collaboration with healthcare providers in Malaysia.
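The k-means-plus-thresholding segmentation step mentioned above can be sketched in one dimension over pixel intensities. The two-cluster simplification, the sample image and the midpoint threshold rule are illustrative assumptions, not the study's exact pipeline.

```python
def kmeans_threshold(pixels, iters=50):
    """Two-cluster 1-D k-means on intensities; returns a binary threshold
    halfway between the final cluster centroids."""
    c_lo, c_hi = min(pixels), max(pixels)      # extreme-value initialisation
    for _ in range(iters):
        lo = [p for p in pixels if abs(p - c_lo) <= abs(p - c_hi)]
        hi = [p for p in pixels if abs(p - c_lo) > abs(p - c_hi)]
        if lo:
            c_lo = sum(lo) / len(lo)           # recompute centroids
        if hi:
            c_hi = sum(hi) / len(hi)
    return (c_lo + c_hi) / 2.0

def segment(image, threshold):
    """Label each pixel: 1 = bright foreground (e.g. analyte signal),
    0 = background."""
    return [[1 if p >= threshold else 0 for p in row] for row in image]
```

In practice the threshold would be applied per colour channel of the ELISA image before the segmented regions are passed to the classifiers.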
Leachate is one of the main surface water pollution sources in Selangor State (SS), Malaysia. The prediction of leachate amounts is fundamental to sustainable waste management and to leachate treatment before discharge to the surrounding environment. In developing countries, accurate evaluation of leachate generation rates has often been considered a challenge due to the lack of reliable data and high measurement costs. Leachate generation is related to several factors, including meteorological conditions, waste generation rates, and landfill design conditions. The high variation in these factors complicates the leachate modeling process. This study aims at identifying the key elements contributing to leachate production and developing various AI-based models to predict leachate generation rates. These models included Artificial Neural Network (ANN) multilayer perceptron (MLP) models with single and double hidden layers, and support vector machine (SVM) regression time-series algorithms. Various performance measures were applied to evaluate the developed models' accuracy. In this study, the input optimization process showed that three inputs were sufficient for modeling leachate generation rates, namely dumped waste quantity, rainfall level, and emanated gases. The initial performance analysis showed that the ANN-MLP2 model, which applies two hidden layers, achieved the best performance, followed by the ANN-MLP1 model, which applies one hidden layer and three inputs, while the SVM model gave the lowest performance. The ranges and frequency of relative error (RE%) also demonstrate that the ANN-MLP models outperformed the SVM models. Furthermore, low- and peak-flow criterion (LFC and PFC) assessment of leachate inflow values showed that the ANN-MLP model with two hidden layers produced more accurate values than the other models.
Since minimizing data collection and processing efforts, as well as modeling complexity, is critical in the hydrological modeling process, the applied input optimization process and the developed models were able to provide good performance in modeling leachate generation efficiently.
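To make the model family concrete, here is a minimal forward pass for the two-hidden-layer variant (ANN-MLP2) with the study's three optimized inputs (dumped waste quantity, rainfall level, emanated gases), plus the RE% measure used above. The layer sizes, sigmoid activation and weights are illustrative assumptions, not the trained model.

```python
import math

def mlp2_forward(x, W1, b1, W2, b2, w_out, b_out):
    """Forward pass of a two-hidden-layer MLP regressor:
    3 inputs -> hidden layer 1 -> hidden layer 2 -> leachate rate."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    h1 = [sig(sum(w * xi for w, xi in zip(row, x)) + b)
          for row, b in zip(W1, b1)]
    h2 = [sig(sum(w * hi for w, hi in zip(row, h1)) + b)
          for row, b in zip(W2, b2)]
    # linear output unit for regression
    return sum(w * hi for w, hi in zip(w_out, h2)) + b_out

def relative_error(pred, obs):
    """RE% between a predicted and an observed leachate generation rate."""
    return 100.0 * abs(pred - obs) / obs
```

In the study's setting the weights would be fitted by backpropagation against measured leachate inflow, and RE% frequencies compared across the ANN-MLP and SVM models.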
A quantitative structure-human intestinal absorption relationship was developed using artificial neural network (ANN) modeling. A set of 86 drug compounds and their experimentally derived intestinal absorption values was gathered from the literature, and a total of 57 global molecular descriptors, including constitutional, topological, chemical, geometrical and quantum chemical descriptors, were calculated for each compound. A supervised network with a radial basis transfer function was used to correlate the calculated molecular descriptors with the experimentally derived measures of human intestinal absorption. A genetic algorithm was then used to select the important molecular descriptors. Intestinal absorption values (IA%) were used as the ANN's output and the calculated molecular descriptors as the inputs. The best genetic neural network (GNN) model, with 15 input descriptors, was chosen, and the significance of the selected descriptors for intestinal absorption was examined. Results obtained with the developed model indicate that lipophilicity, conformational stability and intermolecular interactions (polarity and hydrogen bonding) have the largest impact on intestinal absorption.
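The genetic-algorithm descriptor selection step can be sketched as follows. As an assumption for brevity, the fitness here is a simple filter score (sum of absolute Pearson correlations with the target) standing in for the neural-network cross-validation error the GNN actually uses; all names and parameters are illustrative.

```python
import random

def ga_select_descriptors(X, y, n_select=2, pop_size=20, gens=30, seed=1):
    """Evolve fixed-size descriptor subsets; each individual is a set of
    column indices into X, scored by correlation with the target y."""
    rnd = random.Random(seed)
    n = len(X[0])

    def corr(j):  # |Pearson r| of descriptor column j vs. y
        xs = [row[j] for row in X]
        mx, my = sum(xs) / len(xs), sum(y) / len(y)
        num = sum((a - mx) * (b - my) for a, b in zip(xs, y))
        den = (sum((a - mx) ** 2 for a in xs)
               * sum((b - my) ** 2 for b in y)) ** 0.5
        return abs(num / den) if den else 0.0

    score = [corr(j) for j in range(n)]
    fit = lambda mask: sum(score[j] for j in mask)

    def repair(genes):  # enforce exactly n_select distinct descriptors
        genes = set(genes)
        while len(genes) > n_select:
            genes.remove(rnd.choice(sorted(genes)))
        while len(genes) < n_select:
            genes.add(rnd.randrange(n))
        return frozenset(genes)

    population = [repair(rnd.sample(range(n), n_select))
                  for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [max(population, key=fit)]                    # elitism
        while len(nxt) < pop_size:
            a = max(rnd.sample(population, 2), key=fit)     # tournament
            b = max(rnd.sample(population, 2), key=fit)
            child = {j for j in a | b if rnd.random() < 0.7}  # crossover
            if rnd.random() < 0.3:
                child.add(rnd.randrange(n))                 # mutation
            nxt.append(repair(child))
        population = nxt
    return sorted(max(population, key=fit))
```

Replacing the filter score with the trained ANN's prediction error recovers the genetic neural network (GNN) scheme the abstract describes.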
This paper explores the migration of adaptive filtering to swarm intelligence/evolutionary techniques in the field of electroencephalogram (EEG)/event-related potential (ERP) noise cancellation and extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled-search-space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-squares algorithms are also implemented for comparison. ERP signals such as a simulated visual evoked potential, a real visual evoked potential, and a real sensorimotor evoked potential are used, owing to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 s and 1.73E-01, respectively. Although the traditional algorithms have negligible time consumption (an average of 1.41E-02 s), they are unable to offer good shape preservation of the ERP, with an average shape measure of 2.60E+00.
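As a reference point for the traditional baselines above, a minimal least-mean-square adaptive noise canceler looks like this. The filter order, step size and the synthetic signals in the usage note are illustrative assumptions.

```python
def lms_anc(primary, reference, order=4, mu=0.01):
    """LMS adaptive noise canceler: a FIR filter learns to map the
    reference noise onto the noise component of the primary channel;
    the residual e[n] is the cleaned signal estimate."""
    w = [0.0] * order
    cleaned = []
    for n in range(len(primary)):
        # tap-delay line over the reference input
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # estimated noise
        e = primary[n] - y                         # cleaned sample
        # stochastic-gradient weight update
        w = [wk + 2.0 * mu * e * xk for wk, xk in zip(w, x)]
        cleaned.append(e)
    return cleaned
```

Typical usage: with `primary` = ERP-plus-noise and `reference` = a correlated noise channel, the tail of `cleaned` converges toward the underlying ERP; the evolutionary variants in the paper instead search the weight space globally within the proposed controlled search space.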
Deploying large numbers of mobile robots that can interact with each other produces swarm-intelligent behavior. However, mobile robots normally run on a finite energy resource, supplied by a finite battery, and this limitation has required human intervention for recharging the batteries. Sharing information among the mobile robots is one potential way to overcome this limitation of previous recharging systems. A new approach is proposed based on an integrated intelligent system inspired by the foraging of honeybees, applied to a multi-mobile-robot scenario. This integrated approach caters for both working and foraging stages, for known and unknown power station locations. A swarm of mobile robots inspired by honeybees is simulated to explore and identify power stations for battery recharging. The mobile robots share the location information of the power stations with each other. The results showed that the mobile robots consume less energy and less time when they cooperate with each other during the foraging process. Optimizing the foraging behavior allows the mobile robots to spend more time doing real work.
This article describes the dataset for the elucidation of the possible mechanisms of the antidiarrhoeal actions of the methanol leaf extract of Combretum hypopilinum (Diels) Combretaceae in mice. The plant has been used in traditional medicine to treat diarrhoea in Nigeria and other African countries. We introduce the data for the antidiarrhoeal activity of the methanol leaf extract of Combretum hypopilinum at 1,000 mg/kg, investigated using the charcoal meal test in mice with loperamide (5 mg/kg) as the standard antidiarrhoeal agent. To elucidate the possible mechanisms of its antidiarrhoeal action, naloxone (2 mg/kg), prazosin (1 mg/kg), yohimbine (2 mg/kg), propranolol (1 mg/kg), pilocarpine (1 mg/kg) and isosorbide dinitrate (150 mg/kg) were separately administered to different groups of mice 30 minutes before administration of the extract. Each mouse was dissected using a dissecting set; the small intestine was immediately removed from pylorus to caecum and placed lengthwise on moist filter paper, and the distance travelled by the charcoal relative to the length of the intestine was measured in centimetres using a calibrated ruler. In addition, the peristaltic index and the inhibition of charcoal movement for each animal were calculated and recorded. The methods for the data collection are similar to those used to investigate the possible pathways involved in the antidiarrhoeal action of Combretum hypopilinum in mice in the research article by Ahmad et al. (2020), "Mechanisms of Antidiarrhoeal Activity of Methanol Leaf Extract of Combretum hypopilinum Diels (Combretaceae): Involvement of Opioidergic and (α1 and β)-Adrenergic Pathways" (https://doi.org/10.1016/j.jep.2020.113750). Therefore, this dataset could form a basis for in-depth research to further elucidate the pharmacological properties of Combretum hypopilinum and its bioactive compounds, towards developing standardized herbal products and novel compounds for the management of diarrhoea.
It could also be instrumental for evaluating the plant's pharmacological potentials using other computational-based and artificial intelligence approaches, including predictive modelling and simulation.
We present an efficient method for the fusion of captured medical images from different modalities that enhances the original images and combines the complementary information of the various modalities. The contourlet transform has mainly been employed as a fusion technique for images obtained from the same or different modalities. The limited directional information of the dual-tree complex wavelet transform (DT-CWT) is rectified in the dual-tree complex contourlet transform (DT-CCT) by incorporating directional filter banks (DFB) into the DT-CWT. The DT-CCT produces images with improved contours and textures, while the property of shift invariance is retained. To improve the fused image quality, we propose a new fusion rule based on principal component analysis (PCA) that depends on the frequency components of the DT-CCT coefficients (contourlet domain). For the low-frequency components, the PCA method is adopted, and for the high-frequency components, the salient features are picked out based on local energy. The final fused image is obtained by directly applying the inverse dual-tree complex contourlet transform (IDT-CCT) to the fused low- and high-frequency components. The experimental results showed that the proposed method produces a fused image with extensive features across modalities.
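The two fusion rules described above can be sketched for 1-D coefficient sequences. Real contourlet subbands are 2-D and local energy is normally computed over a small window; treating each coefficient on its own, and working in 1-D, are simplifications for illustration.

```python
def pca_weights(a, b):
    """Principal-component weights for fusing two low-frequency coefficient
    sequences: the leading eigenvector of their 2x2 covariance matrix,
    normalised to sum to 1 (assumes positively correlated subbands)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    caa = sum((x - ma) ** 2 for x in a)
    cbb = sum((y - mb) ** 2 for y in b)
    cab = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    # largest eigenvalue of [[caa, cab], [cab, cbb]] (closed form for 2x2)
    lam = 0.5 * (caa + cbb) + 0.5 * ((caa - cbb) ** 2 + 4 * cab * cab) ** 0.5
    v = ((cab, lam - caa) if abs(cab) > 1e-12
         else ((1.0, 0.0) if caa >= cbb else (0.0, 1.0)))
    s = v[0] + v[1]
    return v[0] / s, v[1] / s

def fuse(low_a, low_b, high_a, high_b):
    """PCA-weighted average for low frequencies; max-energy selection
    for high frequencies."""
    w1, w2 = pca_weights(low_a, low_b)
    low = [w1 * x + w2 * y for x, y in zip(low_a, low_b)]
    high = [x if x * x >= y * y else y for x, y in zip(high_a, high_b)]
    return low, high
```

In the full method these fused subbands would be passed to the inverse transform (IDT-CCT) to reconstruct the fused image.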
This paper presents a novel approach for mining features from documents that cannot be mined via optical character recognition (OCR). By identifying the intimate relationship between the text and graphical components, the proposed technique pulls out the Start, End, and Exact values for each bar in a bar chart. Furthermore, word 2-gram and Euclidean distance methods are used to accurately detect and determine plagiarism in bar charts.
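The comparison step can be sketched as follows: chart text is compared via Euclidean distance between word 2-gram count vectors, and extracted bar values via Euclidean distance between their (Start, End, Exact) triples. The vector representation and the per-bar triple pairing are assumptions made for illustration.

```python
from collections import Counter

def word_2grams(text):
    """Adjacent word pairs of a chart's extracted text, lower-cased."""
    words = text.lower().split()
    return [' '.join(words[i:i + 2]) for i in range(len(words) - 1)]

def gram_distance(text_a, text_b):
    """Euclidean distance between word 2-gram count vectors;
    0 means the extracted chart texts are identical."""
    ca, cb = Counter(word_2grams(text_a)), Counter(word_2grams(text_b))
    return sum((ca[k] - cb[k]) ** 2 for k in set(ca) | set(cb)) ** 0.5

def bar_distance(bars_a, bars_b):
    """Euclidean distance between the per-bar (Start, End, Exact)
    value triples pulled from two charts."""
    return sum((p - q) ** 2 for ta, tb in zip(bars_a, bars_b)
               for p, q in zip(ta, tb)) ** 0.5
```

Small distances on both measures flag a chart pair as a plagiarism candidate.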
Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL to routing and investigates the effects of various features of RL (i.e., the reward function, exploitation and exploration, and the learning rate) through simulation. New approaches and recommendations are proposed to enhance these features in order to improve the network performance brought about by applying RL to routing. Simulation results show that the RL parameters of the reward function, exploitation and exploration, and the learning rate must be well regulated, and that the new approaches proposed in this paper improve SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs.
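The RL features discussed above (reward function, exploitation vs. exploration, learning rate) map directly onto a tabular Q-learning route-selection loop. The negative-link-cost reward and epsilon-greedy rule below are generic illustrative stand-ins for the paper's CR-aware formulation.

```python
import random

def q_route(links, source, dest, episodes=300, alpha=0.5, gamma=0.9,
            epsilon=0.1, seed=0):
    """Tabular Q-learning for least-cost routing: state = current node,
    action = next hop, reward = negative link cost. links maps each node
    to a dict of {neighbour: link cost}; dest must be reachable."""
    rnd = random.Random(seed)
    Q = {u: {v: 0.0 for v in nbrs} for u, nbrs in links.items()}
    for _ in range(episodes):
        node = source
        for _ in range(50):                     # bound episode length
            if node == dest or not links.get(node):
                break
            nbrs = Q[node]
            if rnd.random() < epsilon:          # exploration
                nxt = rnd.choice(sorted(nbrs))
            else:                               # exploitation (greedy)
                nxt = max(sorted(nbrs), key=nbrs.get)
            reward = -links[node][nxt]
            future = 0.0 if nxt == dest else max(Q[nxt].values(), default=0.0)
            # learning-rate-weighted temporal-difference update
            Q[node][nxt] += alpha * (reward + gamma * future - Q[node][nxt])
            node = nxt
    # read off the greedy route after learning
    route, node = [source], source
    while node != dest and len(route) < 20:
        node = max(sorted(Q[node]), key=Q[node].get)
        route.append(node)
    return route
```

Tuning `epsilon` and `alpha` here corresponds to the exploration/exploitation and learning-rate regulation the simulations investigate.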
To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods to screen for cervical cancer (i.e., the Pap smear and liquid-based cytology (LBC)) are time-consuming, dependent on the skill of the cytopathologist, and thus rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, to produce more accurate results. The developed system consists of two stages. In the first stage, an automatic feature extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in a parallel combination to produce a model with a multi-input multi-output structure. The system is capable of classifying a cervical cell image into three groups, namely normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove that the AFE algorithm can be as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance, with 94.2% accuracy.
To investigate whether the craniofacial sagittal jaw relationship in patients with non-syndromic cleft differs from that of non-cleft (NC) individuals by artificial intelligence (A.I.)-driven lateral cephalometric (Late. Ceph.) analysis. The study group comprised 123 subjects with different types of clefts, including BCLP (bilateral cleft lip and palate, n = 29), UCLP (unilateral cleft lip and palate, n = 41), UCLA (unilateral cleft lip and alveolus, n = 9), UCL (unilateral cleft lip, n = 13), and NC (n = 31). The mean age was 14.77 years. The SNA, SNB and ANB angles and the Wits appraisal were measured on lateral cephalograms using the new, innovative A.I.-driven Webceph software. Two-way ANOVA and multiple-comparison statistical tests were applied to assess differences between genders and among the different types of clefts vs. NC individuals. A significant decrease (p < 0.005) in SNA, ANB and the Wits appraisal was observed in the different types of clefts vs. NC individuals. SNB showed no significant differences (p > 0.005) in relation to cleft type. Nor was any significant difference found in terms of gender in relation to any type of cleft or the NC group. The present study advocates a decrease in sagittal development (SNA, ANB and Wits appraisal) in the different types of cleft compared to NC individuals.
Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification and prediction form a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most popular machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.