This paper presents novel research on the development of a generic intelligent oil fraction sensor based on Electrical Capacitance Tomography (ECT) data. An Artificial Neural Network (ANN) is employed as the intelligent system to sense and estimate oil fractions from the cross-sections of two-component oil and gas flows in a pipeline. Previous work focused only on estimating the oil fraction in the pipeline for fixed ECT sensor parameters: with a fixed sensor design, an oil fraction neural sensor can be trained on ECT data from that particular sensor, but the resulting neural sensor is not generic. This work focuses on the development of a generic neural oil fraction sensor by training a Multi-Layer Perceptron (MLP) ANN with data from ECT sensors of various parameters. On average, the proposed oil fraction neural sensor achieves a mean absolute error of 3.05% across various ECT sensor sizes.
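A minimal sketch of the idea, not the authors' implementation: an MLP regressor maps normalised ECT capacitance measurements, concatenated with sensor-geometry parameters, to an oil fraction estimate. All data below are synthetic placeholders, and the two geometry features are hypothetical stand-ins for the varying sensor parameters.

```python
# Sketch: generic oil-fraction estimation with an MLP (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

n_samples, n_capacitance = 2000, 66          # e.g. 12-electrode sensor -> 66 electrode pairs
X_cap = rng.uniform(0.0, 1.0, size=(n_samples, n_capacitance))   # normalised capacitances
X_geom = rng.uniform(0.5, 1.5, size=(n_samples, 2))              # hypothetical sensor-size parameters
X = np.hstack([X_cap, X_geom])
y = X_cap.mean(axis=1)                       # placeholder "oil fraction" target in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

print("MAE (%):", 100 * mean_absolute_error(y_te, mlp.predict(X_te)))
```

Including the sensor parameters as extra inputs is what would let a single trained network generalise across sensor designs, which is the generic-sensor idea described above.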
The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a modified ABC algorithm, named JA-ABC5, to enhance convergence speed and improve the ability to reach the global optimum by balancing the exploration and exploitation processes. New stages are introduced at the earlier stages of the algorithm to increase exploitation. In addition, modified mutation equations are introduced in the employed- and onlooker-bee phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested on the reactive power optimization problem. The results clearly show that the proposed algorithm outperforms the compared algorithms in terms of convergence speed and global optimum achievement.
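For reference, a minimal sketch of the standard ABC loop (employed-, onlooker- and scout-bee phases) on the sphere benchmark; the additional stages and modified mutation equations of JA-ABC5 are not reproduced here, and all parameter values are illustrative assumptions.

```python
# Sketch: standard ABC on the sphere function (not JA-ABC5).
import numpy as np

rng = np.random.default_rng(1)
dim, n_food, limit, max_iter = 10, 20, 50, 500
lb, ub = -5.0, 5.0

def cost(x):                          # sphere benchmark
    return np.sum(x ** 2)

foods = rng.uniform(lb, ub, size=(n_food, dim))
costs = np.apply_along_axis(cost, 1, foods)
trials = np.zeros(n_food, dtype=int)

def neighbour(i):
    # Mutate one dimension towards/away from a randomly chosen other source.
    k = rng.choice([j for j in range(n_food) if j != i])
    j = rng.integers(dim)
    v = foods[i].copy()
    v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    return np.clip(v, lb, ub)

def greedy_update(i, v):
    c = cost(v)
    if c < costs[i]:
        foods[i], costs[i], trials[i] = v, c, 0
    else:
        trials[i] += 1

for _ in range(max_iter):
    for i in range(n_food):                               # employed-bee phase
        greedy_update(i, neighbour(i))
    fitness = 1.0 / (1.0 + costs)
    probs = fitness / fitness.sum()
    for i in rng.choice(n_food, size=n_food, p=probs):    # onlooker-bee phase
        greedy_update(i, neighbour(i))
    worst = np.argmax(trials)                             # scout-bee phase
    if trials[worst] > limit:
        foods[worst] = rng.uniform(lb, ub, size=dim)
        costs[worst] = cost(foods[worst])
        trials[worst] = 0

print("best cost:", costs.min())
```

The employed/onlooker phases drive exploitation around good food sources, while the scout phase re-randomises stagnant sources for exploration; JA-ABC5 modifies this balance as described above.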
Enhancement of captured hand vein images is essential for a number of purposes, such as accurate biometric identification and ease of medical intravenous access. This paper presents an improved hand vein image enhancement technique based on weighted-average fusion of contrast limited adaptive histogram equalization (CLAHE) and fuzzy adaptive gamma (FAG). The proposed technique is applied in three stages. Firstly, CLAHE is applied locally to the grey-level intensities of the image pixels for contrast enhancement. Secondly, the grey-level intensities are transformed globally into membership planes and modified with the FAG operator for the same purpose. Finally, the resulting CLAHE and FAG images are fused using an improved weighted averaging method to obtain clearer vein patterns. A matched filter with first-order derivative of Gaussian (MF-FODG) is then employed to segment the vein patterns. The proposed technique was tested on self-acquired dorsal hand vein images as well as images from the SUAS databases. Its performance was compared with various other image enhancement techniques in terms of mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The impact of the proposed enhancement on the segmentation process was also evaluated using sensitivity, accuracy, and the Dice coefficient. The experimental results show that the proposed technique can significantly enhance hand vein patterns and improve the detection of dorsal hand veins.
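A minimal sketch of the fusion idea, assuming a plain gamma transform as a simplified stand-in for the fuzzy adaptive gamma operator; the input filename and the fusion weight are hypothetical, and this is not the authors' pipeline.

```python
# Sketch: CLAHE + gamma enhancement fused by weighted averaging.
import cv2
import numpy as np

img = cv2.imread("hand_vein.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
assert img is not None, "input image not found"

# Stage 1: local contrast enhancement with CLAHE
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img_clahe = clahe.apply(img)

# Stage 2: global enhancement; a fixed gamma transform approximates FAG here
gamma = 0.7
img_gamma = np.uint8(255 * (img / 255.0) ** gamma)

# Stage 3: weighted-average fusion of the two enhanced images
w = 0.6                                                    # assumed fusion weight
fused = cv2.addWeighted(img_clahe, w, img_gamma, 1 - w, 0)

cv2.imwrite("hand_vein_enhanced.png", fused)
```

The fused image would then be passed to the MF-FODG matched filter for vein segmentation, which is outside the scope of this sketch.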
This paper presents a comparison between single-modality and fused data for classifying Tualang honey as pure or adulterated using Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) statistical classification approaches. Ten different brands of certified pure Tualang honey were obtained from throughout peninsular Malaysia and Sumatera, Indonesia. Two types of sugar solution (beet and cane sugar) at various concentrations were used to create adulterated honey samples at 20%, 40%, 60% and 80% adulteration concentrations. Honey data extracted from an electronic nose (e-nose) and Fourier Transform Infrared Spectroscopy (FTIR) were gathered, analyzed and compared on the basis of the fusion methods. Visual inspection of the classification plots revealed that the PCA approach was able to distinguish pure from adulterated honey samples better than the LDA technique. Overall, the validated classification results based on FTIR data (88.0%) gave higher classification accuracy than those based on e-nose data (76.5%) using the LDA technique. Honey classification based on normalized low-level and intermediate-level fusion of FTIR and e-nose data achieved classification accuracies of 92.2% and 88.7%, respectively, using the stepwise LDA method. The results suggest that pure and adulterated honey samples are better classified using fused FTIR and e-nose data than single-modality data.
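A minimal sketch of low-level data fusion, assuming feature-wise concatenation of normalised modalities followed by LDA with cross-validation; the feature dimensions and labels below are random placeholders, not the honey measurements.

```python
# Sketch: low-level fusion of FTIR and e-nose features classified with LDA.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 200
X_ftir = rng.normal(size=(n, 100))      # placeholder FTIR spectral features
X_nose = rng.normal(size=(n, 32))       # placeholder e-nose sensor responses
y = rng.integers(0, 2, size=n)          # synthetic labels: 0 = pure, 1 = adulterated

# Low-level fusion: normalise each modality separately, then concatenate
X_fused = np.hstack([StandardScaler().fit_transform(X_ftir),
                     StandardScaler().fit_transform(X_nose)])

lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, X_fused, y, cv=5).mean())
```

Intermediate-level fusion would instead extract features (e.g. principal component scores) from each modality before concatenation, rather than joining the raw normalised measurements.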
Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFBs. Images of oil palm FFBs of the DxP Yangambi type were collected and analyzed using digital image processing techniques. Color features were then extracted from these images and used as inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFBs was investigated using two methods: training the ANN with the full feature set and training it with a reduced feature set obtained through the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with training on the full feature set, the ANN trained on the reduced feature set improved the classification accuracy by 1.66% and is more effective for developing an automated ripeness classifier for oil palm FFBs. The developed ripeness classifier can act as a sensor for determining the correct ripeness category of an oil palm FFB.
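A minimal sketch of the comparison described above, assuming synthetic stand-ins for the extracted colour features and hypothetical ripeness categories: one MLP is trained on the full feature set and another on PCA-reduced features, with cross-validated accuracy reported for each.

```python
# Sketch: full-feature vs PCA-reduced-feature ANN ripeness classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 12))          # placeholder colour features (e.g. RGB/HSV statistics)
y = rng.integers(0, 3, size=300)        # hypothetical ripeness categories

full = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
reduced = make_pipeline(
    StandardScaler(), PCA(n_components=5),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))

print("full features :", cross_val_score(full, X, y, cv=5).mean())
print("PCA-reduced   :", cross_val_score(reduced, X, y, cv=5).mean())
```

With real colour features, the PCA step removes correlated channel statistics before training, which is the data-reduction comparison the abstract reports.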