Displaying publications 1 - 20 of 49 in total

  1. Lim WL, Wibowo A, Desa MI, Haron H
    Comput Intell Neurosci, 2016;2016:5803893.
    PMID: 26819585 DOI: 10.1155/2016/5803893
    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the migration strategy of species to derive algorithms for solving optimization problems. BBO has been shown to provide performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy, but this process often ruins the quality of solutions in QAP. In this paper, we propose a hybrid technique that overcomes this weakness of the classical BBO algorithm for QAP by replacing the mutation operator with a tabu search procedure. Our experiments on benchmark instances from QAPLIB show that the proposed hybrid method finds good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57.
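    As an illustration of the two ingredients named above (a sketch under assumptions, not the authors' code), the following evaluates the QAP objective and applies one tabu-restricted pairwise-swap step; flow and dist are the QAP flow and distance matrices, and the tabu tenure of 7 is an arbitrary choice.

```python
import itertools

def qap_cost(perm, flow, dist):
    """QAP objective: facility i is assigned to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabu_swap_step(perm, flow, dist, tabu, iteration, tenure=7):
    """Apply the best non-tabu pairwise swap and mark its reversal tabu."""
    best_move, best_cost = None, float("inf")
    for i, j in itertools.combinations(range(len(perm)), 2):
        if tabu.get((i, j), -1) >= iteration:   # swap (i, j) is still tabu
            continue
        cand = perm[:]
        cand[i], cand[j] = cand[j], cand[i]
        cost = qap_cost(cand, flow, dist)
        if cost < best_cost:
            best_move, best_cost = (i, j), cost
    if best_move is not None:
        i, j = best_move
        perm[i], perm[j] = perm[j], perm[i]
        tabu[(i, j)] = iteration + tenure       # forbid undoing this swap
    return perm, best_cost
```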
  2. Odili JB, Noraziah A, Zarina M
    Comput Intell Neurosci, 2021;2021:6625438.
    PMID: 33986793 DOI: 10.1155/2021/6625438
    This paper presents a comparative performance analysis of several metaheuristics, namely the African Buffalo Optimization algorithm (ABO), Improved Extremal Optimization (IEO), Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO), Max-Min Ant System (MMAS), and Cooperative Genetic Ant System (CGAS), and the heuristic Randomized Insertion Algorithm (RAI), in solving the asymmetric Travelling Salesman Problem (ATSP). Quite unlike the symmetric Travelling Salesman Problem, there is a paucity of research on the asymmetric counterpart, which is troubling because most real-life applications are asymmetric in nature. These six algorithms were chosen because they have posted some of the best results in the literature and because they employ different search schemes for the ATSP: the African Buffalo Optimization employs the modified Karp-Steele mechanism, MIMM-ACO employs path construction with patching, the Cooperative Genetic Ant System uses natural selection and ordering, the Randomized Insertion Algorithm uses random insertion, and the Improved Extremal Optimization uses a grid search strategy. After a number of experiments on 15 of the 19 popular but difficult ATSP instances in TSPLIB, the results show that the African Buffalo Optimization algorithm slightly outperformed the other algorithms in obtaining optimal results, and at a much faster speed.
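    To make the asymmetry concrete, here is a minimal sketch of the random insertion idea attributed to RAI, on a cost matrix where cost[i][j] need not equal cost[j][i]; all names are illustrative, and this is not code from any of the compared papers.

```python
import random

def tour_length(tour, cost):
    """Directed tour cost; cost[i][j] need not equal cost[j][i]."""
    return sum(cost[tour[k]][tour[(k + 1) % len(tour)]]
               for k in range(len(tour)))

def random_insertion(cost, seed=0):
    """Insert cities in random order at the cheapest directed position."""
    rng = random.Random(seed)
    order = list(range(len(cost)))
    rng.shuffle(order)
    tour = order[:2]
    for city in order[2:]:
        def delta(k):
            a, b = tour[k], tour[(k + 1) % len(tour)]
            return cost[a][city] + cost[city][b] - cost[a][b]
        k = min(range(len(tour)), key=delta)   # smallest cost increase
        tour.insert(k + 1, city)
    return tour, tour_length(tour, cost)
```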
  3. Yin LL, Qin YW, Hou Y, Ren ZJ
    Comput Intell Neurosci, 2022;2022:7825597.
    PMID: 35463225 DOI: 10.1155/2022/7825597
    At present, financing difficulties are widespread in China's trade circulation industry. Supply chain finance can provide financing for small and medium-sized enterprises in this industry, but it produces financing risks such as credit risk. It is therefore necessary to analyze the causes of these risks in the supply chain finance of the trade circulation industry and to measure them by establishing a credit risk assessment system. In this article, a supply chain financial risk early warning index system is established, including 4 first-level indicators and 29 third-level indicators. On the basis of this index system, combined with a convolutional neural network, a supply chain financial risk early warning model for the trade circulation industry is constructed, and the evaluation indices are measured by principal component analysis. Finally, relevant data from trade circulation enterprises are selected for an empirical analysis of the model. The conclusions show that the supply chain financial risk early warning model and the risk control measures established in this article offer a useful reference for the trade circulation industry in carrying out supply chain finance, and provide guidance for trade circulation enterprises in dealing effectively with supply chain financial risks.
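    A hedged sketch of the principal component analysis step used to measure the evaluation indices; the indicator matrix X (enterprises by indicators) and the choice of four components are assumptions for illustration.

```python
import numpy as np

def pca_scores(X, n_components=4):
    """Project a standardized indicator matrix (samples x indicators)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]   # largest variance first
    explained = eigvals[order] / eigvals.sum()
    return Z @ eigvecs[:, order], explained
```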
  4. Babiker A, Faye I
    Comput Intell Neurosci, 2021;2021:6617462.
    PMID: 33564299 DOI: 10.1155/2021/6617462
    Situational interest (SI) is one of the promising states that can improve students' learning and increase acquired knowledge. Electroencephalogram- (EEG-) based detection of SI could assist in understanding the neuroscientific causes of SI, which could in turn explain the role of SI in students' learning. In this study, 26 participants were selected based on questionnaires to take part in a mathematics classroom experiment. SI and personal interest (PI) questionnaires, along with knowledge tests, were administered to measure students' interest and knowledge levels. A hybrid method combining empirical mode decomposition (EMD) and the wavelet transform was developed and employed for feature extraction. The proposed method showed a significant difference under the multivariate analysis of variance (MANOVA) test and consistently outperformed other methods in classification performance using weighted k-nearest neighbours (wkNN). The high classification accuracy of 85.7%, with a sensitivity of 81.8% and a specificity of 90%, revealed that the brain oscillation patterns of high-SI students differ somewhat from those of students with low or no SI. In addition, the results suggest that the delta rhythm could have a significant effect on cognitive processing.
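    A minimal weighted k-nearest-neighbours (wkNN) sketch with inverse-distance weights, assuming the EMD-plus-wavelet stage has already produced the feature rows; the weighting scheme is a common choice, not necessarily the authors' exact one.

```python
import numpy as np

def wknn_predict(X_train, y_train, x, k=5, eps=1e-12):
    """Weighted kNN: neighbours vote with inverse-distance weights."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    weights = 1.0 / (d[idx] + eps)          # closer neighbours count more
    votes = {}
    for label, w in zip(y_train[idx], weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```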
  5. Wang S, Liu Q, Liu Y, Jia H, Abualigah L, Zheng R, et al.
    Comput Intell Neurosci, 2021;2021:6379469.
    PMID: 34531910 DOI: 10.1155/2021/6379469
    Based on the Salp Swarm Algorithm (SSA) and the Slime Mould Algorithm (SMA), a novel hybrid optimization algorithm, named the Hybrid Slime Mould Salp Swarm Algorithm (HSMSSA), is proposed to solve constrained engineering problems. SSA can obtain good results on some optimization problems but is prone to local minima and low population diversity. SMA offers good global exploration and robustness, but its convergence rate is too slow to find satisfactory solutions efficiently. Thus, in this paper, considering the characteristics and advantages of both algorithms, SMA is integrated into the leader position updating equations of SSA so that the two can share helpful information and the proposed algorithm can exploit both algorithms' advantages to enhance global optimization performance. Furthermore, Lévy flight is utilized to enhance exploration. Notably, a novel strategy called mutation opposition-based learning is proposed to help the hybrid algorithm avoid premature convergence, balance the exploration and exploitation phases, and find a satisfactory global optimum. To evaluate its efficiency, HSMSSA is applied to 23 benchmark functions of unimodal and multimodal types. Additionally, five classical constrained engineering problems are used to evaluate the technique's practical abilities. The simulation results show that HSMSSA is more competitive and more effective on real-world constrained engineering problems than SMA, SSA, and other comparative algorithms. Finally, we outline potential areas for future study, such as feature selection and multilevel threshold image segmentation.
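    Two of the named ingredients admit compact sketches: a Lévy-flight perturbation via Mantegna's algorithm and the opposite point used in opposition-based learning. The full HSMSSA update equations are not reproduced here.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=np.random.default_rng(0)):
    """Mantegna's algorithm for a heavy-tailed Levy-flight step."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def opposition(x, lower, upper):
    """Opposite candidate used in (mutation) opposition-based learning."""
    return lower + upper - x
```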
  6. Khan ZA, Naz S, Khan R, Teo J, Ghani A, Almaiah MA
    Comput Intell Neurosci, 2022;2022:5112375.
    PMID: 35449734 DOI: 10.1155/2022/5112375
    Data redundancy is one of the common issues associated with resource-constrained networks such as Wireless Sensor Networks (WSNs) and the Internet of Things (IoT). To resolve this issue, numerous data aggregation or fusion schemes have been presented in the literature. Generally, aggregation is used to decrease the size of the collected data and thus improve the performance of the underlying IoT network in terms of congestion control, data accuracy, and lifetime. However, these approaches do not consider the neighborhood information of the devices (cluster heads in this case) in the data refinement phase. In this paper, a smart and intelligent neighborhood-enabled data aggregation scheme is presented in which every device (cluster head) is required to refine the collected data before sending it to the concerned server module. For this purpose, the proposed data aggregation scheme is divided into two phases: (i) identification of neighboring nodes, based on MAC address and location, and (ii) data aggregation using the k-means clustering algorithm and a Support Vector Machine (SVM). Furthermore, every cluster head compares the data sets of neighboring nodes only; that is, data from non-neighbors are not compared at all. These algorithms were implemented in Network Simulator 2 (NS-2) and evaluated in terms of various performance metrics, such as data redundancy ratio, lifetime, and energy efficiency. Simulation results verify that the proposed scheme performs better than existing approaches.
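    A simplified sketch of the two phases, with hypothetical node fields; the paper's k-means/SVM aggregation stage is reduced here to a one-dimensional grouping of near-duplicate readings for illustration only.

```python
import math

def are_neighbors(a, b, radius=30.0):
    """Phase (i): nodes are neighbours when distinct (by MAC) and close."""
    return (a["mac"] != b["mac"] and
            math.hypot(a["x"] - b["x"], a["y"] - b["y"]) <= radius)

def fuse_neighbor_readings(readings, tol=0.5):
    """Phase (ii), simplified to 1-D: merge near-duplicate readings from
    neighbouring nodes into one representative value per group."""
    groups = []
    for r in sorted(readings):
        if groups and r - groups[-1][-1] <= tol:
            groups[-1].append(r)
        else:
            groups.append([r])
    return [sum(g) / len(g) for g in groups]
```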
  7. Balogun AO, Basri S, Mahamad S, Capretz LF, Imam AA, Almomani MA, et al.
    Comput Intell Neurosci, 2021;2021:5069016.
    PMID: 34868291 DOI: 10.1155/2021/5069016
    The high dimensionality of software metric features has long been noted as a data quality problem that affects the performance of software defect prediction (SDP) models. This drawback makes it necessary to apply feature selection (FS) algorithms in SDP processes. FS approaches can be categorized into three types, namely, filter FS (FFS), wrapper FS (WFS), and hybrid FS (HFS). HFS has been established as superior because it combines the strengths of both FFS and WFS methods. However, selecting the most appropriate FFS method for HFS (the filter rank selection problem) is a challenge because the performance of FFS methods depends on the choice of datasets and classifiers. In addition, the HFS method inherits the local optima stagnation and high computational cost of WFS, which stem from large search spaces. As a solution, this study proposes a novel rank aggregation-based hybrid multifilter wrapper feature selection (RAHMFWFS) method for the selection of relevant and irredundant features from software defect datasets. The proposed RAHMFWFS is divided into two stages. The first stage involves a rank aggregation-based multifilter feature selection (RMFFS) method that addresses the filter rank selection problem by aggregating individual rank lists from multiple filter methods, using a novel rank aggregation method to generate a single, robust, and non-disjoint rank list. In the second stage, the aggregated ranked features are further preprocessed by an enhanced wrapper feature selection (EWFS) method based on a dynamic re-ranking strategy that guides the feature subset selection process of the HFS method; this, in turn, reduces the number of evaluation cycles while amplifying or maintaining prediction performance. The feasibility of the proposed RAHMFWFS was demonstrated on benchmark software defect datasets with Naïve Bayes and Decision Tree classifiers, based on accuracy, area under the curve (AUC), and F-measure values. The experimental results showed the effectiveness of RAHMFWFS in addressing the filter rank selection and local optima stagnation problems in HFS, as well as its ability to select optimal features from SDP datasets while maintaining or enhancing the performance of SDP models. In conclusion, the proposed RAHMFWFS achieved good performance, improving the prediction performance of SDP models across the selected datasets compared to existing state-of-the-art HFS methods.
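    A minimal sketch of the rank aggregation idea in the first stage: each filter contributes a ranked feature list, and features are re-ranked by mean position. The paper's actual aggregation function is richer; the metric names below are hypothetical.

```python
def aggregate_ranks(rank_lists):
    """Merge ranked feature lists from several filters by mean position."""
    positions = {}
    for ranks in rank_lists:                  # one ranked list per filter
        for pos, feat in enumerate(ranks):
            positions.setdefault(feat, []).append(pos)
    mean_pos = {f: sum(p) / len(p) for f, p in positions.items()}
    return sorted(mean_pos, key=mean_pos.get)  # lowest mean rank first

# e.g. aggregate_ranks([["loc", "wmc", "cbo"], ["wmc", "loc", "cbo"]])
# -> ["loc", "wmc", "cbo"] (hypothetical software-metric names)
```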
  8. Jaafar H, Ibrahim S, Ramli DA
    Comput Intell Neurosci, 2015;2015:360217.
    PMID: 26113861 DOI: 10.1155/2015/360217
    Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition in which smartphones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. This paper describes the hand tracking and region of interest (ROI) extraction methods. A sliding neighborhood operation with local histogram equalization followed by local adaptive thresholding (the LHEAT approach) was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, the improved fuzzy-based k-nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibits faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%.
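    A slow reference sketch of the LHEAT idea, assuming an 8-bit grayscale image: rank-based local histogram equalization in a sliding window, then local mean-based adaptive thresholding; the window size and offset are illustrative.

```python
import numpy as np

def local_equalize(img, win=15):
    """Rank-based local histogram equalization in a sliding window."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            block = padded[r:r + win, c:c + win]
            out[r, c] = 255 * int((block < img[r, c]).sum()) // block.size
    return out

def local_threshold(img, win=15, offset=5):
    """Foreground where the pixel exceeds its local mean minus an offset."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=np.uint8)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = img[r, c] > padded[r:r + win, c:c + win].mean() - offset
    return out
```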
  9. Zhang Q, Abdullah AR, Chong CW, Ali MH
    Comput Intell Neurosci, 2022;2022:8235308.
    PMID: 35126503 DOI: 10.1155/2022/8235308
    Gross domestic product (GDP) is an important indicator for determining a country's or region's economic status and development level, and it is closely linked to inflation, unemployment, and economic growth rates. These basic indicators can comprehensively and effectively reflect a country's or region's future economic development. The focus of this study is a radial basis function (RBF) artificial neural network whose centers and smoothing factors are drawn at random from a uniform distribution. This stochastic learning method is a useful addition to existing methods for determining the centers and smoothing factors of RBF neural networks, and it also helps the network train more efficiently. GDP forecasting is aided by the genetic algorithm RBF neural network, which allows the government to make timely and effective macrocontrol plans based on the forecast trend of GDP in the region. Because historical data may contain both linear and nonlinear relationships, this study uses the genetic algorithm RBF neural network model to capture the relationships contained in the GDP series, and it compares and analyzes the model's prediction accuracy and generalization ability to verify the model's applicability.
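    A hedged sketch of the stochastic RBF idea described, with centers and smoothing factors drawn from uniform distributions and output weights fit by least squares; the genetic algorithm that tunes the network is not shown.

```python
import numpy as np

def random_rbf_fit(X, y, n_centers=10, rng=np.random.default_rng(0)):
    """Random-centre RBF network; output weights fit by least squares."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    centers = rng.uniform(lo, hi, size=(n_centers, X.shape[1]))
    widths = rng.uniform(0.1, 1.0, size=n_centers)   # smoothing factors
    def design(Xs):
        d2 = ((Xs[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * widths ** 2))
    w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    return lambda Xs: design(Xs) @ w                 # the fitted predictor
```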
  10. Bamatraf S, Hussain M, Aboalsamh H, Qazi EU, Malik AS, Amin HU, et al.
    Comput Intell Neurosci, 2016;2016:8491046.
    PMID: 26819593 DOI: 10.1155/2016/8491046
    We studied the impact of 2D and 3D educational contents on learning and memory recall using electroencephalography (EEG) brain signals. For this purpose, we adopted a classification approach that predicts true and false memories for both short-term memory (STM) and long-term memory (LTM) and helps to decide whether the impact of 2D and 3D educational contents differs. In this approach, EEG brain signals are converted into topomaps, discriminative features are extracted from them, and finally a support vector machine (SVM) is employed to predict brain states. For data collection, half of the sixty-eight healthy participants watched the learning material in 2D format, whereas the rest watched the same material in 3D format. After the learning task, memory recall tasks were performed 30 minutes later (STM) and two months later (LTM), and EEG signals were recorded. For STM, a prediction accuracy of 97.5% was achieved for 3D and 96.6% for 2D; for LTM, it was 100% for both. The statistical analysis of the results suggests that, for learning and memory recall, 2D and 3D materials do not differ much in either STM or LTM.
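    For the prediction stage only, a generic sketch (assuming scikit-learn; not the authors' pipeline) in which features already extracted from EEG topomaps feed an SVM:

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def evaluate(features, labels):
    """5-fold cross-validated accuracy of an RBF-kernel SVM."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    return cross_val_score(clf, features, labels, cv=5).mean()
```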
  11. Arora S, Sawaran Singh NS, Singh D, Rakesh Shrivastava R, Mathur T, Tiwari K, et al.
    Comput Intell Neurosci, 2022;2022:9755422.
    PMID: 36531923 DOI: 10.1155/2022/9755422
    In this study, the air quality index (AQI) of Indian cities of different tiers is predicted using a vanilla recurrent neural network (RNN). The AQI measures the air quality of a region and is calculated from the concentrations of ground-level ozone, particle pollution, carbon monoxide, and sulphur dioxide in the air. The air quality of an area thus depends on current weather conditions, vehicle traffic, and anything else that increases air pollution, as well as on the climate and degree of industrialization of that area; the AQI is therefore history-dependent. To capture this dependency, the memory property of fractional derivatives is exploited: a fractional gradient descent algorithm involving Caputo's derivative is used within the backpropagation algorithm for training the RNN. Given the availability of large amounts of data and strong computational support, deep neural networks can give state-of-the-art results in time series prediction; in this study, however, a basic vanilla RNN was chosen to isolate the effectiveness of the fractional derivatives. Prediction results for the AQI and for the gases affecting it, across different cities, show that the proposed algorithm achieves higher accuracy. The results of the vanilla RNN with fractional derivatives were observed to be comparable to those of long short-term memory (LSTM).
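    A hedged sketch of a Caputo-based fractional gradient step, using one common first-order truncation in which the update is scaled by |w - w_ref|^(1-alpha)/Gamma(2-alpha); the paper's exact scheme inside RNN backpropagation may differ.

```python
from math import gamma

def caputo_fgd_step(w, grad, w_ref, alpha=0.9, lr=0.01, eps=1e-8):
    """One fractional gradient step of order alpha in (0, 1); w_ref is
    the reference (e.g. previous) value of the parameter w."""
    scale = abs(w - w_ref) ** (1.0 - alpha) / gamma(2.0 - alpha)
    return w - lr * grad * (scale + eps)  # eps avoids freezing at w == w_ref
```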
  12. Bukhari MM, Ghazal TM, Abbas S, Khan MA, Farooq U, Wahbah H, et al.
    Comput Intell Neurosci, 2022;2022:3606068.
    PMID: 35126487 DOI: 10.1155/2022/3606068
    Smart applications and intelligent systems are being developed that are self-reliant, adaptive, and knowledge-based in nature. Emergency and disaster management, aerospace, healthcare, IoT, and mobile applications, among others, are revolutionizing the world of computing. The large and growing number of devices has made the current centralized cloud design impractical. Despite the use of 5G technology, delay-sensitive applications and the cloud cannot work in parallel because parameters such as latency, bandwidth, and response time exceed their threshold values. Middleware proves to be a better solution to these issues while satisfying demanding task offloading requirements. This article recommends fog computing as that middleware: because it provides services at the edge of the network, delay-sensitive applications can be served effectively. On the other hand, fog nodes contain a limited set of resources and may not be able to process all tasks, especially those of computation-intensive applications. Additionally, fog is not a replacement for the cloud but a supplement to it; the two act as counterparts and offer their services according to task needs, with fog computing in closer proximity to the devices than the cloud. The problem arises when a decision must be made about what to offload (data, computation, or application), where to offload (fog or cloud), and how much to offload. Fog-cloud collaboration is stochastic in terms of task-related attributes such as task size, duration, arrival rate, and required resources. Dynamic task offloading therefore becomes crucial for utilizing the resources at fog and cloud to improve QoS. Since forming such a task offloading policy is complex, this article addresses the problem and proposes an intelligent task offloading model. Simulation results demonstrate that the proposed logistic regression model achieves 86% accuracy compared to other algorithms and supports a consistent, reliable predictive task offloading policy.
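    A minimal sketch of the offloading decision as binary logistic regression over task attributes (size, duration, arrival rate, and required resources are the features the abstract suggests; the encoding below is hypothetical), with 1 meaning offload to cloud and 0 meaning keep at fog.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Batch gradient descent on the log-loss; X: tasks x attributes."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)   # gradient of mean log-loss
        b -= lr * (p - y).mean()
    return w, b

def offload_to_cloud(task, w, b, threshold=0.5):
    """True -> send the task to the cloud; False -> keep it at the fog node."""
    return sigmoid(task @ w + b) >= threshold
```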
  13. Khan MF, Ghazal TM, Said RA, Fatima A, Abbas S, Khan MA, et al.
    Comput Intell Neurosci, 2021;2021:2487759.
    PMID: 34868288 DOI: 10.1155/2021/2487759
    The Internet of Medical Things (IoMT) enables digital devices to gather, infer, and broadcast health data via the cloud platform. The phenomenal growth of the IoMT is fueled by many factors, including the widespread and growing availability of wearables and the ever-decreasing cost of sensor-based technology. The cost of healthcare will rise as the global population of elderly people grows and overall life expectancy increases, demanding affordable healthcare services, solutions, and developments. Combined with machine learning (ML) algorithms, the IoMT may revolutionize the medical sciences in terms of the quality of healthcare for elderly people. The effectiveness of the smart healthcare (SHC) model for monitoring elderly people was observed by performing tests on IoMT datasets. For evaluation, the precision, recall, F-score, accuracy, and ROC values are computed. The authors also compare the results of the SHC model with conventional popular ML techniques, e.g., support vector machine (SVM), K-nearest neighbor (KNN), and decision tree (DT), to analyze the effectiveness of the result.
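    The evaluation metrics listed reduce to a binary confusion matrix; a small sketch:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F-score, accuracy from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f_score, accuracy
```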
  14. Shoaib MA, Chuah JH, Ali R, Hasikin K, Khalil A, Hum YC, et al.
    Comput Intell Neurosci, 2023;2023:4208231.
    PMID: 36756163 DOI: 10.1155/2023/4208231
    Cardiac diseases are among the leading causes of death around the globe, and the number of heart patients increased considerably during the pandemic. It is therefore crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, and its boundary and size play a significant role in the evaluation of cardiac function. Owing to its automation and promising results, deep-learning-based left ventricle segmentation has attracted considerable attention. This article presents a critical review of deep learning methods used for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. This study also details the network architectures, software, and hardware used for training, along with the publicly available cardiac image datasets and self-prepared datasets involved. The evaluation metrics and results reported by different researchers are also summarized. Finally, all this information is consolidated to help readers understand the motivation and methodology of the various deep learning models, as well as to explore potential solutions to future challenges in LV segmentation.
  15. Liu K, Wang H, Xiao J, Taha Z
    Comput Intell Neurosci, 2015;2015:158478.
    PMID: 25866500 DOI: 10.1155/2015/158478
    The purpose of this research is to analyse the relationship between nonlinear dynamic character and individuals' standing balance via the largest Lyapunov exponent, which is regarded as a metric for assessing standing balance. According to previous studies, the largest Lyapunov exponent computed from centre-of-pressure time series could not adequately quantify human balance ability. In this research, two improvements were made. Firstly, an external stimulus was applied to the feet in the form of continuous horizontal sinusoidal motion by a moving platform. Secondly, a multi-accelerometer subsystem was adopted. Twenty healthy volunteers participated in the experiment. A new metric, the coordinated largest Lyapunov exponent, was proposed; it reflects the relationship between body segments by integrating multidimensional largest Lyapunov exponent values. Applying this metric to actual standing performance under the sinusoidal stimulus revealed an obvious relationship between the new metric and actual balance ability in the majority of subjects. These results show that the sinusoidal stimulus makes human balance characteristics more pronounced, which is beneficial for assessing balance, and that balance is determined by the ability to coordinate all body segments.
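    A hedged sketch of estimating the largest Lyapunov exponent from a single time series using a Rosenstein-style nearest-neighbour divergence; the authors' coordinated metric integrates per-signal exponents like this across body segments, in a way not reproduced here.

```python
import numpy as np

def largest_lyapunov(x, dim=3, tau=1, horizon=20, min_sep=10):
    """Slope of mean log nearest-neighbour divergence versus time
    in a delay-embedded space (Rosenstein-style estimate)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    emb = np.array([x[i:i + dim * tau:tau] for i in range(n)])
    log_div, counts = np.zeros(horizon), np.zeros(horizon)
    for i in range(n - horizon):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[max(0, i - min_sep):i + min_sep + 1] = np.inf  # skip temporal kin
        d[n - horizon:] = np.inf           # neighbour needs room to track
        j = int(np.argmin(d))
        for k in range(horizon):
            sep = np.linalg.norm(emb[i + k] - emb[j + k])
            if sep > 0:
                log_div[k] += np.log(sep)
                counts[k] += 1
    mean_log = log_div / np.maximum(counts, 1)
    return np.polyfit(np.arange(horizon), mean_log, 1)[0]  # per time step
```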
  16. Khan NA, Ibrahim Khalaf O, Andrés Tavera Romero C, Sulaiman M, Bakar MA
    Comput Intell Neurosci, 2022;2022:2710576.
    PMID: 35096038 DOI: 10.1155/2022/2710576
    In this study, the intelligent computational strength of neural networks (NNs) based on the backpropagated Levenberg-Marquardt (BLM) algorithm is utilized to investigate numerical solutions of nonlinear multiorder fractional differential equations (FDEs). The reference data sets for the design of the BLM-NN algorithm for different examples of FDEs are generated using the exact solutions. To obtain the numerical solutions, multiple operations based on training, validation, and testing on the reference data sets are carried out by the design scheme for various orders of FDEs. The approximate solutions from the BLM-NN algorithm are compared with analytical solutions, and performance is assessed by mean square error (MSE), error histograms (EH), regression, and curve fitting. This further validates the accuracy, robustness, and efficiency of the proposed algorithm.
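    At the heart of BLM training is the Levenberg-Marquardt update; a minimal sketch, where J is the Jacobian of the residuals with respect to the weights, e the residual vector, and mu the damping factor:

```python
import numpy as np

def lm_step(weights, jacobian, residuals, mu):
    """One Levenberg-Marquardt update: solve (J^T J + mu I) dw = J^T e.
    In practice mu is decreased after a successful step, increased otherwise."""
    J, e = jacobian, residuals
    A = J.T @ J + mu * np.eye(J.shape[1])
    return weights - np.linalg.solve(A, J.T @ e)
```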
  17. Zhou X, Ruhaizin S, Zhu W, Shen C, He X
    Comput Intell Neurosci, 2022;2022:4667689.
    PMID: 35720909 DOI: 10.1155/2022/4667689
    The smart wheelchair is a service robot that can be used as a means of transportation for the elderly and the disabled. Patients were given an intelligent wheelchair controlled via electroencephalogram (EEG) signals, which was used for more than 8 hours and tested continuously for 1 month. By ridit analysis, the difference between the two groups was statistically significant (U = 3.72, P < 0.01). The visual analogue scale (VAS) and joint ground visuality (JGV) scores in the observation group were significantly better than those in the control group. The physiological function (PF), physical pain (PP), overall health (OH), vitality (VT), social function (SF), emotional function (EF), and mental health (MH) modules of the SF-36 scores improved significantly in both groups (P < 0.05), and the improvement of each module in the observation group was significantly greater than that in the control group (P < 0.05). The levels of serum IL-6, IL-10, and superoxide dismutase (SOD) improved significantly in both groups (P < 0.05), and the improvement in the observation group was significantly greater than that in the control group (P < 0.05). These results suggest that neural engineering based on EEG characteristics can be applied effectively in comfort-oriented industrial design.
  18. Lai CQ, Ibrahim H, Abdullah MZ, Abdullah JM, Suandi SA, Azman A
    Comput Intell Neurosci, 2019;2019:7895924.
    PMID: 31281339 DOI: 10.1155/2019/7895924
    Biometrics is an important field that enables identification of an individual for access to sensitive information and assets. In recent years, electroencephalography- (EEG-) based biometrics have been popularly explored by researchers because EEG can distinguish between individuals. Literature reviews have shown that the convolutional neural network (CNN) is a classification approach that can avoid the complex stages of preprocessing, feature extraction, and feature selection; CNNs are therefore suggested as efficient classifiers for biometric identification. Conventionally, input to a CNN can be in image or matrix form. The objective of this paper is to explore arrangements of EEG data as CNN input and to investigate which arrangement is most suitable for EEG-based identification performance. The EEG datasets used in this paper are resting state eyes open (REO) and resting state eyes closed (REC) EEG. Six types of data arrangement are compared: a matrix of amplitude versus time, a matrix of energy versus time, a matrix of amplitude versus time for rearranged channels, an image of amplitude versus time, an image of energy versus time, and an image of amplitude versus time for rearranged channels. It was found that the matrix of amplitude versus time for rearranged channels, using the combination of REC and REO, performed best for biometric identification, achieving validation and test accuracies of 83.21% and 79.08%, respectively.
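    A small sketch of two of the six arrangements compared (amplitude versus time, and energy versus time), with channel rearrangement as an index permutation; the window length and array shapes are assumptions.

```python
import numpy as np

def amplitude_matrix(eeg, channel_order=None):
    """eeg: (channels, samples) array; optionally reorder the channels."""
    return eeg if channel_order is None else eeg[channel_order]

def energy_matrix(eeg, win=32):
    """Short-window signal energy per channel versus time."""
    n_win = eeg.shape[1] // win
    segs = eeg[:, :n_win * win].reshape(eeg.shape[0], n_win, win)
    return (segs ** 2).sum(axis=2)
```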
  19. Dutta AK, Mageswari RU, Gayathri A, Dallfin Bruxella JM, Ishak MK, Mostafa SM, et al.
    Comput Intell Neurosci, 2022;2022:7776319.
    PMID: 35694571 DOI: 10.1155/2022/7776319
    Biomedical engineering applies the principles and problem-solving methods of engineering to biology and medicine. Malaria is a life-threatening illness that has gained significant attention among researchers. Since the manual diagnosis of malaria in a clinical setting is tedious, automated tools based on computational intelligence (CI) have gained considerable interest. Though earlier studies focused on handcrafted features, diagnostic accuracy can be boosted through deep learning (DL) methods. This study introduces a new Barnacles Mating Optimizer with Deep Transfer Learning Enabled Biomedical Malaria Parasite Detection and Classification (BMODTL-BMPC) model. The presented BMODTL-BMPC model involves the design of intelligent models for the recognition and classification of malaria parasites. Initially, the Gaussian filtering (GF) approach is employed to eradicate noise in blood smear images. Then, the graph cuts (GC) segmentation technique is applied to determine the affected regions in the blood smear images. Moreover, the barnacles mating optimizer (BMO) algorithm with the NasNetLarge model is employed for the feature extraction process. Furthermore, an extreme learning machine (ELM) classification model is employed for the identification and classification of malaria parasites. To validate the enhanced outcomes of the BMODTL-BMPC technique, a wide-ranging experimental analysis is performed using a benchmark dataset. The experimental results show that the BMODTL-BMPC technique outperforms other recent approaches.
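    For the final stage, a minimal extreme learning machine (ELM) sketch with random input weights and a closed-form output layer; the NasNetLarge features and BMO-tuned parameters are assumed given, and the hidden-layer size is arbitrary.

```python
import numpy as np

def elm_train(X, Y, n_hidden=200, rng=np.random.default_rng(0)):
    """X: features (n x d); Y: one-hot labels (n x classes)."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta = np.linalg.pinv(H) @ Y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta              # argmax over columns = class
```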
  20. Zhang H, Feng Y, Wang L
    Comput Intell Neurosci, 2022;2022:3948221.
    PMID: 35909867 DOI: 10.1155/2022/3948221
    With the rapid development of image and video technology and of the tourism economy, tourism economic data are gradually becoming big data, and how to schedule such data has become a hot topic. This paper first summarizes the research results on image and video, cloud computing, the tourism economy, and data scheduling algorithms. Secondly, the origin, structure, development, and service types of cloud computing are described in detail. To solve the tourism economic data scheduling problem, this paper treats completion time and cross-node transmission delay as the scheduling constraints and establishes a constraint model for data scheduling. The fitness function of an artificial immune algorithm is improved on the basis of this constraint model, and high-quality antibodies are directionally recombined using gene recombination so as to approach the optimal solution more reliably. When the resource node scale is 100, the response time of EDSA is 107.92 seconds.
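    A thin, hedged sketch of two elements the abstract names: a constraint-aware fitness over completion time and cross-node delay (weights hypothetical), and one-point gene recombination of high-fitness antibodies.

```python
import random

def fitness(schedule, completion_time, transfer_delay, w1=0.6, w2=0.4):
    """Higher fitness for lower weighted completion time and delay."""
    return 1.0 / (w1 * completion_time(schedule) + w2 * transfer_delay(schedule))

def recombine(parent_a, parent_b, rng=random.Random(0)):
    """One-point gene recombination of two high-fitness antibodies."""
    cut = rng.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]
```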