Image segmentation is a fundamental and essential step in image processing because it strongly influences subsequent image analysis. Multilevel thresholding is one of the most popular image segmentation techniques, and many researchers have used meta-heuristic optimization algorithms (MAs) to determine the threshold values. However, MAs have some defects; for example, they are prone to stagnation in local optima and slow convergence. This paper proposes an enhanced slime mould algorithm (ESMA) for global optimization and multilevel thresholding image segmentation. First, the Levy flight method is used to improve the exploration ability of SMA. Second, quasi-opposition-based learning is introduced to enhance the exploitation ability and balance exploration and exploitation. The superiority of the proposed ESMA is then confirmed on 23 benchmark functions. Afterward, ESMA is applied to multilevel thresholding image segmentation using minimum cross-entropy as the fitness function. We select eight greyscale images as benchmarks and compare ESMA with other classical and state-of-the-art algorithms. The experimental metrics include the average fitness (mean), standard deviation (Std), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), and the Wilcoxon rank-sum test, which are utilized to evaluate segmentation quality. Experimental results demonstrate that ESMA is superior to the other algorithms and provides higher segmentation accuracy.
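The two operators named above are standard building blocks in the metaheuristics literature. A minimal illustrative sketch, not the paper's exact formulation (the parameter names and the Mantegna-style Levy step are common conventions, assumed here):

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm: draw a heavy-tailed Levy-distributed step length,
    # used to occasionally make long exploratory jumps.
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def quasi_opposite(x, lb, ub):
    # Quasi-opposition-based learning: sample a point between the domain
    # centre and the opposite point (lb + ub - x), giving a candidate that
    # probes the "other side" of the search space.
    centre = (lb + ub) / 2
    opposite = lb + ub - x
    return random.uniform(min(centre, opposite), max(centre, opposite))
```

In a typical enhanced-SMA loop, `levy_step` scales the position update during exploration, while `quasi_opposite` generates competing candidates that are kept only if they improve fitness.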
In this paper, the Red Deer algorithm (RDA), a recent population-based meta-heuristic algorithm, is thoroughly reviewed. The RDA combines the survival-of-the-fittest principle from evolutionary algorithms with the productivity and richness of heuristic search techniques. Different variants and hybrids of this algorithm are presented and investigated, and all the applications that have been solved with it are surveyed. Because it is crucial to analyze the performance of this algorithm, the paper sheds light on the algorithm's unique features and weaknesses, covering the applications that are primarily suitable for it. Conclusions are presented, and further recommendations are suggested based on the review and analysis. Readers of this paper will gain an understanding of the RDA and its variants and, consequently, can decide how suitable this algorithm is for their own business, research, or industrial applications.
The prairie dog optimization (PDO) algorithm is a metaheuristic optimization algorithm that simulates the daily behavior of prairie dogs. Prairie dog groups have a unique mode of information exchange: they divide into several small groups to search for food based on special signals and build burrows around the food sources, and when encountering natural enemies they emit different sound signals to warn their companions of the danger. Based on this unique information-exchange mode, we propose a randomized audio signal factor to simulate the specific sounds prairie dogs make when encountering different foods or natural enemies. This strategy more faithfully models the prairie dog habitat and improves the algorithm's search ability. In the initial stage of the algorithm, chaotic tent mapping is added to initialize the prairie dog population and increase population diversity, and a lens opposition-based learning strategy is employed to enhance the algorithm's global exploration ability. To verify the optimization performance of the modified prairie dog optimization algorithm, we applied it to 23 benchmark test functions, the IEEE CEC2014 test functions, and six engineering design problems. The experimental results illustrate that the modified prairie dog optimization algorithm has good optimization performance.
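Both initialization strategies mentioned above have widely used textbook forms. A minimal sketch under those common conventions (the tent-map parameter `a` and the lens scale factor `k` are illustrative choices, not taken from the paper):

```python
import random

def tent_map_population(n, dim, lb, ub, a=0.499):
    # Chaotic tent map x -> x/a (x < a) or (1-x)/(1-a) (x >= a):
    # iterates spread over (0, 1) more evenly than independent uniform draws,
    # which is the usual motivation for chaotic initialization.
    pop = []
    x = random.random()
    for _ in range(n):
        individual = []
        for _ in range(dim):
            x = x / a if x < a else (1 - x) / (1 - a)
            individual.append(lb + x * (ub - lb))  # scale into [lb, ub]
        pop.append(individual)
    return pop

def lens_opposition(x, lb, ub, k=2.0):
    # Lens-imaging opposition-based learning: reflect x through the domain
    # centre with scale factor k (k = 1 reduces to standard opposition).
    return (lb + ub) / 2 + (lb + ub) / (2 * k) - x / k
```

A candidate and its lens-opposite are typically both evaluated, keeping whichever has the better fitness.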
The arithmetic optimization algorithm (AOA) is a newly proposed meta-heuristic method inspired by the arithmetic operators in mathematics. However, the AOA suffers from insufficient exploration capability and is likely to fall into local optima. To improve the search quality of the original AOA, this paper presents an improved AOA (IAOA) integrated with a proposed forced switching mechanism (FSM). The enhanced algorithm uses the random math optimizer probability (RMOP) to increase population diversity for better global search, and the forced switching mechanism is then introduced into the AOA to help the search agents jump out of local optima. When the search agents cannot find better positions within a certain number of iterations, the proposed FSM makes them conduct exploratory behavior; thus, entrapment in local optima can be avoided effectively. The proposed IAOA is extensively tested on twenty-three classical benchmark functions and ten CEC2020 test functions and compared with the AOA and other well-known optimization algorithms. The experimental results show that the proposed algorithm is superior to the comparative algorithms on most of the test functions. Furthermore, the test results on two training problems of multi-layer perceptrons (MLPs) and three classical engineering design problems also indicate that the proposed IAOA is highly effective when dealing with real-world problems.
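The forced-switching idea, counting stagnation iterations per agent and forcing exploration once a limit is reached, can be shown in a generic, algorithm-agnostic sketch. This is not the paper's IAOA; the exploitation move and all parameter names here are illustrative assumptions:

```python
import random

def search_with_fsm(objective, lb, ub, dim=2, n_agents=10,
                    iters=200, stall_limit=15):
    # Generic sketch of a forced switching mechanism (FSM): an agent that
    # fails to improve for `stall_limit` iterations is forced to explore
    # (here, simply re-sampled uniformly in the search space).
    agents = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    best = [objective(a) for a in agents]
    stall = [0] * n_agents
    gbest = min(agents, key=objective)
    for _ in range(iters):
        for i, a in enumerate(agents):
            # exploitation: a small move toward the global best
            cand = [x + random.uniform(0, 1) * (g - x) for x, g in zip(a, gbest)]
            cand = [min(max(x, lb), ub) for x in cand]  # clip to bounds
            if objective(cand) < best[i]:
                agents[i], best[i], stall[i] = cand, objective(cand), 0
            else:
                stall[i] += 1
            if stall[i] >= stall_limit:
                # forced switch to exploration: random restart for this agent
                agents[i] = [random.uniform(lb, ub) for _ in range(dim)]
                best[i] = objective(agents[i])
                stall[i] = 0
        gbest = min(agents + [gbest], key=objective)
    return gbest
```

On a convex test function such as the sphere, the per-agent counters keep the population from idling once exploitation saturates.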
Recently, the concept of the Internet of Things and its services has emerged alongside cloud computing. Cloud computing is a modern technology for dealing with big data to perform specified operations, and it raises the problem of selecting and placing data replicas across nodes in fog computing. Previous studies focused on original swarm intelligence and mathematical models; we therefore propose a novel hybrid method based on two modern metaheuristic algorithms. This paper combines the Aquila Optimizer (AO) algorithm with elephant herding optimization (EHO) for solving dynamic data replication problems in the fog computing environment. In the proposed method, we present a set of objectives that determine data transmission paths, choose the least-cost path, reduce network bottlenecks and bandwidth consumption, balance loads, and speed up data transfer rates between nodes in cloud computing. The hybrid method, AOEHO, addresses the optimal and least expensive path, determines the best replication via cloud computing, and selects optimal nodes for placing data replicas near users. Moreover, we developed a multi-objective optimization based on the proposed AOEHO to decrease the bandwidth and enhance load balancing and cloud throughput. The proposed method is evaluated on data replication using seven criteria: data replication access, distance, costs, availability, SBER, popularity, and the Floyd algorithm. The experimental results show the superiority of the proposed AOEHO strategy over other algorithms in terms of bandwidth, distance, load balancing, data transmission, and least-cost path.
Instead of the cloud, Internet of Things (IoT) activities are offloaded to fog computing to boost the quality of service (QoS) needed by many applications. However, the availability of continuous computing resources on fog computing servers is one of the restrictions for IoT applications, since transmitting the large amount of data generated by IoT devices would create network traffic and increase computational overhead. Therefore, task scheduling is the main problem that needs to be solved efficiently. This study proposes an energy-aware model using an enhanced arithmetic optimization algorithm (AOA) method called AOAM, which addresses fog computing's job scheduling problem to maximize users' QoS by minimizing the makespan measure. In the proposed AOAM, we enhance the conventional AOA's search capability using the marine predators algorithm (MPA) search operators to address the diversity of the used solutions and the local optimum problem. The proposed AOAM is validated using several parameters, including various clients, data centers, hosts, virtual machines, and tasks, and standard evaluation measures, including energy and makespan. The obtained results are compared with other state-of-the-art methods and show that AOAM is promising and solves task scheduling effectively compared with the other methods.
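The makespan objective used above has a simple standard definition: the completion time of the most heavily loaded machine under a given task-to-machine assignment. A minimal sketch (the representation, a list mapping each task index to a machine index, is an illustrative assumption):

```python
def makespan(assignment, task_times, n_machines):
    # assignment[t] = machine index that task t runs on;
    # makespan = completion time of the busiest machine (to be minimized).
    loads = [0.0] * n_machines
    for task, machine in enumerate(assignment):
        loads[machine] += task_times[task]
    return max(loads)
```

A scheduler such as the AOAM described above would treat `assignment` as the decision vector and search for the assignment minimizing this value (possibly jointly with an energy term).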
Agriculture plays a pivotal role in the economic development of a nation, but agricultural growth is badly affected by many factors, one of which is plant disease. Early-stage prediction of these diseases is crucial for global food security and can even be a game changer in farmers' lives. Recently, the adoption of modern technologies such as the Internet of Things (IoT) and deep learning has opened the way to intelligent machines that predict plant diseases before they become deep-rooted in farmlands. However, precise prediction of plant diseases is a complex job due to the presence of noise, changes in intensities, the close resemblance between healthy and diseased plants, and the varying dimensions of plant leaves. To tackle this problem, highly accurate and intelligently tuned deep learning algorithms are needed. In this research article, a novel ensemble of Swin transformers and residual convolutional networks is proposed. Swin transformers (ST) are hierarchical structures with linearly scalable computing complexity that offer performance and flexibility at various scales. To extract the best deep key-point features, the Swin transformers and residual networks are combined, followed by feed-forward networks for better prediction. Extended experimentation is conducted using the Plant Village Kaggle dataset, and performance metrics including accuracy, precision, recall, specificity, and F1-score are evaluated and analysed. Existing architectures, including FCN-8s, CED-Net, SegNet, DeepLabv3, Dense nets, and Central nets, are used to demonstrate the superiority of the suggested model. The experimental results show that, in terms of accuracy, precision, recall, and F1-score, the introduced model performs better than the other state-of-the-art hybrid learning models.
Deep Convolutional Neural Networks (DCNNs) have shown remarkable success in image classification tasks, but optimizing their hyperparameters can be challenging due to their complex structure. This paper develops the Adaptive Habitat Biogeography-Based Optimizer (AHBBO) for tuning the hyperparameters of DCNNs in image classification tasks. In complicated optimization problems, the BBO suffers from premature convergence and insufficient exploration. In this regard, an adaptive habitat is presented as a solution to these problems, permitting variable habitat sizes and regulated mutation. This modification's increased exploration and population diversity yield better optimization performance and a greater chance of finding high-quality solutions across a wide range of problem domains. AHBBO is tested on 53 benchmark optimization functions and demonstrates its effectiveness in improving initial stochastic solutions and converging faster to the optimum. Furthermore, DCNN-AHBBO is compared to 23 well-known image classifiers on nine challenging image classification problems and shows superior performance, reducing the error rate by up to 5.14%. The proposed algorithm outperforms 13 benchmark classifiers in 87 out of 95 evaluations, providing a high-performance and reliable solution for optimizing DCNNs in image classification tasks. This research contributes to the field of deep learning by proposing a new optimization algorithm that can improve the efficiency of deep neural networks in image classification.
Digital image processing has witnessed a significant transformation owing to the adoption of deep learning (DL) algorithms, which have proven to be vastly superior to conventional methods for crop detection. These DL algorithms have recently found successful applications across various domains, translating input data, such as images of afflicted plants, into valuable insights, like the identification of specific crop diseases. This innovation has spurred the development of cutting-edge techniques for early detection and diagnosis of crop diseases, leveraging tools such as convolutional neural networks (CNN), K-nearest neighbour (KNN), support vector machines (SVM), and artificial neural networks (ANN). This paper offers an all-encompassing exploration of the contemporary literature on methods for diagnosing, categorizing, and gauging the severity of crop diseases. The review examines the performance analysis of the latest machine learning (ML) and DL techniques outlined in these studies. It also scrutinizes the methodologies and datasets and outlines the prevalent recommendations and identified gaps within different research investigations. In conclusion, the review offers insights into potential solutions and outlines the direction for future research in this field. The review underscores that while most studies have concentrated on traditional ML algorithms and CNNs, there has been a noticeable dearth of focus on emerging DL algorithms such as capsule neural networks and vision transformers. Furthermore, it sheds light on the fact that several datasets employed for training and evaluating DL models have been tailored to suit specific crop types, emphasizing the pressing need for a comprehensive and expansive image dataset encompassing a wider array of crop varieties. Moreover, the survey draws attention to the prevailing trend whereby most research endeavours have concentrated on individual plant diseases or on single ML or DL algorithms.
In light of this, it advocates for the development of a unified framework that harnesses an ensemble of ML and DL algorithms to address the complexities of multiple plant diseases effectively.
The moth-flame optimization (MFO) algorithm, inspired by the transverse orientation of moths toward a light source, is an effective approach for solving global optimization problems. However, the MFO algorithm suffers from issues such as premature convergence, low population diversity, local optima entrapment, and imbalance between exploration and exploitation. In this study, therefore, an improved moth-flame optimization (I-MFO) algorithm is proposed to cope with the canonical MFO's issues by locating moths trapped in local optima via a memory defined for each moth. The trapped moths tend to escape from local optima by taking advantage of the adapted wandering around search (AWAS) strategy. The efficiency of the proposed I-MFO is evaluated on the CEC 2018 benchmark functions and compared against other well-known metaheuristic algorithms. Moreover, the obtained results are statistically analyzed by the Friedman test on 30, 50, and 100 dimensions. Finally, the ability of the I-MFO algorithm to find the best optimal solutions for mechanical engineering problems is evaluated on three problems from the latest test suite, CEC 2020. The experimental and statistical results demonstrate that the proposed I-MFO is significantly superior to the contender algorithms and successfully remedies the shortcomings of the canonical MFO.
The forecasting and prediction of crude oil are necessary to enable governments to compile their economic plans. Artificial neural networks (ANNs) have been widely used in different forecasting and prediction applications, including in the oil industry. The dendritic neural regression (DNR) model is an ANN that has shown promising performance in time-series prediction. The DNR can deal with the nonlinear characteristics of historical data in time-series forecasting applications. However, it faces certain limitations in training and configuring its parameters. To this end, we utilized the power of metaheuristic (MH) optimization algorithms to boost the training process and optimize its parameters. A comprehensive evaluation is presented in this study with six MH optimization algorithms used for this purpose: the whale optimization algorithm (WOA), particle swarm optimization (PSO), the genetic algorithm (GA), the sine-cosine algorithm (SCA), differential evolution (DE), and harmony search (HS). We used oil-production datasets of historical records of crude oil production from seven real-world oilfields (the Tahe oilfields in China), provided by a local partner. Extensive evaluation experiments were carried out using several performance measures to study the validity of the DNR with MH optimization methods in time-series applications. The findings of this study confirm the applicability of MH methods with the DNR: applying them improved the performance of the original DNR. We also conclude that PSO and WOA achieved the best performance compared with the other methods.
Differential evolution (DE) is one of the highly acknowledged population-based optimization algorithms due to its simplicity, user-friendliness, resilience, and capacity to solve problems. DE has grown steadily since its beginnings due to its ability to solve various issues in academics and industry. Different mutation techniques and parameter choices influence DE's exploration and exploitation capabilities, motivating academics to continue working on DE. This survey aims to depict DE's recent developments concerning parameter adaptations, parameter settings and mutation strategies, hybridizations, and multi-objective variants in the last twelve years. It also summarizes the problems solved in image processing by DE and its variants.
Sonar sound datasets are of significant importance in the domains of underwater surveillance and marine research, as they enable experts to discern intricate patterns within the depths of the water. Nevertheless, classifying sonar sound datasets continues to pose significant challenges. In this study, we present a novel approach aimed at enhancing the precision and efficacy of sonar sound dataset classification. Deep long short-term memory (DLSTM) and convolutional neural network (CNN) models are integrated to capitalize on their respective advantages, while distinctive feature engineering techniques are used to achieve the most favorable outcomes. Although DLSTM networks have demonstrated effectiveness in sequence classification tasks, attaining their optimal performance necessitates careful adjustment of hyperparameters. While traditional methods such as grid and random search are effective, they frequently suffer from computational inefficiency. This study investigates the unexplored capabilities of the fuzzy slime mould optimizer (FUZ-SMO) for LSTM hyperparameter tuning, with the objective of addressing the existing research gap in this area. Drawing inspiration from the adaptive behavior exhibited by slime moulds, the FUZ-SMO offers a novel approach to optimization. The amalgamated model, which combines CNN, LSTM, fuzzy logic, and SMO, exhibits a notable improvement in classification accuracy, outperforming conventional LSTM architectures by a margin of 2.142%. The model not only reaches convergence milestones faster but also displays significant resilience against overfitting.
The use of mesenchymal stem cells (MSCs) in cartilage regeneration has gained significant attention in regenerative medicine. This paper reviews the molecular mechanisms underlying MSC-based cartilage regeneration and explores various therapeutic strategies to enhance the efficacy of MSCs in this context. MSCs exhibit multipotent capabilities and can differentiate into various cell lineages under specific microenvironmental cues. Chondrogenic differentiation, a complex process involving signaling pathways, transcription factors, and growth factors, plays a pivotal role in the successful regeneration of cartilage tissue. The chondrogenic differentiation of MSCs is tightly regulated by growth factors and signaling pathways such as TGF-β, BMP, Wnt/β-catenin, RhoA/ROCK, NOTCH, and IHH (Indian hedgehog). Understanding the intricate balance between these pathways is crucial for directing lineage-specific differentiation and preventing undesirable chondrocyte hypertrophy. Additionally, paracrine effects of MSCs, mediated by the secretion of bioactive factors, contribute significantly to immunomodulation, recruitment of endogenous stem cells, and maintenance of chondrocyte phenotype. Pre-treatment strategies utilized to potentiate MSCs, such as hypoxic conditions, low-intensity ultrasound, kartogenin treatment, and gene editing, are also discussed for their potential to enhance MSC survival, differentiation, and paracrine effects. In conclusion, this paper provides a comprehensive overview of the molecular mechanisms involved in MSC-based cartilage regeneration and outlines promising therapeutic strategies. The insights presented contribute to the ongoing efforts in optimizing MSC-based therapies for effective cartilage repair.
Large-scale solar energy production still faces great obstruction due to the unpredictability of solar power. The intermittent, chaotic, and random quality of the solar energy supply has to be dealt with by comprehensive solar forecasting technologies. Beyond long-term forecasting, it becomes much more essential to make short-term forecasts minutes or even seconds ahead, because key factors such as sudden cloud movement, instantaneous deviation of ambient temperature, increased relative humidity, uncertainty in wind velocities, haziness, and rain cause undesired up and down ramping rates, thereby affecting solar power generation to a great extent. This paper presents an extended solar forecasting algorithm using artificial neural networks (ANNs). A three-layered system is suggested, consisting of an input layer, a hidden layer, and an output layer, using feed-forward propagation in conjunction with back-propagation. A prior 5-minute output forecast is fed to the input layer to reduce the error and obtain a more precise forecast. Weather remains the most vital input for this type of ANN modeling. Forecasting errors may grow considerably, affecting the solar power supply, due to variations in solar irradiation and temperature on any forecasting day. Prior approximation of solar radiation carries a degree of uncertainty depending on climatic conditions such as temperature, shading conditions, soiling effects, and relative humidity. All these environmental factors introduce uncertainty into the prediction of the output parameter. In such a case, approximating the PV output directly can be more suitable than using direct solar radiation. This paper applies Gradient Descent (GD) and Levenberg-Marquardt artificial neural network (LM-ANN) techniques to data obtained and recorded at millisecond intervals from a 100 W solar panel.
The essential purpose of this paper is to establish the most suitable time horizon for the output forecast of small solar power utilities. It has been observed that a 5 ms to 12 h time horizon gives the best short- to medium-term prediction for April. A case study has been carried out in the Peer Panjal region. Data collected over four months with various parameters were applied randomly as input data to the GD and LM types of artificial neural network and compared with actual solar energy data. The proposed ANN-based algorithm has been used for reliable short-term forecasting. The model output is reported in terms of root mean square error and mean absolute percentage error. The results exhibit improved agreement between the forecasted and real models. Forecasting solar energy and load variations assists in fulfilling cost-effectiveness goals.
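The two error measures reported above have standard definitions, and a minimal sketch makes the reporting unambiguous (function and variable names here are illustrative):

```python
import math

def rmse(actual, forecast):
    # Root mean square error: penalizes large deviations quadratically.
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    # Mean absolute percentage error, in percent.
    # Note: undefined when any actual value is zero.
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

Because MAPE divides by the actual value, near-zero power readings (e.g., at dawn or dusk) can inflate it, which is worth keeping in mind when comparing forecasting models on solar data.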
This paper proposes a modified version of the Dwarf Mongoose Optimization algorithm (IDMO) for constrained engineering design problems. This optimization technique modifies the base algorithm (DMO) in three simple but effective ways. First, the alpha selection in IDMO differs from that in DMO, where evaluating the probability value of each fitness is merely computational overhead and contributes nothing to the quality of the alpha or the other group members. The fittest dwarf mongoose is selected as the alpha, and a new operator ω is introduced to control the alpha's movement, thereby enhancing the exploration and exploitation abilities of the IDMO. Second, the scout group movements are modified by randomization to introduce diversity into the search process and explore unvisited areas. Finally, the babysitter exchange criterion is modified: once the criterion is met, the exchanged babysitters interact with the dwarf mongooses replacing them to gain information about food sources and sleeping mounds, which could result in better-fitted mongooses instead of initializing them afresh as done in DMO; the counter is then reset to zero. The proposed IDMO was used to solve the classical and CEC 2020 benchmark functions and 12 continuous/discrete engineering optimization problems. The performance of the IDMO, using different performance metrics and statistical analysis, is compared with the DMO and eight other existing algorithms. In most cases, the results show that solutions achieved by the IDMO are better than those obtained by the existing algorithms.
This paper introduces a comprehensive survey of a new population-based algorithm, the so-called gradient-based optimizer (GBO), and analyzes its major features. The GBO is considered one of the most effective optimization algorithms and has been successfully applied to various problems and domains. This review presents a body of related work on GBO, organized into GBO variants and GBO applications, and evaluates the efficiency of GBO compared with other metaheuristic algorithms. Finally, the conclusions focus on the existing work on GBO, discuss its disadvantages, and propose future work. This review will be helpful for researchers and practitioners of GBO from a wide range of audiences in the domains of optimization, engineering, medicine, data mining, and clustering, and it also covers research on health, the environment, and public safety. It will likewise aid interested readers by providing them with potential future research directions.
Customer churn remains a critical challenge in telecommunications, necessitating effective churn prediction (CP) methodologies. This paper introduces the Enhanced Gradient Boosting Model (EGBM), which uses a Support Vector Machine with a Radial Basis Function kernel (SVMRBF) as a base learner and exponential loss function to enhance the learning process of the GBM. The novel base learner significantly improves the initial classification performance of the traditional GBM and achieves enhanced performance in CP-EGBM after multiple boosting stages by utilizing state-of-the-art decision tree learners. Further, a modified version of Particle Swarm Optimization (PSO) using the consumption operator of the Artificial Ecosystem Optimization (AEO) method to prevent premature convergence of the PSO in the local optima is developed to tune the hyper-parameters of the CP-EGBM effectively. Seven open-source CP datasets are used to evaluate the performance of the developed CP-EGBM model using several quantitative evaluation metrics. The results showed that the CP-EGBM is significantly better than GBM and SVM models. Results are statistically validated using the Friedman ranking test. The proposed CP-EGBM is also compared with recently reported models in the literature. Comparative analysis with state-of-the-art models showcases CP-EGBM's promising improvements, making it a robust and effective solution for churn prediction in the telecommunications industry.
In this study, we tackle the challenge of optimizing the design of a Brushless Direct Current (BLDC) motor. Utilizing an established analytical model, we introduced the Multi-Objective Generalized Normal Distribution Optimization (MOGNDO) method, a biomimetic approach based on Pareto optimality, dominance, and external archiving. We initially tested MOGNDO on standard multi-objective benchmark functions, where it showed strong performance. When applied to the BLDC motor design with the objectives of either maximizing operational efficiency or minimizing motor mass, the MOGNDO algorithm consistently outperformed other techniques like Ant Lion Optimizer (ALO), Ion Motion Optimization (IMO), and Sine Cosine Algorithm (SCA). Specifically, MOGNDO yielded the most optimal values across efficiency and mass metrics, providing practical solutions for real-world BLDC motor design. The MOGNDO source code is available at: https://github.com/kanak02/MOGNDO.
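The Pareto dominance and external archiving concepts underpinning MOGNDO have standard definitions; a minimal sketch of both, assuming minimization of every objective (not taken from the MOGNDO source code):

```python
def dominates(a, b):
    # Pareto dominance for minimization: a dominates b if it is no worse
    # in every objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    # External archiving: keep only mutually non-dominated objective vectors.
    if any(dominates(member, candidate) for member in archive):
        return archive  # candidate is dominated; archive unchanged
    # otherwise drop members the candidate dominates and admit it
    return [m for m in archive if not dominates(candidate, m)] + [candidate]
```

For a BLDC design problem posed as maximizing efficiency and minimizing mass, efficiency would be negated so that both objectives are minimized before applying these checks.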