Abstract
The Spider Wasp Optimization (SWO) algorithm is a swarm intelligence technique that emulates the unique hunting, nesting, and mating behaviors of female spider wasps. Designed to address optimization challenges, it offers several advantages, including rapid search speed and high solution accuracy. However, when tackling complex optimization problems, it can become trapped in local optima, converge slowly in the early stages, and require manual adjustment of the “Trade-off Rate” (TR) parameter for different problems. To improve the performance and versatility of the SWO algorithm, a Multi-strategy Improved Spider Wasp Optimizer (MISWO) is proposed. Firstly, the Grey Wolf Algorithm is integrated into the initialization phase to enhance early convergence and improve the fitness of the initial population, thereby boosting the algorithm’s global optimization capabilities. Secondly, an adaptive step size operator and Gaussian mutation are introduced during the search phase to automatically adjust the search range at different optimization stages, increasing both the optimization accuracy and the algorithm’s ability to avoid local optima. The trade-off rate (TR) is selected dynamically to better accommodate a variety of problems. Finally, a dynamic lens imaging reverse learning strategy is employed to update optimal individuals, further improving the algorithm’s capacity to escape local optima. To validate the effectiveness of MISWO, it was tested on 23 benchmark functions and 7 engineering optimization problems and compared with several state-of-the-art algorithms. Experimental results show that MISWO outperforms the other algorithms in terms of optimization capability, stability, and adaptability across diverse problems.
Introduction
An optimization problem involves determining the best or a sufficiently good approximate solution from numerous possible options or parameter combinations, all under specific constraints. The core objective is to find the solution that maximizes or minimizes an objective function over a set of feasible solutions. In engineering applications, as the complexity and scale of problems continue to grow, traditional optimization methods, such as linear programming, steepest descent, conjugate gradient, and penalty function methods, have become inadequate for complex problems or situations that require high-precision solutions. Traditional optimization algorithms often struggle with problems characterized by nonlinearity, multimodality, high dimensionality, and non-convexity; they tend to fall into local optima, and their outcomes depend heavily on initial values. To overcome these challenges, many researchers have turned to biological, natural, and social phenomena for inspiration. By analyzing the key characteristics of these phenomena, such as cooperation, information sharing, and specialized behaviors, scholars have developed metaheuristic algorithms that simulate these dynamics, offering more robust solutions for complex optimization tasks. Metaheuristic algorithms have demonstrated significant potential in solving real-world problems and have been widely applied across various domains, including industry1,2, military3,4, transportation and logistics5,6,7,8, and engineering9,10,11,12.
Metaheuristic optimization algorithms can be divided into four main categories: evolutionary algorithms, swarm intelligence algorithms, human-based algorithms, and physics- and chemistry-based algorithms. Evolutionary algorithms simulate the natural law of survival of the fittest to drive overall population progress, finding the optimal solution to a problem by mimicking mechanisms of biological evolution such as inheritance, crossover, and mutation. The core idea of swarm intelligence algorithms is to simulate the collective intelligence of biological populations in order to find the global optimal solution: the algorithm’s population is treated as a living biological population in which individuals collaborate to accomplish tasks that are difficult to achieve through individual effort alone. Human-based algorithms draw inspiration from behavioral patterns in human daily life, simulating behaviors such as teaching, socializing, learning, emotions, and management. Physics- and chemistry-based algorithms derive from physical laws and chemical reactions in nature, such as lightning, water waves, sound, gravity, and chemical reactions. In recent years, metaheuristic algorithms have been applied and developed in multiple fields, and among the four categories, swarm intelligence algorithms that simulate biological populations have received the most attention and research from scholars. According to the No Free Lunch (NFL) theorem, for the set of all possible optimization problems, no algorithm is superior to all others on every problem. The design of optimization algorithms therefore needs to be adjusted for specific problems; there is no universal algorithm that performs best in all situations.
Constrained optimization problems in engineering design have become a key application area for intelligent optimization algorithms. These problems involve finding the optimal design solutions while satisfying multiple constraints related to technical, economic, and other factors. Yan et al.13 proposed a hybrid genetic algorithm that enhances the algorithm’s ability to escape local optima. Through validation on three practical engineering problems, the study demonstrated the algorithm’s strong convergence accuracy and robustness. Gong et al.14 introduced a multi-objective differential evolution algorithm for addressing engineering design problems and utilized it to solve four engineering design optimization problems. The results demonstrate the efficiency and applicability of the algorithm in multi-objective design optimization. Akay et al.15 proposed an artificial bee colony algorithm for large-scale problems and applied it to optimize five constrained engineering problems, showcasing its superiority in handling large-scale problems. Li et al.16 proposed a moth flame algorithm based on the Levy-flight strategy to increase population diversity, enabling the algorithm to effectively escape local optima. The algorithm’s effectiveness was demonstrated in solving two constrained engineering design problems. Sayed et al.17 proposed a hybrid algorithm based on Moth Flame Optimization (MFO) and Simulated Annealing (SA), which was applied to four well-known constrained engineering problems. Experimental results demonstrate the superiority of the proposed algorithm. Furthermore, Chen et al.18 introduced Levy flight and chaotic local search into the Whale Optimization Algorithm, enhancing convergence speed and solution quality for three traditional engineering optimization problems. Rizk-Allah19 presented a multi-orthogonal search strategy combining the sine-cosine algorithm with orthogonal search, improving convergence speed and stability.
The algorithm was applied to four engineering design problems to study its performance. Yi et al.20 proposed a parallel chaotic local search-enhanced Harmony Search Algorithm for engineering design optimization and applied it to constrained engineering design problems, proving its effectiveness. El-Shorb et al.21 combined the genetic algorithm with the firefly algorithm to prevent falling into local optima while balancing exploration and exploitation. The algorithm’s superiority was verified by solving seven engineering design problems. Pathak et al.22 introduced an upgraded Bat Algorithm with Cuckoo Search and Sugeno inertia weights to address the limitations of the Bat Algorithm in solving multi-dimensional engineering problems, such as entrapment in local minima and slow convergence. The proposed algorithm demonstrated higher optimization accuracy and faster convergence when solving seven well-known constrained engineering design problems.
Spider Wasp Optimization (SWO), introduced by Mohamed Abdel Basset et al. in 2023, is inspired by the hunting, nesting, and mating behaviors of female spider wasps in nature. The algorithm offers advantages such as fast search speed and high solution accuracy, and with several distinct update strategies it is well-suited to a wide range of optimization problems requiring different levels of exploration and exploitation. Despite these strengths, SWO has limitations, including low convergence accuracy, slow convergence speed, and a tendency to become trapped in local optima. To overcome these shortcomings, this work focuses on improving the algorithm’s adaptability and optimization capability, so that the enhanced SWO can be applied to real-world problems across various fields.
To better suit practical engineering applications and achieve faster convergence to optimal solutions, a Multi-strategy Improved Spider Wasp Optimization is proposed. To address the drawbacks of the original algorithm, such as slow convergence speed and low accuracy in the early stages, the Grey Wolf Algorithm23 is introduced during the initialization phase. This enhancement improves the convergence rate and fitness of the initial population, optimizing the starting state and providing a more advantageous beginning for the entire optimization process, thereby boosting the algorithm’s global optimization capability.
Additionally, to overcome the inefficiency caused by randomly selecting the search range in the original algorithm, an adaptive step size operator24 is incorporated into the search phase. This operator automatically adjusts the search range of the spider wasps at different optimization stages, enhancing the efficiency and effectiveness of the search process. Simultaneously, Gaussian mutation is introduced to improve the algorithm’s optimization accuracy and its ability to escape local optima. In the original algorithm, the trade-off rate (TR) parameter is set manually, and slightly different TR values can produce significantly different optimization outcomes. To address this issue, the trade-off rate is selected dynamically, enabling a more flexible trade-off strategy throughout the algorithm’s execution. Finally, a dynamic lens imaging reverse learning strategy is introduced to adjust the lens imaging parameters in response to changes in the reverse solution, providing a more accurate target position for the algorithm. This approach reduces the algorithm’s search blindness and significantly enhances optimization accuracy. The proposed algorithm is compared with several of the latest and most frequently cited optimization algorithms. Experimental evaluations on 23 complex benchmark test functions and 7 engineering optimization problems show that it outperforms the other algorithms in terms of optimization capability, stability, and applicability to various problems, demonstrating excellent performance across a wide range of application fields and effectiveness in handling complex and diverse problem scenarios.
The main contributions of this paper can be summarized as follows:
1. A Multi-strategy Improved Spider Wasp Optimization algorithm is proposed, enhancing its applicability and optimization capability through the integration of wolf pack initialization, an adaptive step size operator, a dynamic trade-off rate, and a dynamic lens imaging reverse learning strategy.
2. The MISWO algorithm demonstrated superior performance compared to a selection of advanced optimization algorithms across 23 benchmark test functions. Statistical experimental results validate its effectiveness and superiority.
3. The algorithm successfully solved 7 engineering problems of varying complexity in real-life scenarios, confirming its practicality. Comparative analysis using data further illustrates the algorithm’s advancement over selected algorithms.
In summary, this article is organized as follows. “Multi-strategy improved spider wasp optimization” integrates multiple strategies into Spider Wasp Optimization to develop the Multi-strategy Improved Spider Wasp Optimization (MISWO) algorithm and provides a detailed flow of the improved algorithm. “Simulation experiment and analysis” compares the performance of the MISWO algorithm with various advanced algorithms across 23 benchmark test functions and presents the test results. “Engineering optimization problems and analysis” presents seven engineering problems of varying complexity and solves them with the MISWO algorithm; it also includes experimental results from simulations of paired unmanned aerial vehicles navigating disaster-affected mountain maps of different complexities. “Conclusion” summarizes the findings of the article and proposes directions for future improvement.
Multi-strategy improved spider wasp optimization
This section describes how the Grey Wolf Algorithm is used to enhance the quality of the initial population, introduces dynamic inertia weights during the search, chase, and escape stages to dynamically adjust the trade-off rate (TR), and presents the dynamic lens imaging reverse learning strategy and its mechanism.
Initialization of the grey wolf algorithm population
Grey wolves, belonging to the Canidae family, are apex predators at the top of the food chain. They typically live in packs of 5 to 12 individuals with a highly structured social hierarchy. In 2014, Mirjalili et al. introduced a new swarm intelligence algorithm inspired by the behavioral characteristics of grey wolves: Grey Wolf Optimization (GWO). GWO mimics the predation behavior of grey wolf packs and achieves optimization through mechanisms of pack cooperation. GWO categorizes wolf packs into four levels by priority, \(\alpha\), \(\beta\), \(\delta\), and \(\omega\), to carry out a series of activities including searching for, surrounding, and attacking prey. The \(\alpha\) wolf is the leader of the pack, mainly responsible for decisions about hunting, sleeping time and place, food allocation, and other group matters. The \(\beta\) wolf mainly assists the \(\alpha\) wolf in making decisions, relays the \(\alpha\) wolf’s commands to other members, and reports their execution status back to the \(\alpha\) wolf, playing a bridging role. When the \(\alpha\) position in the pack becomes vacant, the \(\beta\) wolf takes it over. The \(\delta\) wolf follows the commands of the \(\alpha\) and \(\beta\) wolves and is mainly responsible for reconnaissance, surveillance, and care; \(\alpha\) and \(\beta\) wolves with poor fitness are also downgraded to \(\delta\) wolves. \(\omega\) wolves are primarily responsible for maintaining the balance of relationships within the pack. After the pack identifies prey, it surrounds the prey according to the following formula:
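In the standard GWO formulation of Mirjalili et al., the encircling behavior is modeled as:

```latex
\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right|, \qquad
\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}
```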
Let t represent the iteration number, \(X_p(t)\) denote the position of the prey, and \(X(t)\) and \(X(t+1)\) represent the positions of the grey wolf before and after the update, respectively. D indicates the distance between the prey and the grey wolf, and A and C are coefficients used to balance the algorithm’s exploration and exploitation capabilities. The formula is as follows:
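Following the standard GWO definitions, the coefficient vectors are computed from a convergence factor a that decreases linearly from 2 to 0:

```latex
\vec{A} = 2a \cdot \vec{r}_1 - a, \qquad
\vec{C} = 2 \cdot \vec{r}_2, \qquad
a = 2\left(1 - \frac{t}{t_{max}}\right)
```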
\(r_1\) and \(r_2\) are random numbers in [0,1], t represents the iteration number, and \(t_{max}\) denotes the maximum number of iterations. The wolf pack adjusts its positions under the leadership of the \(\alpha\), \(\beta\), and \(\delta\) wolves to attack the surrounded prey, as shown in Fig. 1:
In each iteration, the best values obtained are assigned to \(\alpha\), \(\beta\), and \(\delta\). Ordinary wolves adjust their positions based on the positions of the leading wolves to complete the surrounding and hunting behaviors. The formula is as follows:
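In standard GWO, this leader-guided position update is written as:

```latex
D_\alpha = \left| C_1 \cdot X_\alpha - X \right|, \quad
D_\beta  = \left| C_2 \cdot X_\beta  - X \right|, \quad
D_\delta = \left| C_3 \cdot X_\delta - X \right|
```

```latex
X_1 = X_\alpha - A_1 \cdot D_\alpha, \quad
X_2 = X_\beta  - A_2 \cdot D_\beta, \quad
X_3 = X_\delta - A_3 \cdot D_\delta, \qquad
X(t+1) = \frac{X_1 + X_2 + X_3}{3}
```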
\(D_\alpha\), \(D_\beta\), and \(D_\delta\) respectively represent the distances between the \(\alpha\), \(\beta\), and \(\delta\) wolves and the other wolves, while \(X_1\), \(X_2\), and \(X_3\) denote the movement changes of the other wolves under the influence of the \(\alpha\), \(\beta\), and \(\delta\) wolves.
In the initialization phase of SWO, each spider wasp represents a solution. A spider wasp individual with d dimensions can be represented as follows:
A population of size N is randomly generated within the search space [L, U] as follows:
In the search space, arbitrary solutions are randomly generated using the following formula.
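A uniform random initialization consistent with this description is:

```latex
\overrightarrow{SW}_i^{\,t} = \vec{L} + \vec{r} \circ \left( \vec{U} - \vec{L} \right), \qquad i = 1, 2, \ldots, N
```

where \(\circ\) denotes the element-wise product.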
Where t represents the generation index, \(i(i=1,2,3, \ldots , N)\) represents the population index, and \(\textbf{r}\) is a vector of d dimensions containing numbers between 0 and 1.
Although the strong population diversity of the Spider Wasp Optimization (SWO) algorithm helps it search for the optimal solution, it often compromises convergence performance. To address this limitation, a hybrid strategy leveraging the strong convergence capability of the Grey Wolf Optimization (GWO) algorithm is proposed. This strategy initializes the spider wasp population using the GWO algorithm, which results in a more concentrated initial population distribution and provides clearer optimization directions, thus accelerating the convergence of the algorithm. After a random population is generated using Equation (27), it is updated through the GWO algorithm to obtain a new spider wasp population.
\(\overrightarrow{S W_{\textrm{i}}}(i=1,2,3, \ldots , N)\) represents the randomly generated initial population, and \(F_l\) denotes applying the Grey Wolf Algorithm for l iterations. This process yields the initial population of spider wasps.
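As an illustrative sketch of this initialization step, the following Python code refines a random population with a few GWO iterations before it is handed to SWO. The function names, the sphere objective, and all parameter values here are assumptions for demonstration, not the authors' exact configuration.

```python
import numpy as np

def gwo_refine(pop, fitness_fn, lb, ub, n_iters):
    """Refine a random population with a few Grey Wolf Optimizer iterations.

    pop: (N, d) array of candidate solutions; returns the updated population.
    """
    N, d = pop.shape
    for t in range(n_iters):
        fit = np.apply_along_axis(fitness_fn, 1, pop)
        order = np.argsort(fit)
        alpha, beta, delta = pop[order[0]], pop[order[1]], pop[order[2]]
        a = 2 * (1 - t / n_iters)                # convergence factor: 2 -> 0
        for i in range(N):
            X = np.zeros(d)
            for leader in (alpha, beta, delta):
                A = 2 * a * np.random.rand(d) - a
                C = 2 * np.random.rand(d)
                D = np.abs(C * leader - pop[i])
                X += leader - A * D              # move toward each leader
            pop[i] = np.clip(X / 3, lb, ub)      # average of the three guides
    return pop

# Usage: seed SWO with a GWO-refined population (sphere function assumed)
rng = np.random.default_rng(0)
lb, ub, N, d = -100.0, 100.0, 30, 10
pop = lb + rng.random((N, d)) * (ub - lb)
pop = gwo_refine(pop, lambda x: np.sum(x**2), lb, ub, n_iters=20)
```

The refined population stays inside the search bounds and clusters around promising regions, which is the "more concentrated initial population distribution" described above.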
Adaptive step size operator and dynamic trade-off rate
In the hunting behavior of the Spider Wasp Optimization (SWO) algorithm, two distinct step sizes are randomly employed to explore the solution space: a large-step search to maintain the population’s global search capability, and a small-step search to focus on exploring the vicinity of known solutions. Female wasps persistently explore the search space with a constant large step size in search of suitable spiders for their offspring. This behavior can be mathematically formulated as follows:
Where a and b are two randomly selected indices from the population, and \(\mu _1\) controls the direction of movement of the spider wasp. The formula is given by:
\(r_1\) is a randomly generated number within the range [0,1], while rn represents a random number generated from a normal distribution. Additionally, a small step size is employed to explore the area surrounding the dropped spider. The formula is given by:
Where C is the index of an individual randomly selected from the population, and \(\mu _2\) is a random number generated by the following formula:
l is a randomly generated number between −2 and 1. B is a random number generated by the following formula:
The spider wasp optimization (SWO) algorithm employs two randomly selected search methods to update the population. However, both methods utilize a fixed step size during exploration, which constrains the algorithm’s ability to swiftly identify high-quality solutions in the early stages and impacts its convergence accuracy in the later stages. To overcome this limitation, an adaptive step size operator, denoted as h, is introduced during the search phase. This operator ensures adequate global search capability in the early stages while improving convergence speed in the later stages. The formula for the adaptive step size operator is as follows:
Where t represents the current iteration number, and \(t_{\max }\) denotes the maximum iteration number of the algorithm. As individuals gradually approach the optimal-solution region, the search space shrinks, which may cause the algorithm to become trapped in local optima. To mitigate this issue, Gaussian mutation is introduced. The Gaussian mutation strategy uses the Gaussian (normal) distribution to generate mutation vectors. This distribution has a higher probability of producing points near the mean, with the likelihood decreasing as points deviate from it. This property allows the algorithm to conduct a fine search in the vicinity of the current optimal solution, improving its local search capability; when the algorithm becomes trapped in a local optimum, Gaussian mutation can guide it to fine-tune the search within the local region and potentially escape. The formulas for incorporating Gaussian mutation into the adaptive step size operator and the search phase are as follows:
Where Gauss refers to a random number sampled from a Gaussian distribution. The adaptive step size operator dynamically adjusts step sizes according to the iteration process, effectively balancing exploration and exploitation throughout the algorithm. In SWO, mating behavior is employed to generate spider wasp offspring, which represent potential solutions for the current generation. The offspring generation formula is as follows:
\(\beta\) and \(\beta _1\) are two numbers generated randomly from a normal distribution, while \(\overrightarrow{v_1}\) and \(\overrightarrow{v_2}\) are generated using the following formulas.
Where a, b, and c are mutually distinct indices randomly selected from the population, each different from i. The hybridization process generates offspring that inherit characteristics from their parents. The trade-off rate (TR) governs the balance between hunting and mating behaviors. The corresponding formula is as follows:
Where F(H) or F(N) represents hunting or nesting behavior, and F(M) represents mating behavior. The selection of the trade-off rate significantly influences the optimization capability of SWO. In SWO, manually adjusting the trade-off rate multiple times with a constant value is inefficient and limits the effective utilization of the algorithm. To address this issue, a nonlinear adaptive operator is proposed that dynamically selects the trade-off rate as the iterations progress. The operator is described as follows:
The new trade-off rate gradually decreases from 1 to 0 as the iterations progress, while retaining a certain level of randomness.
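As an illustrative sketch, the adaptive step size, Gaussian mutation, and dynamic trade-off rate described above can be coded as follows. The specific decay forms (exponential and quadratic) and the mutation scale are assumptions for demonstration; the paper's own formulas may differ.

```python
import numpy as np

def adaptive_step(t, t_max):
    """Illustrative adaptive step-size operator: decays nonlinearly from 1
    toward 0, favoring broad exploration early and fine search late.
    (The exact decay form is an assumption, not the paper's formula.)"""
    return np.exp(-(t / t_max) ** 2)

def gaussian_mutate(x, best, sigma=0.1):
    """Gaussian mutation: perturb a solution with normally distributed noise
    scaled toward the current best, enabling a fine local search."""
    return x + sigma * np.random.standard_normal(x.shape) * (best - x)

def dynamic_tr(t, t_max):
    """Illustrative dynamic trade-off rate: a deterministic envelope decaying
    from 1 to 0, multiplied by a uniform random factor for stochasticity."""
    return (1.0 - (t / t_max) ** 2) * np.random.rand()

# Both the step size and the trade-off rate shrink as iterations progress
steps = [adaptive_step(t, 100) for t in (0, 50, 100)]
```

In the main loop, a draw such as `np.random.rand() < dynamic_tr(t, t_max)` would route an individual to hunting/nesting behavior rather than mating behavior.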
Dynamic lens imaging reverse learning strategy
Reverse learning computes the reverse solution from the current population’s positions, expanding the search space. Integrating this strategy with swarm intelligence algorithms can significantly enhance optimization performance and speed convergence to the optimal solution. However, in later iterations, reverse learning may become less effective at helping the algorithm escape local optima, reducing convergence accuracy, and the choice of its parameters is critical to overall performance. To address these challenges, a dynamic lens imaging reverse learning strategy is proposed, inspired by the principles of convex lens imaging. As illustrated in Fig. 2, within a two-dimensional coordinate system, the search range along the x-axis is defined as [a,b], and a convex lens is placed on the y-axis. Consider an object A with projection x on the x-axis and height h. Passing through the lens, A forms an image A* on the opposite side, whose projection on the x-axis is x* and whose height is h*. The formula for lens imaging reverse learning can be written as follows:
The above formula can be transformed into:
The parameter \(k=\frac{h}{h^*}\) is used to control the magnitude of the reflection. When \(k=1\), the formula becomes:
This is the formula for the reverse learning strategy.
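Collecting the relations above, the standard lens-imaging opposition-based learning derivation reads:

```latex
\frac{(a+b)/2 - x}{x^{*} - (a+b)/2} = \frac{h}{h^{*}} = k
\;\Longrightarrow\;
x^{*} = \frac{a+b}{2} + \frac{a+b}{2k} - \frac{x}{k},
\qquad
x^{*}\big|_{k=1} = a + b - x
```

That is, setting \(k=1\) recovers the classical opposite point \(x^{*} = a + b - x\) of basic reverse learning.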
In lens imaging reverse learning, dynamic variations of the reverse solution are achieved by flexibly adjusting the parameter k, significantly enhancing the algorithm’s optimization precision. When the parameter k takes a small value, the range of generated reverse solutions is larger; whereas when k takes a larger value, the generated reverse solutions are limited to a smaller range. The dynamic adjustment parameter k used in this paper is as follows:
By adjusting the value of k, the size of the solution space explored during reverse learning can be controlled. When k is small, the algorithm is more exploratory, searching for potential solutions across a wide solution space; when k is large, reverse learning is concentrated within a local range, allowing a more in-depth exploration of the details within that area. The pseudocode for the MISWO algorithm can be found in Algorithm 1, and the corresponding flowchart is shown in Fig. 3. To address the shortcomings of the original Spider Wasp Optimization (SWO), several improvements are introduced to enhance performance. The process begins with the initialization of the MISWO parameters. Unlike random initialization, the Grey Wolf Algorithm is used to set up the population, ensuring better initial conditions. The trade-off rate, determined by the dynamic adaptive operator, then selects between hunting-and-nesting behavior and mating behavior. When hunting-and-nesting behavior is selected, the search stage is managed by the adaptive step size operator and enhanced with Gaussian mutation; if mating behavior is chosen, the population is updated accordingly. After the population update, the lens imaging strategy maps the population to generate the reverse solution, which is selectively preserved using a greedy strategy. The fitness values of the population are calculated and stored in memory. To further enhance efficiency, the population size is reduced, speeding up the optimization process. The algorithm iterates until the stopping condition is met, at which point the optimal solution is output.
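The control flow just described can be sketched at a high level in Python. This outline is a simplified skeleton only: the hunt/nest move, the crossover, the step and trade-off schedules, and the k schedule are all placeholder assumptions, and population-size reduction is omitted; only the overall structure mirrors the description.

```python
import numpy as np

def miswo_outline(fitness_fn, lb, ub, N=30, d=10, t_max=200):
    """High-level sketch of the MISWO loop: dynamic trade-off rate chooses
    hunting/nesting vs. mating, greedy selection keeps improvements, and
    lens-imaging reverse learning refines the current best individual."""
    rng = np.random.default_rng(1)
    pop = lb + rng.random((N, d)) * (ub - lb)       # GWO refinement would go here
    fit = np.apply_along_axis(fitness_fn, 1, pop)
    for t in range(t_max):
        tr = (1 - (t / t_max) ** 2) * rng.random()  # dynamic trade-off rate
        step = np.exp(-(t / t_max) ** 2)            # adaptive step size
        for i in range(N):
            if rng.random() < tr:                   # hunting/nesting behavior
                a, b = rng.choice(N, 2, replace=False)
                cand = (pop[i] + step * (pop[a] - pop[b])
                        + 0.1 * rng.standard_normal(d))   # Gaussian perturbation
            else:                                   # mating behavior (crossover)
                mate = pop[rng.integers(N)]
                cand = np.where(rng.random(d) < 0.5, pop[i], mate)
            cand = np.clip(cand, lb, ub)
            f = fitness_fn(cand)
            if f < fit[i]:                          # greedy selection
                pop[i], fit[i] = cand, f
        # lens-imaging reverse learning on the current best individual
        best = np.argmin(fit)
        k = 1 + 9 * (t / t_max) ** 2                # assumed k schedule: 1 -> 10
        mirror = np.clip((lb + ub) / 2 + (lb + ub) / (2 * k) - pop[best] / k,
                         lb, ub)
        fm = fitness_fn(mirror)
        if fm < fit[best]:
            pop[best], fit[best] = mirror, fm
    return pop[np.argmin(fit)].copy(), float(fit.min())

best_x, best_f = miswo_outline(lambda x: np.sum(x**2), -100.0, 100.0)
```

Note how k grows over the run, so reverse learning starts exploratory and becomes increasingly local, matching the role of k discussed above.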
Simulation experiment and analysis
Algorithmic procedure of MISWO
To validate the overall performance of the Multi-strategy Improved Spider Wasp Optimization (MISWO) algorithm, this section evaluates its effectiveness on 23 fundamental benchmark functions commonly used in the literature25. These functions cover a range of dimensions, from low to high, and include both single-peak and multi-peak scenarios, as detailed in Table 1. The MISWO algorithm is compared with several state-of-the-art and widely cited optimization algorithms, including the Arithmetic Optimization Algorithm (AOA)26, Harris Hawk Optimization (HHO)27, the Whale Optimization Algorithm (WOA)28, Grey Wolf Optimization (GWO), the White Shark Optimizer (WSO)29, the Coronavirus Herd Immunity Optimizer (CHIO)30, Particle Swarm Optimization (PSO), and the original Spider Wasp Optimization (SWO) algorithm22. To ensure a fair and objective comparison, all algorithms are executed under the same software and hardware conditions, specifically a Windows 10 operating system and the MATLAB R2023a programming environment. The evaluation begins with high-dimensional single-peak functions (F1–F7), which have only one global solution and no local minima; these functions assess the exploitation capability of MISWO. For F1–F5, a population size of 30 and a maximum of 4000 iterations are used for each algorithm, which is fewer iterations than the original algorithm’s count of \(5 \times 10^4\). For F6–F23, the maximum iteration count remains at \(5 \times 10^4\). To reduce randomness interference, each algorithm is run independently 30 times. The results are shown in Table 2.
The data presented in Table 2 shows that MISWO consistently achieves the optimal solution for benchmark functions F1 to F4, with both its average and worst fitness values being the best among the compared algorithms. Moreover, MISWO converges to the optimal solution more quickly than the original algorithm, requiring fewer iterations. For F5, although MISWO’s performance does not surpass HHO, it still outperforms the original algorithm and most other optimization algorithms in terms of optimal, average, and worst fitness values. Additionally, MISWO delivers the best overall performance for F6 and F7, achieving the highest accuracy in average, optimal, and worst fitness values across these functions. Functions F8 to F13 are multimodal, containing multiple extrema, and are designed to test the algorithm’s ability to escape local minima and locate global optima. As seen in Table 2, HHO achieves the best, average, and worst fitness values for F8, but MISWO still outperforms the remaining algorithms. For F9 to F11, MISWO finds the optimal solution with both the worst and average values also being optimal. In F12, while MISWO does not find the optimal solution, its results surpass all algorithms except for PSO and WSO. For F13, MISWO achieves the optimal solution for the best, worst, and average values. Finally, F14 to F23 are fixed-dimensional multimodal functions used to evaluate the algorithm’s performance in fixed dimensions and further assess its adaptability. MISWO consistently secures the best, average, and worst fitness values for F14 to F20, demonstrating strong consistency. For F21, MISWO finds the optimal value, though its worst value is not optimal. However, in F22 and F23, MISWO continues to deliver optimal and consistent best, average, and worst fitness values.
Based on the results of the test functions, MISWO consistently demonstrates near-globally optimal performance across single-peak functions, multi-peak functions, and fixed-dimensional multi-peak functions. The outcomes from the single-peak functions reveal MISWO’s robustness and strong exploration capabilities. By enhancing the original algorithm’s exploration ability, MISWO also accelerates convergence, allowing it to locate the optimal solution more rapidly. The results from the multi-peak functions highlight MISWO’s capacity to escape local optima and achieve global optimization. When compared to other algorithms and the original version, MISWO consistently delivers superior optimal, average, and worst fitness values. Moreover, the fixed-dimensional multi-peak function results further validate MISWO’s exceptional adaptability, stability, and problem-solving capabilities, as it maintains optimal fitness values and consistency across various test functions.
Convergence curve analysis
Convergence curves visually depict an algorithm’s convergence speed, accuracy, and optimization capabilities, providing a straightforward comparison across different functions. These curves help assess the algorithm’s efficiency and its ability to search for optimal solutions within the solution space. Presented here are the convergence curves for the 23 test functions discussed earlier. The number of iterations and parameters used are consistent with those in the previous section. For better clarity, some individual graphs are magnified. In certain cases, curves may overlap or show substantial differences in results, making it difficult to display all curves distinctly.
The convergence curves of the F1–F23 test functions are illustrated in Fig. 4. A more intuitive analysis of these curves reveals several key insights: for the unimodal functions F1–F4, although most algorithms, aside from the original one, can achieve the optimal solution, MISWO consistently converges to this optimal solution at the fastest rate. Additionally, aside from F5 and F7 where HHO achieves the optimal value, and F6 where PSO finds the optimal solution, MISWO still demonstrates the highest convergence speed, securing suboptimal solutions.
In multimodal functions, MISWO shows superior optimization performance, particularly in F8, where its optimization ability surpasses that of other algorithms. For F9–F11, MISWO’s convergence speed and accuracy far exceed those of other algorithms. In F12–F13, MISWO achieves optimization results that are several orders of magnitude better than those of other algorithms, including the original. Moreover, compared to the original algorithm, MISWO demonstrates significant improvements in both convergence speed and optimization ability. For the fixed-dimensional multimodal functions, some graphs are enlarged for better analysis. In F14–F15, although MISWO performs slightly slower in terms of convergence speed, it still reliably finds the optimal solution with excellent precision. In F16, MISWO shows fast convergence while also obtaining the optimal value. For F17–F19, the differences among the algorithms are minimal. For F20–F23, MISWO’s convergence speed is only marginally slower than that of WOA. Overall, these observations validate the effectiveness of MISWO across various test functions, showcasing its superior convergence speed and optimization capability in most scenarios.
Analysis of CEC test function results
To further evaluate the performance of the Multi-strategy Improved Spider Wasp Optimizer (MISWO), this section employs the CEC2022 function test set for analysis. The CEC2022 test set consists of 12 single-objective test functions with boundary constraints, including a unimodal function (F1), multimodal functions (F2–F5), mixed functions (F6–F8), and combination functions (F9–F12). The testing dimensions are 2D, 10D, and 20D. MISWO and the algorithms discussed in the previous section were each run independently 30 times on the CEC2022 test set. Tables 3, 4, and 5 present the results in 2D, 10D, and 20D, respectively, including the optimal, average, and worst values for each algorithm.
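The best, average, and worst statistics in those tables summarize 30 independent runs per algorithm. A minimal sketch of that bookkeeping, with a hypothetical random-search stand-in (`run_once`) in place of a real optimizer:

```python
import random
import statistics

def run_once(obj, dim, seed, iters=500, bound=100.0):
    """One independent run; random search stands in for a real optimizer."""
    rng = random.Random(seed)
    return min(obj([rng.uniform(-bound, bound) for _ in range(dim)])
               for _ in range(iters))

def summarize(obj, dim=10, runs=30):
    """Best / mean / worst final fitness over independent runs,
    mirroring the three values reported per algorithm in the tables."""
    finals = [run_once(obj, dim, seed) for seed in range(runs)]
    return min(finals), statistics.mean(finals), max(finals)

best, mean, worst = summarize(lambda x: sum(v * v for v in x))
```

Seeding each run separately keeps the 30 runs independent and the reported statistics reproducible.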
In the case of the unimodal function (F1), MISWO achieved optimal values across all testing dimensions of 2D, 10D, and 20D, with the optimal, average, and worst values being identical. For multimodal functions (F2–F5), MISWO attained optimal values under 2D conditions, where the optimal, average, and worst values were also the same. In 10D, MISWO secured optimal values in F2, F3, and F5, again with consistent optimal, average, and worst values. For F4, MISWO outperformed other algorithms by achieving the optimal value, with both average and worst values also being optimal. In 20D, MISWO exhibited superior performance compared to other algorithms.
For the mixed functions (F6–F8), MISWO consistently outperformed the other algorithms. Regarding the combination functions (F9–F12), except in the 20D case where the results for F10 were slightly inferior to those of CHIO, MISWO achieved the best results across the remaining dimensions.
Engineering optimization problems and analysis
Engineering problems are often characterized by complexity, involving numerous variables and constraints, which makes them difficult to solve. To demonstrate the effectiveness of the MISWO algorithm, several engineering optimization problems are selected for evaluation. These include problems with fewer constraints and parameters, such as the tension/compression spring, pressure vessel, cantilever beam, and piston lever designs. Additionally, more complex challenges are examined, including speed reducer design, which involves multiple constraints, automobile side-impact design, and UAV trajectory planning. MISWO’s performance is compared to that of several other algorithms, including the original SWO, in terms of both optimization results and convergence behavior. This comparison aims to thoroughly investigate MISWO’s stability and optimization capability in addressing complex engineering optimization problems.
Tension/compression spring design
The tension/compression spring design31, as depicted in Fig. 5, aims to minimize the mass while satisfying constraints such as minimum deflection, shear stress, oscillation frequency, and outer diameter limits. It involves three design variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (P). The mathematical model is as follows:
The boundary constraints are as follows: \(0.05 \le d \le 2\), \(0.25 \le D \le 1.3\), \(2 \le P \le 15\). The optimization results for each algorithm are shown in Table 6. The results indicate that MISWO outperforms the other algorithms on the tension/compression spring design. When \(d=0.0517652\), \(D=0.3585528\), and \(P=11.1821947\), the minimum spring mass of 0.0126653 is achieved. Additionally, the convergence curves of MISWO and the other algorithms for this problem are shown in Fig. 6. The curves show that MISWO not only obtains the optimal value but also converges faster than the other algorithms.
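The objective and constraints of this benchmark, in their standard formulation from the literature, can be sketched in Python and checked against the tabulated MISWO solution:

```python
def spring_mass(d, D, P):
    """Objective: spring mass (P + 2) * D * d^2 (standard formulation)."""
    return (P + 2) * D * d * d

def spring_constraints(d, D, P):
    """The four g_i(x) <= 0 constraints in the standard formulation:
    minimum deflection, shear stress, surge frequency, outer diameter."""
    g1 = 1 - (D**3 * P) / (71785 * d**4)
    g2 = ((4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
          + 1 / (5108 * d**2) - 1)
    g3 = 1 - 140.45 * d / (D**2 * P)
    g4 = (D + d) / 1.5 - 1
    return (g1, g2, g3, g4)

# Reported MISWO solution from Table 6.
f = spring_mass(0.0517652, 0.3585528, 11.1821947)
```

At this solution the deflection and shear-stress constraints are essentially active, which is characteristic of the known optimum of this benchmark.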
Pressure vessel design
The pressure vessel design32, shown in Fig. 7, aims to minimize cost while meeting production requirements. It involves four design variables: shell thickness (\(T_s\)), head thickness (\(T_h\)), inner radius of the container (R), and the length of the container excluding the head (L). The mathematical model is as follows:
The boundary constraints are as follows: \(0.51 \le R \le 99.49\), \(0.51 \le L \le 99.49\), \(10 \le T_s \le 200\), \(10 \le T_h \le 200\). The optimization results for each algorithm are shown in Table 7. When \(T_s\)=13.4311960, \(T_h\)=6.8949393, R=42.0984455, and L=176.6365958, MISWO finds the minimum value of 6059.714335. The results indicate that MISWO outperforms the other algorithms in pressure vessel design optimization. Additionally, the convergence curves of MISWO and the other algorithms for pressure vessel design are shown in Fig. 8. It is evident from the convergence curves that MISWO achieves better convergence speed and accuracy than the other algorithms.
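A sketch of the standard cost model for this benchmark follows. The tabulated thickness values are on a different scale than the classical formulation, so the check below uses the widely cited thicknesses \(T_s=0.8125\) and \(T_h=0.4375\) from the literature together with the R and L reported in Table 7, which reproduces the commonly reported cost of about 6059.714:

```python
PI = 3.141592653589793

def vessel_cost(Ts, Th, R, L):
    """Total cost (material, forming, welding) in the standard formulation."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R * R
            + 3.1661 * Ts * Ts * L + 19.84 * Ts * Ts * R)

def vessel_constraints(Ts, Th, R, L):
    """Standard g_i(x) <= 0 constraints: minimum shell and head thickness,
    minimum enclosed volume, and maximum length."""
    g1 = -Ts + 0.0193 * R
    g2 = -Th + 0.00954 * R
    g3 = -PI * R * R * L - (4.0 / 3.0) * PI * R**3 + 1296000
    g4 = L - 240
    return (g1, g2, g3, g4)

# Widely cited solution (thicknesses from the literature, R and L from Table 7).
c = vessel_cost(0.8125, 0.4375, 42.0984455, 176.6365958)
```

The shell-thickness and volume constraints are active at this solution, so tiny rounding in the tabulated R and L leaves correspondingly tiny residuals in g1 and g3.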
Cantilever beam
The cantilever beam problem33, shown in Fig. 9, is a structural engineering design problem concerning the weight optimization of a cantilever beam with a square cross-section. One end of the beam is rigidly supported, and a vertical force acts at the free end. The beam consists of 5 hollow square blocks of constant thickness; the heights (or widths) of the blocks are the decision variables, and the thickness is fixed. The mathematical formulation of this problem is as follows:
The boundary constraints are as follows: \(0.01 \le h_i \le 100\) for \(i = 1, \ldots, 5\). As shown in Table 8, MISWO obtained the optimal solution for the cantilever beam design problem when \(h_1=6.0132964\), \(h_2=5.3091224\), \(h_3=4.4981706\), \(h_4=3.5019521\), and \(h_5=2.1511353\), resulting in an optimal value of 1.3399574. Additionally, the convergence curves of MISWO and the other algorithms for this problem are depicted in Fig. 10. The curves show that MISWO not only achieves the optimal value but also converges faster than the other algorithms.
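In the standard formulation of this benchmark the objective is linear in the section heights and a single nonlinear deflection constraint binds at the optimum. A sketch, checked against the Table 8 solution:

```python
def cantilever_weight(h):
    """Objective: beam weight 0.0624 * (h1 + ... + h5) (standard formulation)."""
    return 0.0624 * sum(h)

def cantilever_constraint(h):
    """Single deflection constraint in the standard formulation:
    61/h1^3 + 37/h2^3 + 19/h3^3 + 7/h4^3 + 1/h5^3 - 1 <= 0."""
    coeffs = (61, 37, 19, 7, 1)
    return sum(c / hi**3 for c, hi in zip(coeffs, h)) - 1

# Reported MISWO solution from Table 8.
h_star = (6.0132964, 5.3091224, 4.4981706, 3.5019521, 2.1511353)
```

Evaluating the constraint at `h_star` gives a value essentially equal to zero, confirming that the tabulated solution sits on the active deflection limit.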
Piston lever
As shown in Fig. 11, the main objective of the piston lever problem34 is to minimize the oil volume as the piston rod moves from \(0^{\circ }\) to \(45^{\circ }\). This is achieved by arranging the piston components H, B, D, and X appropriately. The mathematical model for the piston lever design problem is as follows:
The boundary constraints are as follows:
Table 9 displays the optimization results for the piston lever design problem, where MISWO obtained the best solution among the compared algorithms. Additionally, the convergence curve in Fig. 12 shows that MISWO achieves excellent convergence speed and accuracy.
Speed reducer design
The primary objective of the speed reducer design problem35 is to minimize its weight while satisfying 11 constraints. Seven variables are involved: the face width b, the gear module m, the number of teeth on the pinion z, the length of the first shaft between bearings \(l_1\), the length of the second shaft between bearings \(l_2\), the diameter of the first shaft \(d_1\), and the diameter of the second shaft \(d_2\). As shown in Fig. 13, solving for these variables yields the weight of the speed reducer.
The boundary constraints are as follows:
The mathematical model for the speed reducer design problem is as follows:
\(\begin{aligned} \min f(x)=&0.7854 b m^2\left( 3.3333 z^2+14.9334 z-43.0934\right) -1.508 b\left( d_1 ^2+d_2 ^2\right) +7.4777\left( d_1 ^3+d_2 ^3\right) \\ &+0.7854\left( l_1 d_1 ^2+l_2 d_2 ^2\right) \end{aligned}\)
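The objective can be transcribed directly, using the conventional coefficient 14.9334 for the linear term in z. The design used in the check below is a widely cited near-optimal solution for this benchmark from the literature, not a value from this paper:

```python
def reducer_weight(b, m, z, l1, l2, d1, d2):
    """Weight of the speed reducer per the standard objective:
    gear and shaft terms minus the bearing-bore correction."""
    return (0.7854 * b * m**2 * (3.3333 * z**2 + 14.9334 * z - 43.0934)
            - 1.508 * b * (d1**2 + d2**2)
            + 7.4777 * (d1**3 + d2**3)
            + 0.7854 * (l1 * d1**2 + l2 * d2**2))
```

At the commonly reported design (b=3.5, m=0.7, z=17, \(l_1\)=7.3, \(l_2\)≈7.715, \(d_1\)≈3.3505, \(d_2\)≈5.2867), the weight evaluates to roughly 2.99e3, consistent with values reported for this benchmark.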
The speed reducer design problem was solved using various algorithms, and the values of each parameter are presented in Table 10. According to the data in the table, the minimum value of the problem was obtained by MISWO, and the values of the core parameters were optimized. This effectively reduced the cost of engineering design. Additionally, from the convergence curves of the algorithms shown in Fig. 14, it can be observed that MISWO exhibits superior convergence speed and accuracy compared to other algorithms, leading to improved optimization results.
Car side impact design
In consideration of the collision force and stress on the car, the goal is to minimize the weight of the door subject to eleven design variables36: the thickness of the B-pillar inner panel \(x_1\), the B-pillar reinforcement \(x_2\), the floor inner panel thickness \(x_3\), the crossbeam \(x_4\), the door beam \(x_5\), the door beltline reinforcement \(x_6\), the roof longitudinal beam \(x_7\), the B-pillar inner material \(x_8\), the floor side inner material \(x_9\), the guardrail height \(x_{10}\), and the collision position \(x_{11}\). As shown in Fig. 15, the mathematical model for the car side-impact design problem is as follows:
The boundary constraints are as follows:
The optimization results for the car side collision design problem, obtained through various algorithms for 11 variables, are presented in Table 11. From the data in the table, it is evident that MISWO consistently provides the optimal solution compared to other algorithms when faced with multi-parameter engineering problems involving 11 variables. This demonstrates that MISWO exhibits excellent adaptability and robustness in solving multi-constraint, multi-variable engineering problems. An analysis of the convergence curves of MISWO and other algorithms, as shown in Fig. 16, reveals that MISWO converges faster towards the optimal solution in the early stages and achieves the optimal solution ahead of other algorithms as the number of iterations increases. This indicates that MISWO excels in both optimization capability and convergence speed.
UAV trajectory planning
An unmanned aerial vehicle (UAV) is an aircraft controlled by radio remote-control equipment and onboard program-control devices. It can operate remotely or autonomously, allowing ground personnel to pre-program, monitor, and control it. With the rapid development of UAV technology, the application scenarios and flight environments of UAVs are becoming increasingly diverse and complex. Particularly in harsh and uncertain conditions, UAV flights may face severe challenges from various obstacles and potential threats. Enhancing UAV autonomy and reducing human intervention are crucial goals, and UAV flight path planning is a key means of achieving them. UAV trajectory planning aims to navigate from a designated starting point to a target area within a specific mission terrain and environment, subject to constraints such as flight speed, altitude, range, and time, while avoiding restricted zones such as bad-weather areas, radar threat zones, and no-fly zones. When performing flight missions, UAVs navigate from a predetermined starting point through complex mountainous environments to reach designated task points. During the flight, they encounter various threats, including mountain collisions and weather-related hazards. Effective modeling of the environment and terrain is fundamental to improving path-planning efficiency. First, the basic reference terrain is modeled, as given by Eq. (36):
In the formula, \(Z_1(x, y)\) represents the height value corresponding to a point on the horizontal plane, where x and y are the coordinates of the point’s projection onto that plane. The parameters a, b, c, d, e, f, and g control the undulations of the reference terrain in the established map. The natural mountainous terrain encountered during flight missions is represented by Eq. (37):
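Equation (37) is not reproduced here; in the UAV path-planning literature, natural mountainous terrain is commonly modeled as a superposition of exponential (Gaussian-like) peaks. A sketch under that assumption, with all peak parameters hypothetical:

```python
import math

def peaks_terrain(x, y, peaks):
    """Common mountain model in UAV path-planning papers: each peak is
    (x0, y0, h, xs, ys), i.e. centre coordinates, height, and x/y slope
    parameters; the terrain height is the sum of the Gaussian-like peaks."""
    return sum(h * math.exp(-((x - x0) / xs) ** 2 - ((y - y0) / ys) ** 2)
               for x0, y0, h, xs, ys in peaks)
```

In practice this peak model is superimposed on the reference terrain \(Z_1(x, y)\), taking the larger of the two heights at each point.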
Flight altitude constraint:
L represents the lower limit of the flight altitude and H the upper limit. Drones are not allowed to intersect obstacles. The obstacle region is \(O_i\) (where i is the obstacle index), and (x, y, z) represents the position of the drone.
Drones are also subject to speed constraints during flight. \(v_{\max }\) represents the maximum speed, \(v_{\min }\) the minimum speed, and \(\dot{\textbf{p}}=(\dot{x}, \dot{y}, \dot{z})\) the velocity vector of the drone. The speed constraint is formulated as follows.
The flight time of a drone is constrained by the energy it carries, which can be approximated by limiting the total length or total duration of the flight path. Assuming the maximum flight time of the drone is \(T_{\max }\), the actual flight time is T, and \(v_{\text{avg}}\) is the average speed, the endurance constraint is obtained as follows.
The flight of a drone is also limited by its dynamic characteristics, such as the maximum turning angular velocity and maximum acceleration. These constraints typically involve more complex dynamic equations, such as:
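Taken together, the altitude, speed, endurance, and dynamics constraints amount to a per-sample feasibility test along a candidate path. A minimal sketch with illustrative bounds (none of the numeric limits are from the paper):

```python
import math

def feasible(p, v, a, *, alt=(50.0, 500.0), v_bounds=(5.0, 30.0),
             a_max=5.0, t=0.0, t_max=1800.0):
    """Check one trajectory sample against the constraints above:
    altitude L <= z <= H, speed v_min <= |v| <= v_max, flight time
    T <= T_max, and acceleration |a| <= a_max. All bounds illustrative."""
    speed = math.sqrt(sum(c * c for c in v))   # |p_dot|
    accel = math.sqrt(sum(c * c for c in a))   # |a|
    lo, hi = alt
    v_min, v_max = v_bounds
    return (lo <= p[2] <= hi
            and v_min <= speed <= v_max
            and t <= t_max
            and accel <= a_max)
```

A planner would apply such a check (plus obstacle tests against each region \(O_i\)) at every sampled waypoint, rejecting or penalizing infeasible candidate paths.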
\(\varvec{a}_{\max }\) is the maximum acceleration. The result of using MISWO to solve this problem is shown in Fig. 17, and the convergence curve is shown in Fig. 18.
In summary, the proposed MISWO algorithm demonstrates superior performance when tackling various real-world engineering problems of different complexities. Through comparisons with other algorithms and handling of different parameters, MISWO consistently outperforms the original SWO algorithm as well as several state-of-the-art and widely referenced optimization algorithms in terms of cost efficiency for solving practical engineering problems. This highlights MISWO’s excellent solving capability, convergence speed, stability, and practicality when addressing real-world optimization challenges (Figs. 19, 20).
Conclusion
This paper proposes a Multi-strategy Improved Spider Wasp Optimizer (MISWO) that employs hybrid strategies to enhance global optimization performance. In the initialization phase, the Grey Wolf Optimizer (GWO) is integrated to boost the algorithm’s convergence rate and improve the initial population’s fitness, resulting in better early-stage performance. During the search phase, an adaptive step size operator is employed to dynamically adjust the spider wasp’s search range, improving optimization precision and aiding in escaping local optima. Additionally, a dynamic trade-off rate (TR) is introduced to further optimize performance. A dynamic lens imaging reverse learning strategy is also used to generate reverse solutions, update optimal individuals, and prevent the algorithm from stagnating in local optima. A greedy strategy is applied to select the best individuals, enhancing the algorithm’s ability to escape local traps and accelerating convergence.
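The lens-imaging reverse solution is commonly written as \(x' = (lb+ub)/2 + (lb+ub)/(2k) - x/k\), which reduces to plain opposition-based learning \(x' = lb + ub - x\) when k = 1. A sketch under that common formulation (the paper’s exact dynamic rule for k may differ):

```python
def lens_opposition(x, lb, ub, k):
    """Lens-imaging reverse (opposition-based) solution, per dimension:
    x' = (lb + ub) / 2 + (lb + ub) / (2k) - x / k.
    With k = 1 this is standard opposition x' = lb + ub - x; a dynamic k
    (e.g. growing with the iteration count) pulls the reverse point
    toward the centre of the range as the run progresses."""
    return [(l + u) / 2 + (l + u) / (2 * k) - xi / k
            for xi, l, u in zip(x, lb, ub)]
```

Combined with greedy selection, the reverse point replaces the current best individual only when its fitness is better, which is how the strategy helps escape local optima without losing progress.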
The MISWO algorithm is tested on 23 complex benchmark functions with various characteristics, and its convergence behavior is analyzed. MISWO is then applied to seven complex engineering design problems, and its performance is compared against the original SWO and several state-of-the-art optimization algorithms. Experimental results demonstrate that MISWO possesses strong optimization capabilities and holds significant practical value for solving real-world challenges.
However, the current research has limitations. MISWO’s performance in large-scale and more complex engineering scenarios, especially multi-objective engineering optimization problems, requires further improvement. Future research will focus on refining MISWO, expanding its application to more diverse engineering fields, and conducting more realistic simulations.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Xiao, J. & Xue, Y. Optimization of airflow distribution in mine ventilation networks using the modified sooty tern optimization algorithm. Min. Metall. Explor. 41, 239–257 (2024).
Bouach, A. & Benmamar, S. Management of a water pumping schedule by an hgma optimization algorithm. Iran. J. Sci. Technol. Trans. Civ. Eng. 47, 4031–4043 (2023).
Luo, J., Yang, Y. & Wang, Z. Localization algorithm for underwater sensor network: A review. IEEE Internet Things J. 8, 13126–13144 (2021).
Liu, Y., Gan, X. & Sun, Z. Terminal area capacity assessment under military activities based on improved genetic algorithm (2020). In 2nd IEEE International Conference on Civil Aviation Safety and Information Technology (ICCASIT), New York. 341–344 (2020).
Du, P., He, X. & Cao, H. Ai-based energy-efficient path planning of multiple logistics uavs in intelligent transportation systems. Comput. Commun. 207, 46–55 (2023).
Cai, L. Decision-making of transportation vehicle routing based on particle swarm optimization algorithm in logistics distribution management. Cluster Comput.- J. Netw. Softw. Tools Appl. 26, 3707–3718 (2023).
Lu, Y. & Li, S. Green transportation model in logistics considering the carbon emissions costs based on improved grey wolf algorithm. Sustainability 15, 11090 (2023).
Kang, L. Research on marine port logistics transportation system based on ant colony algorithm. J. Coastal Res. 64–67 (2020).
Lu, Y. & Li, S. Comparison of three novel hybrid metaheuristic algorithms for structural optimization problems. Comput. Struct. 244, 106395 (2021).
Kashani, A. R., Camp, C. V., Akhani, M. & Ebrahimi, S. Optimum design of combined footings using swarm intelligence-based algorithms. Adv. Eng. Softw. 169, 103140 (2022).
Goodarzian, F., Ghasemi, P. & Kumar, V. A new modified social engineering optimizer algorithm for engineering applications. Soft Comput. 26, 4333–4361 (2022).
Barua, S. & Merabet, A. Levy arithmetic algorithm: An enhanced metaheuristic algorithm and its application to engineering optimization. Expert Syst. Appl. 241, 122335 (2024).
Yan, X., Liu, H., Zhu, Z. & Wu, Q. Hybrid genetic algorithm for engineering design problems. Cluster Comput. 20, 263–275 (2017).
Gong, W., Cai, Z. & Zhu, L. An efficient multiobjective differential evolution algorithm for engineering design. Struct. Multidiscipl. Optim. 38, 137–157 (2009).
Akay, B. & Karaboga, D. Artificial bee colony algorithm for large-scale problems and engineering design optimization. J. Intell. Manuf. 23, 1001–1014 (2012).
Sayed, G. & Hassanien, A. E. A hybrid sa-mfo algorithm for function optimization and engineering design problems. Complex Intell. Syst. 4, 195–212 (2018).
Chen, H., Xu, Y. & Wang, M. A balanced whale optimization algorithm for constrained engineering design problems. Struct. Multidiscipl. Optim. 71, 45–59 (2019).
Rizk-Allah, R. M. Hybridizing sine cosine algorithm with multi-orthogonal search strategy for engineering design problems. J. Comput. Des. Eng. 5, 249–273 (2018).
Yi, J., Li, X. & Chu, C.-H. Parallel chaotic local search enhanced harmony search algorithm for engineering design optimization. J. Intell. Manuf. 30, 405–428 (2019).
El-Shorbagy, M. A. & El-Refaey, A. M. A hybrid genetic-firefly algorithm for engineering design problems. J. Comput. Des. Eng. 9, 706–730 (2022).
Pathak, V. K. & Srivastava, A. K. A novel upgraded bat algorithm based on cuckoo search and sugeno inertia weight for large scale and constrained engineering design optimization problems. Eng. Comput. 38, 1731–1758 (2022).
Abdel-Basset, M., Mohamed, R., Jameelt, M. & Abouhawwash, M. Spider wasp optimizer: A novel meta-heuristic optimization algorithm. Artif. Intell. Rev. 56, 11675–11738 (2023).
Li, W. & Wang, G.-G. Elephant herding optimization using dynamic topology and biogeography-based optimization based on learning for numerical optimization. Eng. Comput. 38, 1585–1613 (2022).
Khishe, M. & Mosavi, M. R. Chimp optimization algorithm. Expert Syst. Appl. 149, 113338 (2020).
Heidari, A. A., Mirjalili, S., Faris, H. & Aljarah, I. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 97, 849–872 (2019).
Yildirim, E. & Karci, A. Application of three bar truss problem among engineering design optimization problems using artificial atom algorithm. In 2018 International Conference on Artificial Intelligence and Data Processing (IDAP) (2018).
Tzanetos, A. & Blondin, M. A qualitative systematic review of metaheuristics applied to tension/compression spring design problem: Current situation, recommendations, and research direction. Eng. Appl. Artif. Intell. 118, 105521 (2023).
Mirjalili, S. & Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016).
Braik, M., Hammouri, A. & Atwan, J. A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl. -Based Syst. 243, 108457 (2022).
Dalbah, L. M. et al. A modified coronavirus herd immunity optimizer for capacitated vehicle routing problem. J. King Saud Univ. Comput. Inf. Sci. 34, 4782–4795 (2022).
Gao, F., Liu, G., Wu, X. & Liao, W.-H. Optimization algorithm-based approach for modeling large deflection of cantilever beam subject to tip load. Mech. Mach. Theory 167, 104522 (2022).
Pathak, V. K. & Srivastava, A. K. A novel upgraded bat algorithm based on cuckoo search and sugeno inertia weight for large scale and constrained engineering design optimization problems. Mech. Mach. Theory 167, 104522 (2022).
Shabani, A. & Asgarian, B. Search and rescue optimization algorithm: A new optimization method for solving constrained engineering optimization problems. Expert Syst. Appl. 161, 113698 (2020).
Gandomi, A. H. & Alavi, A. H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 29, 17–35 (2013).
Yuan, R. & Li, H. An enhanced Monte Carlo simulation-based design and optimization method and its application in the speed reducer design. Adv. Mech. Eng. https://doi.org/10.1177/1687814017728648 (2017).
Sadeeq, H. T. & Abdulazeez, A. M. Car side impact design optimization problem using giant trevally optimizer. Structures 55, 39–45 (2023).
Author information
Authors and Affiliations
Contributions
J.S. and Z.T. wrote the main manuscript text and prepared all figures and tables. Z.W. was responsible for the data curation and software. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Sui, J., Tian, Z. & Wang, Z. Multiple strategies improved spider wasp optimization for engineering optimization problem solving. Sci Rep 14, 29048 (2024). https://doi.org/10.1038/s41598-024-78589-8