In the original Particle Swarm Optimization (PSO) algorithm, several parameters must be tuned for the algorithm to converge quickly to the correct result. The original velocity update equation was:

$v \leftarrow \omega v + c_1 r_1 (p_{\text{best}} - x) + c_2 r_2 (g_{\text{best}} - x)$

where $x$ is the particle's current position and $r_1$, $r_2$ are independent uniform random numbers drawn from $[0, 1]$ at each update.

Here, the inertia weight $\omega$ and the acceleration coefficients $c_1$ and $c_2$ need to be tuned. It was soon realized that parameter values that work well when the algorithm begins running, as it searches for something close to the optimal solution, slow it down later when it is trying to home in on the optimum, and vice versa.
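As a minimal sketch, the velocity update above can be written as follows. The default values for $\omega$, $c_1$, and $c_2$ here are illustrative choices, not values prescribed by the text, and the function name is hypothetical:

```python
import numpy as np

def update_velocity(v, x, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity update for a single particle.

    v, x, p_best, g_best are equal-length position/velocity vectors.
    w, c1, c2 are illustrative defaults, not values from the text.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(len(x))  # fresh uniform random numbers each update
    r2 = rng.random(len(x))
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
```

Note that two independent random vectors are drawn per update, so the pulls toward the personal best and the global best are randomized separately.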

Several attempts were made to remedy this situation by adapting the parameters as the algorithm progresses. Early attempts used the iteration count as an indicator of how to adjust the parameters. The problem with this is that some functions lead to quick convergence while others converge slowly, so the schedule would need to be tuned for each function. Instead, we can let the particles themselves tell us the state of convergence and how to adjust the parameters. In this vein, let $d_i$ be the average distance from the $i$-th particle to the rest of the swarm, and let

$f = \frac{d_g - d_{min}}{d_{max}-d_{min}}$

where $d_g$ is $d_i$ for the global best particle, $d_{min}$ is the smallest of the $d_i$, and $d_{max}$ is the largest of the $d_i$. If $f$ is less than 0.2, the system is in a state of convergence, with the particles searching for minor improvements in the region around the global best. If $f$ is between 0.2 and 0.5, the system is in a state of local convergence, with particles searching around their personal bests for improvement. If $f$ is between 0.5 and 0.8, the system is in a state of exploration, with the particles looking for regions in which optima may be located. If $f$ is greater than 0.8, the system is in a state of escape: the swarm is being pulled out of a local minimum by a particle that happened to find a better value at a distance away from the swarm.

From a determination of which state we are in, we can choose how to adjust the parameters in order to speed up convergence and maximize the likelihood of finding the correct answer. For example, when we detect that we are in a state of escape, we can increase $c_2$ and decrease $c_1$ so that the swarm is pulled out of the local optimum more quickly.
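The escape-state adjustment can be sketched as below. Only the directions of the changes (decrease $c_1$, increase $c_2$) come from the text; the step size `delta` and the function name are illustrative assumptions:

```python
def adjust_for_escape(c1, c2, delta=0.05):
    """Escape-state rule from the text: decrease c1 (weaken the pull toward
    stale personal bests) and increase c2 (strengthen the pull toward the
    newly found global best). delta is an illustrative step size."""
    return c1 - delta, c2 + delta
```

Analogous rules can be written for the other three states; in each case the idea is to bias the velocity update toward the behavior the detected state calls for.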