
Some of the most interesting and challenging problems in mathematics and engineering are optimization problems. Generally, these can be stated as: find the number, vector, or matrix x that gives the maximum value of a known function ƒ(x). There are some relatively simple functions ƒ for which these problems are easily solved. For other functions, we have essentially one method: evaluate the function at many points and hope that we find the best one. This method has many variants, all of which involve choosing the evaluation points in a smart way. We shall examine one particular method that grew out of the study of social behavior.

In 1995, J. Kennedy and R. Eberhart built upon earlier investigations that suggested that members of a group can benefit not only from their individual memories, but also from the collective memory of the entire group. This is the underlying concept behind Particle Swarm Optimization. We begin with several randomly selected particles at points of the form x = <x1, x2, …, xn>, and each has a randomly selected velocity v = <v1, v2, …, vn>. Each particle has three competing forces acting on it: inertia, its own memory, and the group memory. This can be expressed as

vi ≔ vi + rand*2*(PersonalBest xi – current xi) + Rand*2*(GlobalBest xi – current xi)

Note that both rand and Rand represent randomly selected numbers in the range [0, 1], drawn anew at every update. This adds some variability, which allows the particles to move into unexplored areas. After its velocity is updated, each particle moves to its new position according to xi ≔ xi + vi.

The first term in the equation represents inertia, i.e., the tendency of the particle to maintain its current velocity; this prevents the latter two terms from exerting too much influence early on. The second term represents personal memory: the particle is drawn back toward the best position it has ever occupied. The third term represents group memory: the particle is drawn toward the best position that any particle in the group has ever occupied. The algorithm relies on the fact that, at first, the particles move around erratically, exploring large areas. Later, the particles swarm around the best option found so far, allowing the area around the current optimum to be searched more thoroughly.
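As a concrete illustration, here is a minimal sketch of the algorithm in Python. The function name particle_swarm_maximize and the parameters n_particles, n_iters, and bounds are illustrative choices of ours, not part of the original description; the inner update is exactly the velocity equation above, followed by the position update xi ≔ xi + vi.

    import random

    def particle_swarm_maximize(f, dim, n_particles=30, n_iters=200, bounds=(-5.0, 5.0)):
        lo, hi = bounds
        # Random starting positions and velocities for every particle.
        x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        v = [[random.uniform(lo - hi, hi - lo) for _ in range(dim)] for _ in range(n_particles)]

        # Personal memory: the best position (and value) each particle has seen.
        pbest = [xi[:] for xi in x]
        pbest_val = [f(xi) for xi in x]

        # Group memory: the best position any particle has seen.
        g = pbest_val.index(max(pbest_val))
        gbest, gbest_val = pbest[g][:], pbest_val[g]

        for _ in range(n_iters):
            for i in range(n_particles):
                for d in range(dim):
                    # inertia + personal memory + group memory, per the equation above
                    v[i][d] += (random.random() * 2 * (pbest[i][d] - x[i][d])
                                + random.random() * 2 * (gbest[d] - x[i][d]))
                    x[i][d] += v[i][d]  # move the particle
                val = f(x[i])
                if val > pbest_val[i]:          # new personal best
                    pbest[i], pbest_val[i] = x[i][:], val
                    if val > gbest_val:         # new group best
                        gbest, gbest_val = x[i][:], val
        return gbest, gbest_val

For example, particle_swarm_maximize(lambda p: -(p[0]**2 + p[1]**2), dim=2) should converge near the origin, the maximizer of that function. Note that this bare sketch omits the velocity clamping (a maximum speed Vmax) that practical implementations typically add to keep the swarm from diverging.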

Particle Swarm Optimization shows much promise for the future. It is already a fast algorithm, comparable in speed to some optimization techniques and far faster than many others. Additionally, it is easier to code and requires much less storage space than many other optimization algorithms. It is believed that the method can be made faster still by fine-tuning its parameters. The current trend in research is to replace the two instances of 2 in the defining equation with other weights that are dynamically adjusted in various ways so as to speed up convergence.
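One widely studied example of this trend is the inertia-weight formulation introduced by Y. Shi and R. Eberhart in 1998, which generalizes the update to vi ≔ w*vi + c1*rand*(PersonalBest xi – current xi) + c2*Rand*(GlobalBest xi – current xi). The helper below is an illustrative sketch of how the velocity update in the earlier example might be rewritten; the name weighted_velocity and the default values of w, c1, and c2 are our own choices, reflecting settings commonly reported in the literature rather than prescriptions.

    import random

    def weighted_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        # w damps the inertia term; c1 and c2 replace the fixed factors
        # of 2 on the personal-memory and group-memory terms.
        return (w * v
                + c1 * random.random() * (pbest - x)
                + c2 * random.random() * (gbest - x))

Decreasing w gradually over the course of a run is one common schedule: a large w early on encourages the erratic exploration described above, while a small w late in the run encourages the thorough local search around the current optimum.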

To learn more about Particle Swarm Optimization, check out the following links: