Solving complex problems with Particle Swarm Optimization
In nature we see flocks of birds creating interesting patterns in the sky, much like shoals of fish do in the water. But what if I told you that you can use this behaviour to train a neural network, or even to find the global optimum of a very complex function?
As the title suggests, we are going to use a particle swarm optimizer to achieve this behaviour in Python. To get a good understanding of how this meta-heuristic algorithm works, I have implemented the code myself in easy-to-follow steps. The code can be found on my GitHub.
The theory
Let's first start with a little bit of theory about the particle swarm optimization (PSO) algorithm. The algorithm makes use of a number of particles, each with its own position (X) and its own speed (V); at each iteration, a particle updates its speed and position on the fitness landscape. Next to its speed and position, each particle keeps track of its own best location (Pbest) and the global best location of the swarm (Gbest). How the speed and the position are updated can be seen in the formulas below.
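These are the standard PSO update rules, with w the inertia weight, c1 and c2 the acceleration coefficients, and r1 and r2 uniform random numbers drawn from [0, 1]:

$$V_i(t+1) = w\,V_i(t) + c_1 r_1 \big(Pbest_i - X_i(t)\big) + c_2 r_2 \big(Gbest - X_i(t)\big)$$

$$X_i(t+1) = X_i(t) + V_i(t+1)$$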
The update of the speed can be dissected into three parts: the first part is the particle's own speed, which is scaled by the inertia weight. The second part weighs the influence of Pbest, and finally Gbest is taken into account when computing the new speed.
The two acceleration coefficients determine the influence of Pbest and Gbest and are considered to be important hyperparameters of the PSO algorithm. Finally, the inertia weight is scaled based on the number of iterations the algorithm will run.
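A common schedule, and the one assumed in the code sketches below, is to let the inertia weight decay linearly from w_max to w_min over the T iterations:

$$w(t) = w_{max} - (w_{max} - w_{min})\,\frac{t}{T}$$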
The implementation
Now that we have briefly discussed the theory behind the PSO algorithm, we can delve into its implementation.
Lines 2 to 10 define the structure of the search together with the hyperparameters for the particle swarm. The algorithm generally runs very fast, so we can afford to give it quite a few particles.
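As a rough sketch of what this setup looks like (the names and default values here are my own choices, not necessarily those used in the repository):

```python
import numpy as np

class PSO:
    def __init__(self, fitness_fn, bounds, n_particles=50, n_iterations=200,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
        # Problem definition: fitness function and per-dimension (low, high) bounds.
        self.fitness_fn = fitness_fn
        self.bounds = np.asarray(bounds, dtype=float)  # shape: (n_dims, 2)
        self.n_dims = len(self.bounds)
        # Swarm hyperparameters.
        self.n_particles = n_particles
        self.n_iterations = n_iterations
        self.w_max, self.w_min = w_max, w_min  # inertia weight schedule
        self.c1, self.c2 = c1, c2              # cognitive / social coefficients
```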
On lines 35 to 49 we can see the initialization of the particles. We initialize the particles between the problem boundaries, which we defined when creating the PSO object; furthermore, we calculate the initial global optimum of the swarm.
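Continuing the sketch above, the initialization could look as follows: positions are drawn uniformly within the bounds, and the initial Gbest is simply the fittest particle of the swarm.

```python
    def _initialize(self):
        low, high = self.bounds[:, 0], self.bounds[:, 1]
        # Positions are drawn uniformly at random within the problem boundaries.
        self.X = np.random.uniform(low, high, (self.n_particles, self.n_dims))
        # Starting with zero speed is one common choice.
        self.V = np.zeros_like(self.X)
        # Each particle's personal best (Pbest) starts at its initial position.
        self.pbest = self.X.copy()
        self.pbest_fitness = np.apply_along_axis(self.fitness_fn, 1, self.X)
        # The initial global best (Gbest) is the fittest particle of the swarm.
        best = np.argmin(self.pbest_fitness)
        self.gbest = self.pbest[best].copy()
        self.gbest_fitness = self.pbest_fitness[best]
```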
Finally, we can take a look at the main optimization loop, lines 18 to 33. On lines 21 and 22 we calculate the cognitive and social components respectively, and we then update the speed by combining the components according to the update formula. Next, we update both the personal best fitness and the global best fitness. Finally, we update the position and speed respectively.
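Here is a sketch of that main loop, assuming minimization and the linear inertia decay from above; the exact ordering of the updates may differ from the repository code.

```python
    def optimize(self):
        self._initialize()
        for t in range(self.n_iterations):
            # Linearly decaying inertia weight, as in the formula above.
            w = self.w_max - (self.w_max - self.w_min) * t / self.n_iterations
            r1 = np.random.rand(self.n_particles, self.n_dims)
            r2 = np.random.rand(self.n_particles, self.n_dims)
            # Cognitive component: pull towards each particle's own best.
            cognitive = self.c1 * r1 * (self.pbest - self.X)
            # Social component: pull towards the swarm's global best.
            social = self.c2 * r2 * (self.gbest - self.X)
            # Apply the update rules for speed and position.
            self.V = w * self.V + cognitive + social
            self.X = np.clip(self.X + self.V, self.bounds[:, 0], self.bounds[:, 1])
            # Update the personal best of every improved particle.
            fitness = np.apply_along_axis(self.fitness_fn, 1, self.X)
            improved = fitness < self.pbest_fitness
            self.pbest[improved] = self.X[improved]
            self.pbest_fitness[improved] = fitness[improved]
            # Update the global best if any particle beats it.
            best = np.argmin(self.pbest_fitness)
            if self.pbest_fitness[best] < self.gbest_fitness:
                self.gbest = self.pbest[best].copy()
                self.gbest_fitness = self.pbest_fitness[best]
        return self.gbest, self.gbest_fitness
```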
Examples
As a bonus, we will put the particle swarm algorithm to work on some example problems. We will take a look at a very complex function with a flat fitness landscape and many local minima traps, and we will show the PSO algorithm solving the XOR problem correctly. For the XOR problem, we have the particle swarm find the optimal weights for a neural network. The code for the examples can be found in my repository.
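As an illustration of the first kind of problem, here is the sketched optimizer applied to the Rastrigin function, a standard benchmark with many local minima traps, used here as a stand-in:

```python
def rastrigin(x):
    # Highly multimodal benchmark: many local minima, global minimum f(0) = 0.
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

pso = PSO(rastrigin, bounds=[(-5.12, 5.12)] * 2)
best_position, best_fitness = pso.optimize()
print(best_position, best_fitness)  # should land close to [0, 0] and 0
```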
Conclusion
We have learned that we can use a concept from nature, flocking behaviour, to find solutions to very complex problems with relatively few lines of code. It will be interesting to look into the further capabilities of nature-based search algorithms, like evolutionary algorithms and ant colony optimization.