Have you ever wondered what would happen if you took an MRI scan of a person’s skull, created a 3D model of it, then passed it to an archaeological artist who could recreate a face from the underlying bone structure? Would the reconstructed face look like the original?
It uses a genetic algorithm to generate an image from random polygons on a canvas. The technique was first implemented by Roger Alsing in 2008 and later ported to JavaScript separately by Jacob Seidelin and AlteredQualia. There was also a rather fun variation by Mario Klingemann which used polygon evolution to encode an image into a tweet.
A genetic algorithm is a technique that allows a particular configuration of variables to ‘evolve’ from a given starting configuration. At its most basic, there are two stages: a random mutation of the current configuration, followed by a ‘fitness’ test. The fitness check compares the new, mutated configuration against a predetermined desired output; if it scores better than the previous generation, it becomes the basis for the next generation of mutations. This is a big simplification of the process, but it covers the general idea. The benefit of genetic algorithms is that they allow a complex configuration of variables to be developed from a basic ruleset.
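In rough Python, that mutate-and-test loop might look something like this. It is only a sketch of the general idea: the target list, the mutation step, and the generation count are all illustrative, not taken from any of the projects above.

```python
import random

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]  # the predetermined desired output (illustrative)

def fitness(candidate):
    # Lower is better: total distance from the desired configuration.
    return sum(abs(c - t) for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Randomly nudge one variable in the configuration.
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

random.seed(42)
parent = [0] * len(TARGET)  # the starting configuration
for generation in range(5000):
    child = mutate(parent)
    if fitness(child) <= fitness(parent):  # keep only non-worsening mutations
        parent = child

print(parent, fitness(parent))
```

Given enough generations, the surviving configuration converges on the target, even though each individual step is random.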
The fitness of each generation can be measured in various ways depending on what it is you’re trying to accomplish. In the original Mona Lisa experiment, the fitness check was simply how well the output matched an image of the Mona Lisa. A similar technique can be used in game development to generate realistic computer-controlled players. In car racing games, for example, fitness can be determined by how far and how fast a computer-controlled car goes round the track. A given simulation may allow mutation of variables around speed, steering and awareness of the road and nearby cars. The first few iterations will most likely not go anywhere; the next few may drive perfectly well in a straight line and straight off the edge of the track at the first bend. Leave the simulation learning for long enough, however, and you’ll generate a racing algorithm capable of driving flawlessly around the track, avoiding obstacles and crashes with ease.
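For the Mona Lisa case, a fitness check can be as simple as summing per-pixel differences between the rendered polygons and the target picture. The sketch below assumes images are flat lists of greyscale values; real implementations typically compare squared differences per RGB channel instead.

```python
def image_fitness(rendered, target):
    # Score a rendered image against the target picture.
    # Both are flat lists of greyscale pixel values (0-255);
    # a lower score means a closer match.
    return sum(abs(r - t) for r, t in zip(rendered, target))

# A tiny 2x2 "image": the mutated candidate scores lower the closer it gets.
target    = [0, 64, 128, 255]
candidate = [0, 60, 130, 250]
print(image_fitness(candidate, target))
```

A perfect match scores zero, so the genetic algorithm simply keeps whichever generation drives this number down.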
What makes Pareidoloop a particularly interesting case is that the fitness of each generation is determined by how well a separate face-recognition algorithm recognises the image as a face. This differs from the earlier image-generation projects in that there is no definitive target image against which to compare. A run stops when the fitness test decides the image looks enough like a face. The upshot of this is that Pareidoloop, left to its own devices, will just keep outputting random face-like images for as long as you let it.
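The change in control flow is the key point: instead of comparing against a target image, the loop climbs a detector’s confidence score until it crosses a threshold. The sketch below fakes the detector with a trivial deterministic function purely so it runs; Pareidoloop itself uses a JavaScript face-detection library, and the threshold here is illustrative.

```python
import random

THRESHOLD = 0.9  # "looks enough like a face" -- illustrative cutoff

def detector_score(image):
    # Stand-in for a real face detector's confidence in [0, 1].
    # Here it just rewards brighter images so the sketch is runnable;
    # the real thing scores how face-like the pixels are.
    return sum(image) / (255 * len(image))

def mutate(image):
    # Nudge one pixel, clamped to the valid 0-255 range.
    child = image[:]
    i = random.randrange(len(child))
    child[i] = min(255, max(0, child[i] + random.randint(-16, 16)))
    return child

random.seed(1)
image = [random.randrange(256) for _ in range(16)]
score = detector_score(image)
while score < THRESHOLD:  # no target image -- only the detector's opinion
    child = mutate(image)
    child_score = detector_score(child)
    if child_score >= score:
        image, score = child, child_score

print(round(score, 3))
```

Once the threshold is reached there is nothing to stop the process starting over with a fresh random canvas, which is why Pareidoloop can churn out one eerie face after another.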
Many of my generated faces ended up reminding me of Anthony Hopkins. I’m not sure if that says something about the algorithm or me, however. If you generate any well-known faces, why not post them to our Facebook page?