

This is a series of experiments that began with training a StyleGAN2-based model, using RunwayML's transfer learning feature, on two thousand randomly generated images from a p5.js sketch. The model (I call her jelly) was producing basic shapes after only minimal training and began generating interesting images of her own, so I stopped there and gave her more freedom, tweaking a few parameters to see what kind of abstract outputs she would surprise me with. The videos show animations assembled from latent space walks through jelly, through a second model trained on a different p5.js-generated dataset, and through a mix of the two.
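A latent space walk simply interpolates between points in the model's latent space and renders a frame at each step. As a rough sketch of the idea in plain JavaScript (the 512-dimension size is StyleGAN2's default; the function names are my own, not RunwayML's API):

```javascript
// Draw a random latent vector; StyleGAN2's latent space is 512-dimensional
// by default (an assumption here, not something RunwayML exposes directly).
function randomLatent(dim = 512) {
  return Array.from({ length: dim }, () => Math.random() * 2 - 1);
}

// Linear interpolation between two latent vectors at t in [0, 1].
function lerpLatent(z0, z1, t) {
  return z0.map((v, i) => v + (z1[i] - v) * t);
}

// Build a walk of n evenly spaced steps from z0 to z1; in practice each
// step would be fed to the trained generator to render one video frame.
function latentWalk(z0, z1, n) {
  return Array.from({ length: n }, (_, k) => lerpLatent(z0, z1, k / (n - 1)));
}
```

Chaining several such segments between random endpoints gives the smooth, continuously morphing animations seen in the videos.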