I was playing around with a convolutional autoencoder and trained it to overfit on this picture of a puppy to make sure nothing was broken. I thought the end result looked pretty cool: you can watch the neural network learn to reproduce this single image of a dog in stages, starting with the overall shape, then narrowing down to coloring specific regions, and finally really fighting to repair a strange artifact in the lower right. Nothing special here really other than a visualization of overfitting in action, but it certainly looks interesting :)
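If you want to try this yourself, here's roughly what the setup looks like. This is a minimal sketch in PyTorch (my choice; the post doesn't name a framework), with placeholder layer sizes and a random tensor standing in for the puppy photo, not the actual model behind the video:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample to a narrow bottleneck.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# The "training set" is a single image, so the network can only memorize it.
image = torch.rand(1, 3, 128, 128)  # stand-in for the puppy photo
model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    recon = model(image)
    loss = nn.functional.mse_loss(recon, image)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Dumping `recon` every few steps and stitching the frames together
    # gives the kind of video shown here.
```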
This second video shows the same kind of process in action, but for a convolutional autoencoder with skip connections. I'm not exactly sure why overfitting in this way leads to "dark holes" appearing during training, but the last thing the network learns is to heal this scar in the reconstruction. One interesting effect of adding the skip connections is that the "whole image" flickering seen in the other sequence is replaced by the flickering of individual rounded patches. I think the flickering comes from the batch normalization coefficients deeper in the bottleneck varying and affecting the entire image; with skip connections that effect is somewhat mitigated.
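For reference, here's a hedged sketch of what I mean by the skip-connection variant: encoder activations are added back into the decoder, so fine detail doesn't have to squeeze through the bottleneck. Batch norm is included in each block since that's what I'm blaming for the flicker. The exact architecture is an assumption, not the literal model from the video:

```python
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        def down(cin, cout):  # conv + batch norm downsampling block
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                nn.BatchNorm2d(cout), nn.ReLU())
        def up(cin, cout):  # transposed-conv + batch norm upsampling block
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                nn.BatchNorm2d(cout), nn.ReLU())
        self.d1, self.d2, self.d3 = down(3, 16), down(16, 32), down(32, 64)
        self.u3, self.u2 = up(64, 32), up(32, 16)
        self.u1 = nn.Sequential(
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        e1 = self.d1(x)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        # Skip connections: add encoder features back in during decoding,
        # giving the decoder a local path around the bottleneck.
        y = self.u3(e3) + e2
        y = self.u2(y) + e1
        return self.u1(y)
```

Because of that local path, a wobble in the bottleneck's batch norm statistics no longer has to repaint the whole image, which would be consistent with the patch-level flicker replacing the global one.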
This is kind of the same setup, but on the last layer I add a meshgrid to the normal output... nothing special really, but it led to a cool-looking rainbow background, presumably because it was difficult for the CNN to eliminate this effect on the last layer given the uniformity of the backdrop. A very inefficient method of replacing featureless backdrops with something new!
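Roughly what I mean by "adding a meshgrid to the output", as a sketch only: a fixed coordinate grid is summed onto the final layer's output, so the network has to learn to cancel it wherever the target image is flat. The coordinate ranges and the per-channel mix below are made up for illustration:

```python
import torch

def add_meshgrid(output):
    # output: (N, C, H, W) tensor from the decoder's last layer.
    n, c, h, w = output.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    # A different coordinate combination per channel (hypothetical choice);
    # the mismatch across channels is what tints the residue rainbow-like.
    grid = torch.stack([xs, ys, xs * ys])[:c]
    # Broadcast across the batch; the network must output -grid wherever
    # the target is uniform, which it apparently struggles to do.
    return output + grid
```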