Google is in the news this week with the trippy neural net visualizations it calls inceptionism.
Inception was Google's code name for a 22-layer deep convolutional neural network described in Christian Szegedy et al., Going Deeper with Convolutions, http://arxiv.org/abs/1409.4842.
However, the idea for generating the trippy visualizations seems to have come from this paper, which describes what its authors call saliency visualization:
- Karen Simonyan, Andrea Vedaldi, Andrew Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, http://arxiv.org/abs/1312.6034.
The rest of this article suggests ideas for how a new visualization such as inceptionism could be added to neural2d.
Adding inceptionism visualization to neural2d requires two steps: (1) figure out and implement the math, and (2) incorporate that with the neural2d visualization plumbing.
The mathy part involves multiplying the output of a convolution layer with the original input image to produce a new image where the extracted features get amplified. Besides the Simonyan paper mentioned earlier, also see Google’s description at http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html.
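As a rough illustration of that amplification step, here is a minimal sketch. The function name, the scaling scheme, and the assumption that the feature map has already been upsampled to the input image's dimensions are all hypothetical; neural2d's actual data layout would differ.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch: amplify features in the input image by modulating
// each input pixel with the corresponding convolution-layer activation.
// `input` and `featureMap` are assumed to be the same size here; in practice
// the feature map would first be scaled up to the input dimensions.
std::vector<float> amplifyFeatures(const std::vector<float> &input,
                                   const std::vector<float> &featureMap,
                                   float alpha)
{
    std::vector<float> out(input.size());
    for (size_t i = 0; i < input.size(); ++i) {
        // Pixels where the net found strong features get boosted:
        float v = input[i] * (1.0f + alpha * featureMap[i]);
        out[i] = std::min(1.0f, std::max(0.0f, v)); // clamp to [0, 1]
    }
    return out;
}
```

Iterating this a few times, feeding the amplified image back through the net, is what produces the characteristic exaggerated, dream-like structures.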
In neural2d, a visualization is a diagnostic image created at runtime from the neural net state. When you run neural2d with the optional GUI, the graphic visualizations available for a particular topology are shown in a drop-down box.
Visualizations are handled by the four subclasses of class Layer, one for each kind of layer: regular, convolution filter, convolution network, and pooling. An inceptionistic visualization could be implemented in the pooling layer, the regular layer, or both.
The drop-down menu of visualization choices is generated at run-time in the constructor for class Net by calling the .visualizationsAvailable() member function on each Layer object and concatenating the results. A new menu option can be added in member function .visualizationsAvailable() in class LayerPooling, in LayerRegular, or in both.
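The override could look something like the following sketch. The class skeleton and signatures here are stand-ins, not neural2d's real declarations; the point is simply to append a new menu label to whatever the base class already reports. The "inceptionism" label is a hypothetical choice.

```cpp
#include <string>
#include <vector>

// Minimal stand-in for neural2d's Layer hierarchy; the real signatures
// may differ.
class Layer {
public:
    virtual ~Layer() = default;
    virtual std::vector<std::string> visualizationsAvailable() const {
        return { "outputs" };
    }
};

class LayerPooling : public Layer {
public:
    std::vector<std::string> visualizationsAvailable() const override {
        // Keep whatever the base class offers, then add the new entry:
        auto v = Layer::visualizationsAvailable();
        v.push_back("inceptionism"); // new drop-down entry (hypothetical label)
        return v;
    }
};
```

Because the Net constructor concatenates the results from every layer, the new entry would appear in the GUI menu automatically.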
The code that generates the visualization image can be added to member function .visualizeOutputs() in class LayerPooling or LayerRegular. That member function can call the utility functions createBMPImage() and base64Encode() to create an image suitable for viewing in the GUI. See Layer::visualizeOutputs() for an example of how to access the internal data structures for a layer's outputs, and LayerConvolution::visualizeKernels() for examples of how to access the data structures for the convolution kernels.