Google’s Inceptionism

Google is in the news this week with the trippy neural net visualizations it calls inceptionism.

[Image: one of Google's inceptionism visualizations]

Inception was Google’s code name for a 22-layer deep convolutional neural network described in Christian Szegedy et al., Going Deeper with Convolutions, http://arxiv.org/abs/1409.4842.

However, the idea for generating the trippy visualizations seems to have come from this paper, which describes what its authors call saliency visualization:

  • Karen Simonyan, Andrea Vedaldi, Andrew Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, http://arxiv.org/abs/1312.6034.
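
In rough terms, the technique in that paper starts from a neutral image and uses gradient ascent to find an input image I that maximizes the score S_c(I) of a chosen class c, minus an L2 penalty λ‖I‖² that keeps the pixel values bounded. The gradients come from ordinary backpropagation, applied to the image pixels instead of to the weights.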

The rest of this article suggests ideas for how a new visualization such as inceptionism could be added to neural2d.
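
The heart of it would be a way to backpropagate all the way into the input layer, treating the pixels themselves as the trainable values. Here is a minimal C++ sketch of that loop; every name in it (ForwardFn, GradientFn, visualizeNeuron, stepSize, l2Decay) is invented for illustration and is not part of neural2d's actual API:

    #include <cstddef>
    #include <functional>
    #include <vector>

    using Image = std::vector<float>;

    // Hypothetical hooks into a trained net -- names made up for this sketch;
    // neural2d's real interfaces differ.
    using ForwardFn  = std::function<void(const Image &)>;                // run a forward pass
    using GradientFn = std::function<Image(const Image &, std::size_t)>;  // d(activation)/d(pixel)

    // Gradient ascent on the input image: nudge each pixel in the direction
    // that increases the chosen neuron's activation. The l2Decay term is a
    // stand-in for the L2 regularizer in Simonyan et al.
    Image visualizeNeuron(Image image, std::size_t targetNeuron,
                          const ForwardFn &forward, const GradientFn &gradToInput,
                          unsigned iterations = 100,
                          float stepSize = 1.0f, float l2Decay = 1e-4f)
    {
        for (unsigned i = 0; i < iterations; ++i) {
            forward(image);                                  // populate activations
            Image grad = gradToInput(image, targetNeuron);   // backprop to the input layer
            for (std::size_t p = 0; p < image.size(); ++p) {
                image[p] += stepSize * grad[p] - l2Decay * image[p];
            }
        }
        return image;   // an input that (locally) maximizes the target activation
    }

Everything beyond that loop is a matter of choices: which neuron or layer activation to maximize, and how to regularize, which is where the inceptionism-style imagery comes from.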



Generalized layer depths

In neural2d, convolution network layers and pooling layers typically have a depth > 1, where the depth equals the number of kernels to train.

Previously, neural2d imposed certain restrictions on how layers with depth could be connected. The assumption was that if you wanted to go from a convolution network layer to a regular layer, the destination regular layer would have a depth of one.

There was no good reason to impose such a restriction, so neural2d now allows you to define regular layers with depth and connect them in any way to any other kind of layer. This means you can now insert a sparsely connected regular layer between two convolution network layers with depth > 1 while preserving the depth of the pipeline, as in the example below.
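
For example, a topology configuration along these lines is now legal. This fragment is hypothetical: the layer names are invented, and it assumes the radius parameter that neural2d uses for sparse, locally connected layers.

input size 64x64
layerConvA size 8*32x32 from input convolve 5x5
layerSparse size 8*32x32 from layerConvA radius 2x2
layerConvB size 8*16x16 from layerSparse convolve 3x3

Here layerSparse is a regular layer with a depth of 8 sandwiched between two convolution network layers, so the depth-8 pipeline passes straight through it.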



Convolution something

Neural2d now does convolution networking, which is great, but it already did convolution filtering.

That’s confusing terminology; the two terms sound almost alike.

In neural2d terminology, convolution networking is when you have a set of convolution kernels that you want to train to extract features from an input signal. Convolution filtering is when you specify a single predetermined, constant kernel that never changes during training.

In the neural2d topology configuration syntax, a convolution network layer is defined with a line like this, where 20*64x64 means a depth of 20 trainable kernels, each producing a 64x64 feature map:

layerConvolve size 20*64x64 from input convolve 7x7

A convolution filter layer is defined with a syntax like this (the kernel in this example is a standard 3x3 sharpening filter):

layerConvolve size 64x64 from input convolve {{0,-1,0},{-1,5,-1},{0,-1,0}}

Personally, I’m happy with the configuration syntax, but is there less confusing terminology that we should use instead?