Recent neural2d news

Ubuntu 16.04

We’re happy to report that neural2d has been tested in a fresh Ubuntu 16.04 installation, and — whew — it works (subject to the caveat below). That’s using CMake 3.5.1 and g++ 5.4. You are invited to comment on what operating systems and tool versions you have successfully used with neural2d.
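If you’d like to compare notes, the tool versions are easy to check from a shell (a quick sketch; the version strings on your system will of course differ):

cmake --version
g++ --version

In our test installation, those reported 3.5.1 and 5.4 respectively.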

Webserver

Thanks to a report from an alert participant, we found a problem where the optional webserver is not compiled and linked by default, contrary to what the documentation says. To compile and link the webserver, run cmake with the option “-DWEBSERVER=ON”, then rebuild the neural2d executable. For example:

cd build
cmake -DWEBSERVER=ON ..
make
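One detail worth knowing: CMake caches that option in build/CMakeCache.txt, so it stays in effect for subsequent builds. To turn the webserver back off later, reconfigure with the option set to OFF (same out-of-source layout as above):

cd build
cmake -DWEBSERVER=OFF ..
make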

Documentation

The README file in the top level directory has been updated with clearer instructions about preparing input data for the neural net, and with a few additional internal links and references. There are no functional changes and no new secrets revealed; just wordsmithing. Readers are encouraged to comment on the documentation or to submit additional documentation.

A new diagram showing file relationships was checked into the repository:

[Diagram: file-relationships]

Google’s Inceptionism

Google is in the news this week with their trippy neural net visualizations they call inceptionism.

[Image: one of Google’s inceptionism visualizations]

Inception was Google’s code name for a 22-layer deep convolutional neural network described in Christian Szegedy et al., Going Deeper with Convolutions, http://arxiv.org/abs/1409.4842.

However, the idea for generating the trippy visualizations seems to have come from this paper, which describes what they call saliency visualization:

  • Karen Simonyan, Andrea Vedaldi, Andrew Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, http://arxiv.org/abs/1312.6034.

The rest of this article suggests ideas for how a new visualization such as inceptionism could be added to neural2d.



Generalized layer depths

In neural2d, convolution network layers and pooling layers typically have a depth > 1, where the depth equals the number of kernels to train.

Previously, neural2d imposed certain restrictions on how layers with depth could be connected. The assumption was that if you wanted to go from a convolution network layer to a regular layer, the destination regular layer would have a depth of one.

There was no good reason to impose such a restriction, so neural2d now allows you to define regular layers with depth and connect them in any way to any other kind of layer. This means you can now insert a sparsely connected regular layer between two convolution network layers with depth > 1 while preserving the depth of the pipeline.
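For illustration, here’s a hypothetical topology fragment showing the idea. The layer names are invented, the syntax follows the convolution examples in the next post below, and it assumes the radius parameter is the way to get the sparse connectivity:

input size 64x64
layerConvA size 20*64x64 from input convolve 7x7
layerSparse size 20*64x64 from layerConvA radius 2x2
layerConvB size 20*64x64 from layerSparse convolve 5x5
output size 10 from layerConvB

The middle layer keeps the 20-deep geometry of its neighbors, which is exactly the arrangement the old restriction ruled out.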



Convolution something

Neural2d now does convolution networking, which is great, but it already did convolution filtering.

That’s confusing terminology. They sound almost alike.

In neural2d terminology, convolution networking is when you have a set of convolution kernels that you want to train to extract features from an input signal. Convolution filtering is when you have a single predetermined, constant kernel that you want to specify.

In the neural2d topology configuration syntax, a convolution network layer is defined with something like this:

layerConvolve size 20*64x64 from input convolve 7x7

A convolution filter layer is defined with a syntax like this (the 3×3 kernel in this example happens to be the classic sharpening filter):

layerConvolve size 64x64 from input convolve {{0,-1,0},{-1,5,-1},{0,-1,0}}

Personally, I’m happy with the configuration syntax, but is there less confusing terminology that we should use instead?

CMake to the rescue?

Now that neural2d uses CMake to configure the build system, you can build and run neural2d in Microsoft Visual Studio, or in many other environments. That’s a big win.

In this story, there are two heroes — CMake and open source. It was open-source collaboration that provided the impetus to convert to CMake.

But I had, and still have, two reservations about CMake. One concern is whether CMake is readily available in all environments in which neural2d could be used. The other concern is about the ease of use for the casual experimenter in a makefile-oriented environment. Pre-CMake, the build instructions were a single line:

make

Now the build instructions take up a whole A4-sized page. Should I be at all concerned about that?
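For comparison, the core of the CMake procedure still condenses to a few lines; here’s the typical out-of-source sequence (a sketch; much of that A4 page is platform-specific variations):

mkdir build
cd build
cmake ..
make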

Let me know what you think.