Recent neural2d news

Ubuntu 16.04

We’re happy to report that neural2d has been tested in a fresh Ubuntu 16.04 installation, and — whew — it works (subject to the caveat below). That’s using CMake 3.5.1 and g++ 5.4. You are invited to comment on what operating systems and tool versions you have successfully used with neural2d.

Webserver

Thanks to a report from an alert participant, we found a problem where, contrary to the documentation, the optional webserver does not get compiled and linked by default. To compile and link the webserver, run cmake with the option “-DWEBSERVER=ON”, and then rebuild the neural2d executable. For example:

cd build
cmake -DWEBSERVER=ON ..
make

Documentation

The README file in the top level directory has been updated with clearer instructions about preparing input data for the neural net, and with a few additional internal links and references. There are no functional changes, and no new secrets revealed; just wordsmithing. Readers are encouraged to comment on the documentation or to submit additional documentation.

A new diagram showing file relationships was checked into the repository:

[Diagram: file-relationships]


Time-dependence in biological neural inputs

Artificial neural nets and biological neural nets share many characteristics, but one big difference is that artificial neurons typically operate in a static framework, outputting a single scalar value in response to their inputs, while biological neurons have a rich life in the time dimension and output sequences of pulses. Nobody is exactly sure what that means yet, but it’s pretty clear that our artificial neural nets do not yet model the time dimension of biological nets very well.

Here’s an article that explains how the thousands of synaptic inputs to a neuron help it recognize sequences of patterns, not just static patterns. The authors say they have discovered that the physical arrangement of input synapses can cause the “emergence of a computationally sophisticated sequence memory.” Also see this commentary about the article.

I’m very interested in hearing about your experiments with neural nets recognizing time-dependent sequences of patterns.

Git commit templates

One of neural2d’s contributors recently mentioned the advantages of using Git commit templates. It’s an easy way to make commit messages more consistent and useful. It’s such a good idea that I wanted to give it some exposure here.

The Git template places some text in the commit message dialog to help you remember how to format the commit messages consistently. It does not force you to format your commit messages in any particular way; it’s just a reminder. Instructions for setting up your own commit template can be found here.
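If you want to try it without chasing the link, a minimal setup looks something like this (the file name and the template text are just an example; lines beginning with “#” are treated as comments and are stripped from the final commit message):

cat > ~/.git-commit-template.txt << 'EOF'
# Summary: imperative mood, 50 characters or less
#
# Body: what changed and why, wrapped at about 72 characters
#
# Reference any related issue numbers at the bottom
EOF
git config --global commit.template ~/.git-commit-template.txt

After that, running git commit (without -m) opens your editor with the template text already in place.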


Siri does not love you. Yet.

According to these recent news reports, a robot has become conscious:

    This little robot just passed a self-awareness test

    Humanoid shows a glimmer of self-awareness

    World’s First Self-conscious Robots

    END of Humanity? Self Conscious robot pass final test

Despite the headlines, the robot in question did not become conscious. It solved a puzzle by following an algorithm. You could use pencil and paper and follow the same algorithmic calculations and arrive at the same answers the robot did.

There’s a big difference between human-like behavior driven by an algorithm, and the same behavior driven by conscious awareness and intention.


Google’s Inceptionism

Google is in the news this week with their trippy neural net visualizations they call inceptionism.

[Image: Google inceptionism visualization]

Inception was Google’s code name for a 22-layer deep convolutional neural network described in Christian Szegedy et al., Going Deeper with Convolutions, http://arxiv.org/abs/1409.4842.

However, the idea for generating the trippy visualizations seems to have come from the following paper, which describes what the authors call saliency visualization:

  • Karen Simonyan, Andrea Vedaldi, Andrew Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, http://arxiv.org/abs/1312.6034.

The rest of this article suggests ideas for how a new visualization such as inceptionism could be added to neural2d.
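To give a rough idea of the underlying technique (a toy illustration only, not neural2d code and not Google’s implementation): pick a unit whose activation you want to amplify, compute the gradient of that activation with respect to the input pixels, and repeatedly nudge the pixels in that direction. In a real network the gradient comes from backpropagating all the way to the input layer; in the sketch below the “network” is a single linear unit, so the gradient is just its weight vector:

#include <cstdio>
#include <vector>

int main()
{
    // Toy "network": one linear unit over a 4x4 grayscale image.
    // activation = sum_i w[i] * pixel[i], so d(activation)/d(pixel[i]) = w[i].
    const int N = 16;
    std::vector<float> pixels(N, 0.5f);   // start from a flat gray image
    std::vector<float> w(N);
    for (int i = 0; i < N; ++i) {
        int row = i / 4, col = i % 4;
        w[i] = ((row + col) % 2 == 0) ? 1.0f : -1.0f;  // this unit "prefers" a checkerboard
    }

    const float step = 0.05f;
    for (int iter = 0; iter < 200; ++iter)
        for (int i = 0; i < N; ++i) {
            pixels[i] += step * w[i];                 // gradient ascent on the input
            if (pixels[i] < 0.0f) pixels[i] = 0.0f;   // clamp to a valid pixel range
            if (pixels[i] > 1.0f) pixels[i] = 1.0f;
        }

    // Print the image that maximizes the unit's activation:
    for (int r = 0; r < 4; ++r) {
        for (int c = 0; c < 4; ++c)
            printf("%5.1f", pixels[r * 4 + c]);
        printf("\n");
    }
    return 0;
}

Run long enough, the input drifts toward whatever pattern the chosen unit responds to most strongly, which is the essence of both saliency visualization and inceptionism.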



Blender ball-and-stick figures (and free wallpaper)

Free wallpapers:

[Wallpaper images: neural2d-16x16-8x8-radius-2x2-A, neural2d-16x16-8x8-radius-2x2-B, neural2d-4x4-9x4x4]

The ball-and-stick illustrations used in the neural2d documentation were made with Blender. This article documents the Python scripting used to generate the connectors (sticks) between the neurons (the spheres) for the benefit of any Blender users who are trying to do something similar.



Generalized layer depths

In neural2d, convolution network layers and pooling layers typically have a depth > 1, where the depth equals the number of kernels to train.

Previously, neural2d imposed certain restrictions on how layers with depth could be connected. The assumption was that if you wanted to go from a convolution network layer to a regular layer, the destination regular layer would have a depth of one.

There was no good reason to impose such a restriction, so neural2d now allows you to define regular layers with depth and connect them in any way to any other kind of layer. This means you can now insert a sparsely connected regular layer in between two convolution network layers with depth > 1 while preserving the depth of the pipeline.
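To sketch what that looks like in the topology config (the layer names and sizes below are made up, the N* prefix specifies the layer depth as in the convolution examples elsewhere in the documentation, and radius is assumed to be the usual way to declare a sparse connection):

layerConvA size 8*32x32 from input convolve 5x5
layerSparse size 8*32x32 from layerConvA radius 2x2
layerConvB size 8*32x32 from layerSparse convolve 3x3

Here layerSparse is an ordinary (non-convolution) layer, but it keeps the depth of 8 established by layerConvA, so layerConvB still sees all eight channels.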



Convolution something

Neural2d now does convolution networking, which is great, but it already did convolution filtering.

That’s confusing terminology. They sound almost alike.

In neural2d terminology, convolution networking is when you have a set of convolution kernels that you want to train to extract features from an input signal. Convolution filtering is when you have a single predetermined, constant kernel that you want to specify.

In the neural2d topology configuration syntax, a convolution network layer is defined something like:

layerConvolve size 20*64x64 from input convolve 7x7

A convolution filter layer is defined with a syntax like this:

layerConvolve size 64x64 from input convolve {{0,-1,0},{-1,5,-1},{0,-1,0}}
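The kernel in that second example happens to be the classic 3x3 sharpening kernel. As a reminder of what a fixed convolution filter actually computes (a self-contained illustration, not neural2d’s internal code), each destination pixel is the weighted sum of the corresponding 3x3 neighborhood in the source image:

#include <cstdio>

int main()
{
    const int W = 6, H = 6;
    // The same fixed sharpening kernel as in the topology example above.
    const float k[3][3] = { {  0, -1,  0 },
                            { -1,  5, -1 },
                            {  0, -1,  0 } };

    float src[H][W], dst[H][W] = {};

    // Fill the source with a simple diagonal gradient so the effect is visible.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            src[y][x] = (float)(x + y) / (W + H - 2);

    // Convolve: each interior output pixel is the weighted sum of its 3x3 neighborhood.
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            float sum = 0.0f;
            for (int ky = 0; ky < 3; ++ky)
                for (int kx = 0; kx < 3; ++kx)
                    sum += k[ky][kx] * src[y + ky - 1][x + kx - 1];
            dst[y][x] = sum;
        }

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            printf("%6.2f", dst[y][x]);
        printf("\n");
    }
    return 0;
}

In convolution networking the same inner loops apply, but the nine kernel values are weights that the training process adjusts rather than constants taken from the topology file.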

Personally, I’m happy with the configuration syntax, but is there less confusing terminology that we should use instead?