Time-dependence in biological neural inputs

Artificial neural nets and biological neural nets share many common characteristics, but one big difference is that artificial neurons typically operate in a static framework by outputting a single scalar value in response to their inputs, while biological neurons have a rich life in the time dimension and output sequences of pulses. Nobody is exactly sure what that means yet, but it’s pretty clear that our artificial neural nets do not yet model the time dimension of biological nets very well.
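
To make the contrast concrete, here is a toy leaky integrate-and-fire neuron (my own sketch, not a model from any work mentioned here): given a steady input, its output is a sequence of spikes in time, where a static artificial neuron would produce one scalar.

```python
# Toy leaky integrate-and-fire neuron (illustrative sketch).
# The membrane potential v integrates input with a leak; when it
# crosses the threshold, the neuron emits a spike and resets.
def lif(inputs, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:        # threshold crossed: fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The interesting information is in the timing of the 1s, not in any single output value.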

Here’s an article that explains how the thousands of synaptic inputs to a neuron help it recognize sequences of patterns, not just static patterns. The authors say they have discovered that the physical arrangement of input synapses can cause the “emergence of a computationally sophisticated sequence memory.” Also see this commentary about the article.

I’m very interested in hearing about your experiments with neural nets recognizing time-dependent sequences of patterns.

Git commit templates

One of neural2d’s contributors recently mentioned the advantages of using Git commit templates. It’s an easy way to make commit messages more consistent and useful. It’s such a good idea that I wanted to give it some exposure here.

The Git template places some text in the commit message dialog to help you remember how to format the commit messages consistently. It does not force you to format your commit messages in any particular way; it’s just a reminder. Instructions for setting up your own commit template can be found here.
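
As an example, the whole setup is two steps: write a template file and point Git at it. The file name and reminder text below are only illustrative; use whatever conventions your project prefers.

```shell
# Write a template file with reminder text (path and wording are
# only examples)
cat > ~/.gitmessage <<'EOF'
# Subject line: imperative mood, 50 chars or less

# Body: explain what changed and why; wrap at 72 chars
EOF

# Tell Git to pre-fill every new commit message with that file
git config --global commit.template ~/.gitmessage
```

Lines starting with # are treated as comments and stripped from the final commit message, so the reminders never end up in your history.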


Siri does not love you. Yet.

According to these recent news reports, a robot has become conscious:

    This little robot just passed a self-awareness test

    Humanoid shows a glimmer of self-awareness

    World’s First Self-conscious Robots

    END of Humanity? Self Conscious robot pass final test

Despite the headlines, the robot in question did not become conscious. It solved a puzzle by following an algorithm. You could use pencil and paper and follow the same algorithmic calculations and arrive at the same answers the robot did.

There’s a big difference between human-like behavior driven by an algorithm, and the same behavior driven by conscious awareness and intention.


Google’s Inceptionism

Google is in the news this week with their trippy neural net visualizations they call inceptionism.


Inception was Google’s code name for a 22-layer deep convolutional neural network described in Christian Szegedy et al., Going Deeper with Convolutions.

However, the idea for generating the trippy visualizations seems to have come from this paper, which describes what they call saliency visualization:

  • Karen Simonyan, Andrea Vedaldi, Andrew Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps.

The rest of this article suggests ideas for how a new visualization such as inceptionism could be added to neural2d.
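
The core trick is easy to sketch: freeze the network’s weights and run gradient ascent on the input so that some chosen output grows. The toy NumPy “network” below is a single linear layer of my own invention, not Google’s model, but it shows the mechanic:

```python
# Gradient ascent on the *input* of a frozen model (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # frozen "network": 8 inputs -> 3 class scores
x = np.zeros(8)               # start from a blank "image"
target = 1                    # the class we want the input to excite

for _ in range(100):
    # For a linear model, d(score)/dx is just the weight row itself
    x += 0.1 * W[target]

print(float(W[target] @ x) > 0)  # → True: the target class score has grown
```

In a real deep net the gradient is obtained by backpropagation down to the input layer, and the resulting images are what give inceptionism its hallucinatory look.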



Blender ball-and-stick figures (and free wallpaper)

Free wallpapers:


The ball-and-stick illustrations used in the neural2d documentation were made with Blender. This article documents the Python scripting used to generate the connectors (sticks) between the neurons (the spheres) for the benefit of any Blender users who are trying to do something similar.
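
The heart of that script is plain geometry: move a unit cylinder to the midpoint between two sphere centers, stretch it to the distance between them, and tilt it with Euler angles. Here is a sketch of just the math (stick_transform is a hypothetical helper; in Blender you would feed its results to bpy.ops.mesh.primitive_cylinder_add as depth, location, and rotation):

```python
# Compute placement of a "stick" (cylinder) joining points p1 and p2.
# Returns (midpoint, length, euler_rotation) for a cylinder whose
# local axis is +Z, using the usual (0, theta, phi) tilt-then-spin trick.
import math

def stick_transform(p1, p2):
    dx, dy, dz = (p2[i] - p1[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    mid = tuple((p1[i] + p2[i]) / 2 for i in range(3))
    phi = math.atan2(dy, dx)       # spin about Z toward (dx, dy)
    theta = math.acos(dz / dist)   # tilt away from +Z
    return mid, dist, (0.0, theta, phi)

# A stick straight up the Z axis needs no rotation at all:
print(stick_transform((0, 0, 0), (0, 0, 2)))
# → ((0.0, 0.0, 1.0), 2.0, (0.0, 0.0, 0.0))
```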



Convolution something

Neural2d now does convolution networking, which is great, but it already did convolution filtering.

That’s confusing terminology. They sound almost alike.

In neural2d terminology, convolution networking is when you have a set of convolution kernels that you want to train to extract features from an input signal. Convolution filtering is when you apply a single predetermined, constant kernel that you specify yourself; it is never trained.

In the neural2d topology configuration syntax, a convolution network layer is defined something like:

layerConvolve size 20*64x64 from input convolve 7x7

A convolution filter layer is defined with a syntax like this:

layerConvolve size 64x64 from input convolve {{0,-1,0},{-1,5,-1},{0,-1,0}}
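
That particular kernel is a common sharpening filter. To see what applying it means, here is a plain-Python sketch of “valid” 2-D convolution (not neural2d’s implementation); the kernel is symmetric, so the usual kernel flip makes no difference and is omitted:

```python
# "Valid" 2-D convolution of a grayscale image with a fixed kernel.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1       # output shrinks: no padding
    ow = len(image[0]) - kw + 1
    return [[sum(image[y + j][x + i] * kernel[j][i]
                 for j in range(kh) for i in range(kw))
             for x in range(ow)]
            for y in range(oh)]

sharpen = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]
image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
print(convolve2d(image, sharpen))  # → [[6, 7], [10, 11]]
```

This ramp image passes through unchanged because the kernel’s negative taps exactly cancel its center gain on linear gradients; on edges and texture, the same kernel amplifies local contrast.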

Personally, I’m happy with the configuration syntax, but is there less confusing terminology that we should use instead?

CMake to the rescue?

Now that neural2d uses CMake to configure the build system, you can build and run neural2d in Microsoft Visual Studio, or in many other environments. That’s a big win.

In this story, there are two heroes — CMake and open source. It was open-source collaboration that provided the impetus to convert to CMake.

But I had, and still have, two reservations about CMake. One concern is whether CMake is readily available in all environments in which neural2d could be used. The other concern is about the ease of use for the casual experimenter in a makefile-oriented environment. Pre-CMake, the build instructions were a single line:

make
Now the build instructions take up a whole A4-sized page. Should I be at all concerned about that?

Let me know what you think.