Convolution something

Neural2d now does convolution networking, which is great, but it could already do convolution filtering.

That’s confusing terminology. They sound almost alike.

In neural2d terminology, convolution networking means training a set of convolution kernels to extract features from an input signal. Convolution filtering means applying a single predetermined, constant kernel that you specify yourself.

In the neural2d topology configuration syntax, a convolution network layer is defined with a line like this:

layerConvolve size 20*64x64 from input convolve 7x7

A convolution filter layer is defined with a syntax like this:

layerConvolve size 64x64 from input convolve {{0,-1,0},{-1,5,-1},{0,-1,0}}

Personally, I’m happy with the configuration syntax, but is there less confusing terminology that we should use instead?

CMake to the rescue?

Now that neural2d uses CMake to configure the build system, you can build and run neural2d in Microsoft Visual Studio, or in many other environments. That’s a big win.

In this story, there are two heroes — CMake and open source. It was open-source collaboration that provided the impetus to convert to CMake.

But I had, and still have, two reservations about CMake. One concern is whether CMake is readily available in all environments in which neural2d could be used. The other concern is about the ease of use for the casual experimenter in a makefile-oriented environment. Pre-CMake, the build instructions were a single line:

make

Now the build instructions take up a whole A4-sized page. Should I be at all concerned about that?
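For comparison, the CMake side of the story doesn't have to be long either. A minimal CMakeLists.txt for a small C++ project can look something like this (a hypothetical sketch, not neural2d's actual file; the target and source names are made up):

```cmake
# Hypothetical minimal CMakeLists.txt, for illustration only
cmake_minimum_required(VERSION 3.10)
project(neural2d CXX)
add_executable(neural2d neural2d.cpp)  # source file name assumed
```

With that in place, the generic out-of-source workflow is `cmake -B build` followed by `cmake --build build` (the `-B` form needs CMake 3.13 or later). The length of the instructions comes less from CMake itself and more from documenting it for every platform.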

Let me know what you think.