Neural2d now does convolution networking, which is great, but it already did convolution filtering, and those two terms sound confusingly alike.
In neural2d terminology, convolution networking is when you have a set of convolution kernels that you want to train to extract features from an input signal. Convolution filtering is when you apply a single predetermined, constant kernel that you specify yourself.
In the neural2d topology configuration syntax, a convolution network layer is defined with something like this:
layerConvolve size 20*64x64 from input convolve 7x7
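As I read that line, it asks for 20 planes of 64x64 neurons, each plane with its own trainable 7x7 kernel. To make the idea concrete, here is a minimal C++ sketch of a trainable kernel set; the names and the initialization scheme are made up for illustration and are not neural2d's actual internals:

    #include <array>
    #include <cstddef>
    #include <random>
    #include <vector>

    // One trainable 7x7 kernel per output plane; weights start small and
    // random, and would be adjusted by backpropagation during training.
    struct TrainableKernel {
        std::array<std::array<float, 7>, 7> w;
    };

    std::vector<TrainableKernel> makeKernels(std::size_t numPlanes)
    {
        std::mt19937 rng(42);
        std::uniform_real_distribution<float> dist(-0.05f, 0.05f);
        std::vector<TrainableKernel> kernels(numPlanes);
        for (auto &k : kernels)
            for (auto &row : k.w)
                for (auto &weight : row)
                    weight = dist(rng);   // random starting point; training does the rest
        return kernels;
    }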
A convolution filter layer is defined with a syntax like this:
layerConvolve size 64x64 from input convolve {{0,-1,0},{-1,5,-1},{0,-1,0}}
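For comparison, the kernel given there is the familiar 3x3 sharpen filter, and it never changes. A rough sketch of what applying a single predetermined kernel looks like (again, illustrative code only, not neural2d's implementation; the border handling is simplified):

    #include <cstddef>
    #include <vector>

    // Apply a fixed 3x3 kernel to a grayscale image, skipping the one-pixel
    // border for brevity. The kernel is constant; nothing here is trained.
    std::vector<std::vector<float>> convolveFixed(
        const std::vector<std::vector<float>> &img)
    {
        const float k[3][3] = { {  0, -1,  0 },
                                { -1,  5, -1 },
                                {  0, -1,  0 } };   // sharpen kernel from the example
        std::size_t rows = img.size(), cols = img[0].size();
        std::vector<std::vector<float>> out(rows, std::vector<float>(cols, 0.0f));
        for (std::size_t r = 1; r + 1 < rows; ++r)
            for (std::size_t c = 1; c + 1 < cols; ++c)
                for (int dr = -1; dr <= 1; ++dr)
                    for (int dc = -1; dc <= 1; ++dc)
                        out[r][c] += k[dr + 1][dc + 1] * img[r + dr][c + dc];
        return out;
    }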
Personally, I’m happy with the configuration syntax, but is there less confusing terminology that we should use instead?