In neural2d, convolution network layers and pooling layers typically have a depth > 1, where the depth equals the number of kernels to train.
Previously, neural2d restricted how layers with depth could be connected: it assumed that a regular layer receiving input from a convolution network layer would always have a depth of one.
There was no good reason for that restriction, so neural2d now allows you to define regular layers with depth and connect them in any way to any other kind of layer. This means you can now insert a sparsely connected regular layer between two convolution network layers with depth > 1 while preserving the depth of the pipeline.
Previously, you could put a pooling layer between two convolution network layers, e.g.:
    layerConv1 size 20*128x128 from input convolve 9x9
    layerPool size 20*32x32 from layerConv1 pool max 4x4
    layerConv2 size 20*32x32 from layerPool convolve 5x5
With the relaxed restrictions, you now have the option to put a regular layer between two convolution network layers. With a radius, the regular layer can downsample the source layer and reduce the number of neurons, similar to a pooling layer:
    layerConv1 size 20*128x128 from input convolve 9x9
    layerDownsample size 20*32x32 from layerConv1 radius 3x3
    layerConv2 size 20*32x32 from layerDownsample convolve 5x5
With a pooling layer, each destination neuron takes its inputs from a rectangular patch of source neurons. With a regular layer with a radius, the inputs are taken from an elliptical patch instead, so you may want to choose the radius so that adjacent patches overlap (or not).
The new layer depth rules are as follows:
1. If the source and destination layers have equal depths, each destination neuron will connect only to neurons on the corresponding depth in the source layer.
2. Otherwise, each destination neuron will fully connect across all depths in the source layer.
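To illustrate both rules, here is a hypothetical topology fragment (the layer names and sizes are invented for this sketch, and the trailing comments assume your topology-file parser accepts # comments):

    input size 64x64
    layerConvA size 8*64x64 from input convolve 5x5        # depth 8
    layerConvB size 8*64x64 from layerConvA convolve 3x3   # equal depths: rule 1, depth-to-depth
    output size 10 from layerConvB                         # depths differ (8 vs. 1): rule 2, fully connected across all depths

In the layerConvA-to-layerConvB connection, each destination neuron sees only the matching depth plane of the source; in the layerConvB-to-output connection, each output neuron sees all eight depth planes.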