    Overview

    3L: all fully connected
    3L: SAE-CP + Softmax
    4L: SAE-CP + 2 fully connected
    4L: 2*SAE-CP + Softmax
    5L: 2*SAE-CP + 2 fully connected
    More ...

Overview
Here you can find neural network prototypes with different architectures. The prototypes are implemented in Octave/Matlab and tested on image recognition tasks. With a light modification of the input layer, they can be used for any type of classification, search in images, image recognition/detection applications, and much more.

Sources: https://github.com/neuro4j/neural-networks/tree/master/network-prototypes

 

3-layer network: all layers fully connected
[Figure: 3-layer neural network, multilayer perceptron]


For example, with an input image of 400 x 200 pixels x 1 channel (grayscale), a hidden layer of 10,000 neurons, and 500 output classes, the network has the following number of neurons in each layer:
L1 (400 x 200 x 1) -> L2 (10,000) -> L3 (500)
L1 (80,000) -> L2 (10,000) -> L3 (500)
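The forward pass of this fully connected network can be sketched as follows. This is a minimal NumPy illustration, not the repository's code (the actual prototypes are Octave/Matlab), and the dimensions are scaled down so it runs quickly; the full example above would use 80,000 -> 10,000 -> 500.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """3-layer MLP: input -> sigmoid hidden layer -> softmax output."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sigmoid(W1 @ x + b1)              # hidden activations (L2)
    logits = W2 @ h + b2                  # class scores (L3)
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

# Scaled-down dimensions for illustration (real example: 80,000 / 10,000 / 500)
n_in, n_hidden, n_out = 800, 100, 5
rng = np.random.default_rng(0)
x = rng.standard_normal(n_in)
W1, b1 = 0.01 * rng.standard_normal((n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = 0.01 * rng.standard_normal((n_out, n_hidden)), np.zeros(n_out)
p = mlp_forward(x, W1, b1, W2, b2)        # class probabilities, sums to 1
```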

You can try it with different settings.

Matlab/Octave Sources: https://github.com/neuro4j/neural-networks/tree/master/network-prototypes/net_3L_MLP

3-layer network: convolutional input layer (sparse autoencoders, convolution, pooling), softmax output layer
[Figure: 3-layer neural network]


This network uses a convolutional layer with max pooling, trained with sparse autoencoders, followed by a softmax output layer.
For example, with:
L1: input image of 400 x 200 pixels x 1 channel (grayscale);
L2: 100 features extracted with sparse autoencoders, patch size 6, pool size 15;
L3: 500 output classes;
the network has the following number of neurons in each layer:
L1 (400 x 200 x 1) -> L2 (26 x 13 x 100) -> L3 (500)
L1 (80,000) -> L2 (33,800) -> L3 (500)
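The L2 dimensions follow from a valid convolution over the image and then non-overlapping max pooling. A small sketch of that arithmetic (Python here for illustration; the prototypes themselves are Octave/Matlab, and this assumes remainder rows/columns after pooling are dropped):

```python
def sae_cp_output_shape(img_h, img_w, patch_size, pool_size, n_features):
    """Output shape of one SAE-CP layer: valid convolution over the
    image, then non-overlapping max pooling of the feature maps."""
    conv_h = img_h - patch_size + 1       # 400 - 6 + 1 = 395
    conv_w = img_w - patch_size + 1       # 200 - 6 + 1 = 195
    return conv_h // pool_size, conv_w // pool_size, n_features

shape = sae_cp_output_shape(400, 200, 6, 15, 100)
print(shape)                              # (26, 13, 100)
print(shape[0] * shape[1] * shape[2])     # 33800 units
```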

You can try it with different settings.

Matlab/Octave Sources: https://github.com/neuro4j/neural-networks/tree/master/network-prototypes/net_3L_SAE-CP_Softmax

4-layer network: convolutional input layer (sparse autoencoders, convolution, pooling), 2 fully connected output layers
[Figure: 4-layer deep convolutional neural network]


This network uses a convolutional layer with max pooling, trained with sparse autoencoders, followed by 2 fully connected output layers.
For example, with:
L1: input image of 400 x 200 pixels x 1 channel (grayscale);
L2: 100 features extracted with sparse autoencoders, patch size 6, pool size 15;
L3: 10,000 hidden units;
L4: 500 output classes;
the network has the following number of neurons in each layer:
L1 (400 x 200 x 1) -> L2 (26 x 13 x 100) -> L3 (10,000) -> L4 (500)
L1 (80,000) -> L2 (33,800) -> L3 (10,000) -> L4 (500)

You can try it with different settings.

Matlab/Octave Sources: https://github.com/neuro4j/neural-networks/tree/master/network-prototypes/net_4L_SAE-CP_MLP

4-layer network: 2 convolutional input layers (sparse autoencoders, convolution, pooling), softmax output layer
[Figure: 4-layer deep convolutional neural network]


This network uses a front end of two convolutional layers with max pooling, trained with sparse autoencoders, followed by a softmax output layer.
For example, with:
L1: input image of 400 x 200 pixels x 1 channel (grayscale);
L2: 100 features extracted with sparse autoencoders, patch size 6, pool size 10;
L3: 200 features extracted with sparse autoencoders, patch size 3, pool size 3;
L4: 500 output classes;
the network has the following number of neurons in each layer:
L1 (400 x 200 x 1) -> L2 (39 x 19 x 100) -> L3 (12 x 5 x 200) -> L4 (500)
L1 (80,000) -> L2 (74,100) -> L3 (12,000) -> L4 (500)
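The same size arithmetic chains through both convolutional layers: each one applies a valid convolution and then non-overlapping max pooling to the previous layer's spatial dimensions. A quick Python sketch of the chain (assuming remainder rows/columns after pooling are dropped, as the figures above suggest):

```python
def sae_cp_shape(h, w, patch_size, pool_size):
    """Spatial output of one SAE-CP layer: valid conv, then
    non-overlapping max pooling."""
    return (h - patch_size + 1) // pool_size, (w - patch_size + 1) // pool_size

h, w = 400, 200                     # L1: grayscale input image
h, w = sae_cp_shape(h, w, 6, 10)    # L2: (39, 19) -> 39 * 19 * 100 = 74,100 units
h, w = sae_cp_shape(h, w, 3, 3)     # L3: (12, 5)  -> 12 * 5 * 200  = 12,000 units
```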

You can try it with different settings.

Matlab/Octave Sources: https://github.com/neuro4j/neural-networks/tree/master/network-prototypes/net_4L_2xSAE-CP_Softmax

5-layer network: 2 convolutional input layers (sparse autoencoders, convolution, pooling), 2 fully connected output layers
[Figure: 5-layer deep convolutional neural network]


This network uses a front end of two convolutional layers with max pooling, trained with sparse autoencoders, followed by 2 fully connected output layers.
For example, with:
L1: input image of 400 x 200 pixels x 1 channel (grayscale);
L2: 100 features extracted with sparse autoencoders, patch size 6, pool size 10;
L3: 200 features extracted with sparse autoencoders, patch size 3, pool size 3;
L4: 10,000 hidden units;
L5: 500 output classes;
the network has the following number of neurons in each layer:
L1 (400 x 200 x 1) -> L2 (39 x 19 x 100) -> L3 (12 x 5 x 200) -> L4 (10,000) -> L5 (500)
L1 (80,000) -> L2 (74,100) -> L3 (12,000) -> L4 (10,000) -> L5 (500)

You can try it with different settings.

Matlab/Octave Sources: https://github.com/neuro4j/neural-networks/tree/master/network-prototypes/net_5L_2xSAE-CP_MLP

More ...

Deeper networks can be designed using different combinations of the layers listed above; for example, you can stack more convolutional layers. New networks with different features are going to be posted soon.
