Briefly Introducing Fully-Connected Layers
- As seen in regular neural networks, neurons in a fully connected layer have full connections to all activations in the previous layer
- This layer takes an input volume and outputs an $n$-dimensional vector, where $n$ is the number of classes we are considering
- For example, if we wanted to classify input images as a number between $0$ and $9$, then $n = 10$
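As a concrete sketch, a fully-connected layer is just a matrix-vector product: every output unit gets one weight per input activation, plus a bias. The specific sizes below (400 flattened inputs, $n = 10$ classes) are illustrative assumptions, not fixed by the layer itself.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_classes = 400, 10          # e.g. a 5x5x16 volume flattened to 400 activations
x = rng.standard_normal(n_in)      # activations from the previous layer
W = rng.standard_normal((n_classes, n_in)) * 0.01  # full connections: one weight per (input, output) pair
b = np.zeros(n_classes)

logits = W @ x + b                 # fully-connected forward pass
print(logits.shape)                # (10,) -- an n-dimensional output vector
```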
Illustrating LeNet-5 Implementation
Layer | Activation Shape | Activation Size | # Parameters |
---|---|---|---|
$x$ | $(32, 32, 3)$ | $3072$ | $0$ |
CONV1 | $(28, 28, 8)$ | $6272$ | $608$ |
POOL1 | $(14, 14, 8)$ | $1568$ | $0$ |
CONV2 | $(10, 10, 16)$ | $1600$ | $3216$ |
POOL2 | $(5, 5, 16)$ | $400$ | $0$ |
FC3 | $(120, 1)$ | $120$ | $48120$ |
FC4 | $(84, 1)$ | $84$ | $10164$ |
Softmax | $(10, 1)$ | $10$ | $850$ |
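Assuming the standard hyperparameters for this network ($5 \times 5$ filters with stride $1$ and no padding for the CONV layers; $2 \times 2$ max pooling with stride $2$), the activation shapes and parameter counts can be recomputed layer by layer:

```python
def conv(h, w, c_in, f, c_out):
    """Valid convolution: output spatial dims shrink by f - 1."""
    params = (f * f * c_in + 1) * c_out   # +1 for each filter's bias
    return (h - f + 1, w - f + 1, c_out), params

def pool(h, w, c, f=2, s=2):
    """Max pooling has no learnable parameters."""
    return (h // s, w // s, c), 0

def fc(n_in, n_out):
    """Fully-connected layer: weights + biases."""
    return (n_out,), n_in * n_out + n_out

shape = (32, 32, 3)                       # input image x
shape, p1 = conv(*shape, f=5, c_out=8)    # CONV1 -> (28, 28, 8)
shape, _  = pool(*shape)                  # POOL1 -> (14, 14, 8)
shape, p3 = conv(*shape, f=5, c_out=16)   # CONV2 -> (10, 10, 16)
shape, _  = pool(*shape)                  # POOL2 -> (5, 5, 16)
n = shape[0] * shape[1] * shape[2]        # flatten to 400
(_,), p5 = fc(n, 120)                     # FC3
(_,), p6 = fc(120, 84)                    # FC4
(_,), p7 = fc(84, 10)                     # Softmax

print(p1, p3, p5, p6, p7)                 # 608 3216 48120 10164 850
```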
Observations from LeNet-5 Case Study
- There are a few common patterns found throughout convolutional neural networks, which can be observed in the LeNet-5 network
- As we go deeper in our network:
  - $n_H$ and $n_W$ tend to decrease
  - $n_C$ tends to increase
- Convolutional layers have very few parameters
- Pooling layers don't have any parameters
- Fully-connected layers have the most parameters in our network
- The size of our activations gradually decrease as we go deeper
- Performance tends to suffer if the size of the activations decreases too quickly or too slowly as we travel deeper in our network
- Typically, a convolutional network will follow this pattern: $[\text{CONV} \to \text{POOL}]^{n} \to [\text{FC}]^{m} \to \text{Softmax}$
- Here, the $[\text{CONV} \to \text{POOL}]^{n}$ denotes repeating CONV and POOL layers
- And, the $[\text{FC}]^{m}$ denotes repeating FC layers
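The pattern above can be sketched by expanding the repeats. With $n = 2$ CONV-POOL blocks and $m = 2$ FC layers (the counts are illustrative, chosen to match the LeNet-5-style network above):

```python
def cnn_pattern(n_conv_blocks, n_fc):
    """Expand the typical [CONV -> POOL]^n -> [FC]^m -> Softmax pattern."""
    layers = []
    for _ in range(n_conv_blocks):       # repeated CONV and POOL layers
        layers += ["CONV", "POOL"]
    layers += ["FC"] * n_fc              # repeated FC layers
    layers.append("SOFTMAX")             # final classification layer
    return layers

print(" -> ".join(cnn_pattern(2, 2)))
# CONV -> POOL -> CONV -> POOL -> FC -> FC -> SOFTMAX
```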
tldr
- As seen in regular neural networks, neurons in a fully connected layer have full connections to all activations in the previous layer
- This layer takes an input volume and outputs an $n$-dimensional vector, where $n$ is the number of classes we are considering
- For example, if we wanted to classify input images as a number between $0$ and $9$, then $n = 10$