Coursera - Andrew Ng - Deep Learning - Course 4 - Convolutional Neural Networks - Week 1 - Programming Assignment
This article covers:
Coursera, Andrew Ng's Deep Learning Specialization,
Course 4: Convolutional Neural Networks,
Week 1: Foundations of Convolutional Neural Networks,
the programming assignment, kept as a notebook of the mistakes I made.
Convolutional Neural Networks: Step by Step
2 - Outline of the Assignment
You will be implementing the building blocks of a convolutional neural network! The steps needed are:
- Convolution functions, including:
- Zero Padding
- Convolve window
- Convolution forward
- Convolution backward (optional)
- Pooling functions, including:
- Pooling forward
- Create mask
- Distribute value
- Pooling backward (optional)
These building blocks are then combined into the full convolutional model.
Note that for every forward function, there is its corresponding backward equivalent.
Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation.
3 - Convolutional Neural Networks
A convolution layer transforms an input volume into an output volume of a different size.
In this part, you will build every step of the convolution layer.
You will first implement two helper functions: one for zero-padding and one for computing the convolution function itself.
3.1 - Zero-Padding
Zero-padding adds zeros around the border of an image.
The main benefits of padding are the following:
- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.
- It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.
# Pad only the height and width axes of X (shape m, n_H, n_W, n_C); leave the batch and channel axes untouched.
X_pad = np.pad(X, ((0,0), (pad,pad), (pad,pad), (0,0)), 'constant', constant_values = 0)
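Wrapped as a function, this becomes the zero_pad helper. The sketch below is a minimal version assuming the assignment's conventions (X is a batch of shape (m, n_H, n_W, n_C)); the test values are only illustrative.

```python
import numpy as np

def zero_pad(X, pad):
    """Pad every image in the batch X with `pad` zeros around its height and width."""
    X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                   'constant', constant_values=0)
    return X_pad

# A (4, 3, 3, 2) batch padded by 2 becomes (4, 7, 7, 2).
x = np.random.randn(4, 3, 3, 2)
print(zero_pad(x, 2).shape)
```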
3.2 - Single step of convolution
Implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which:
- Takes an input volume
- Applies a filter at every position of the input
- Outputs another volume (usually of different size)
In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output.
s = a_slice_prev * W
Z = np.sum(s)
Z = Z + b

# Element-wise product between a_slice and W. Do not add the bias yet.
s = np.multiply(a_slice_prev, W)
# Sum over all entries of the volume s.
Z = np.sum(s)
# Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
Z = float(b) + Z

Points to note: the use of float(b) and the use of np.multiply(a_slice_prev, W).
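Putting this together, here is a minimal sketch of the single-step function, assuming the assignment's conv_single_step signature (a_slice_prev and W of shape (f, f, n_C_prev), b of shape (1, 1, 1)):

```python
import numpy as np

def conv_single_step(a_slice_prev, W, b):
    """Apply one filter W (plus bias b) to a single slice of the input volume."""
    s = np.multiply(a_slice_prev, W)  # element-wise product, no bias yet
    Z = np.sum(s)                     # sum over all entries of the volume s
    Z = Z + float(b)                  # add the bias, cast so Z is a scalar
    return Z

# Example: one 3x3 filter over a 3x3x4 slice yields a single real number.
a_slice = np.random.randn(3, 3, 4)
W = np.random.randn(3, 3, 4)
b = np.random.randn(1, 1, 1)
print(conv_single_step(a_slice, W, b))
```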
3.3 - Convolutional Neural Networks - Forward pass
In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume:
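A condensed sketch of that forward pass, reusing the zero_pad and conv_single_step sketches above. It assumes the assignment's conv_forward conventions: A_prev of shape (m, n_H_prev, n_W_prev, n_C_prev), W of shape (f, f, n_C_prev, n_C), b of shape (1, 1, 1, n_C), and stride/pad supplied in a hparameters dict. The output height is n_H = ⌊(n_H_prev − f + 2·pad)/stride⌋ + 1, and likewise for the width.

```python
def conv_forward(A_prev, W, b, hparameters):
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    (f, f, n_C_prev, n_C) = W.shape
    stride, pad = hparameters["stride"], hparameters["pad"]

    # Output spatial dimensions of the conv layer.
    n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = int((n_W_prev - f + 2 * pad) / stride) + 1

    Z = np.zeros((m, n_H, n_W, n_C))
    A_prev_pad = zero_pad(A_prev, pad)

    for i in range(m):                       # loop over the batch
        a_prev_pad = A_prev_pad[i]
        for h in range(n_H):                 # vertical axis of the output
            vert_start, vert_end = h * stride, h * stride + f
            for w in range(n_W):             # horizontal axis of the output
                horiz_start, horiz_end = w * stride, w * stride + f
                for c in range(n_C):         # one 2D output per filter, stacked along c
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    Z[i, h, w, c] = conv_single_step(a_slice_prev, W[..., c], b[..., c])

    cache = (A_prev, W, b, hparameters)      # saved for the backward pass
    return Z, cache
```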
4 - Pooling layer
The pooling (POOL) layer reduces the height and width of the input.
It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input.
The two types of pooling layers are:
- Max-pooling layer: slides an (f, f) window over the input and stores the max value of the window in the output.
- Average-pooling layer: slides an (f, f) window over the input and stores the average value of the window in the output.
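For instance, on the same 2×2 window the two modes keep different summaries (a tiny illustrative check):

```python
import numpy as np

window = np.array([[1., 3.],
                   [2., 9.]])
print(np.max(window))   # 9.0  -> value a max-pooling layer would store
print(np.mean(window))  # 3.75 -> value an average-pooling layer would store
```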
These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size f. This specifies the height and width of the f×f window you would compute a max or average over.
4.1 - Forward Pooling
Now, you are going to implement MAX-POOL and AVG-POOL, in the same function.
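A minimal sketch of such a combined function, assuming the assignment's pool_forward conventions (hparameters carries f and stride, and mode is either "max" or "average"):

```python
def pool_forward(A_prev, hparameters, mode="max"):
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    f, stride = hparameters["f"], hparameters["stride"]

    # Pooling keeps the number of channels and shrinks height/width.
    n_H = int((n_H_prev - f) / stride) + 1
    n_W = int((n_W_prev - f) / stride) + 1
    n_C = n_C_prev

    A = np.zeros((m, n_H, n_W, n_C))
    for i in range(m):
        for h in range(n_H):
            vert_start, vert_end = h * stride, h * stride + f
            for w in range(n_W):
                horiz_start, horiz_end = w * stride, w * stride + f
                for c in range(n_C):
                    a_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
                    A[i, h, w, c] = np.max(a_slice) if mode == "max" else np.mean(a_slice)

    cache = (A_prev, hparameters)  # A_prev is needed for pooling backward
    return A, cache
```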
5 - Backpropagation in convolutional neural networks
In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated.
When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters.
5.1 - Convolutional layer backward pass
5.1.1 - Computing dA:
This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:

$$dA \mathrel{+}= \sum_{h=0}^{n_H} \sum_{w=0}^{n_W} W_c \times dZ_{hw}$$

Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that each time, we multiply the same filter $W_c$ by a different $dZ$ when updating $dA$. We do so mainly because when computing the forward propagation, each filter is dotted and summed with a different a_slice. Therefore when computing the backprop for $dA$, we are just adding the gradients of all the a_slices.
5.1.2 - Computing dW:
This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:

$$dW_c \mathrel{+}= \sum_{h=0}^{n_H} \sum_{w=0}^{n_W} a_{slice} \times dZ_{hw}$$

Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$.
5.1.3 - Computing db:
This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:

$$db = \sum_h \sum_w dZ_{hw}$$

As you have previously seen in basic neural networks, $db$ is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output ($Z$) with respect to the cost.
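A condensed sketch showing where the three updates above sit inside the backward-pass loops. It assumes conv_backward receives dZ of shape (m, n_H, n_W, n_C) plus the cache saved by conv_forward, and reuses the variable names from the earlier sketches; treat it as an outline rather than the graded solution.

```python
def conv_backward(dZ, cache):
    (A_prev, W, b, hparameters) = cache
    (f, f, n_C_prev, n_C) = W.shape
    (m, n_H, n_W, n_C) = dZ.shape
    stride, pad = hparameters["stride"], hparameters["pad"]

    dA_prev = np.zeros(A_prev.shape)
    dW = np.zeros(W.shape)
    db = np.zeros(b.shape)

    A_prev_pad = zero_pad(A_prev, pad)
    dA_prev_pad = zero_pad(dA_prev, pad)

    for i in range(m):
        a_prev_pad = A_prev_pad[i]
        da_prev_pad = dA_prev_pad[i]
        for h in range(n_H):
            for w in range(n_W):
                for c in range(n_C):
                    vert_start, vert_end = h * stride, h * stride + f
                    horiz_start, horiz_end = w * stride, w * stride + f
                    a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # dA update: accumulate W_c * dZ_hw into the matching input slice.
                    da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:, :, :, c] * dZ[i, h, w, c]
                    # dW update: accumulate a_slice * dZ_hw for filter c.
                    dW[:, :, :, c] += a_slice * dZ[i, h, w, c]
                    # db update: sum dZ over all positions for filter c.
                    db[:, :, :, c] += dZ[i, h, w, c]
        # Strip the zero padding to recover the gradient of the unpadded input.
        if pad > 0:
            dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
        else:
            dA_prev[i, :, :, :] = da_prev_pad

    return dA_prev, dW, db
```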