Andrew Ng Deep Learning, Course 2, Week 1, Assignment 3: Gradient Checking
1. deeplearning-assignment
Backpropagation in a neural network is complicated, and at some point you need to verify that the implementation is actually correct. This is where "gradient checking" comes in.
Backpropagation computes the gradients ∂J/∂θ, where θ denotes the parameters of the model. J is computed using forward propagation together with the loss function. Because the forward pass is relatively simple to implement, you can be confident that J is computed correctly. Let us now go back to the definition of the derivative (or gradient):

∂J/∂θ = lim(ε→0) [J(θ + ε) - J(θ - ε)] / (2ε)
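A quick aside (not part of the assignment): the two-sided formula is preferred over the one-sided difference (J(θ + ε) - J(θ)) / ε because its approximation error shrinks like ε² rather than ε. A minimal sketch illustrating this on a toy function with a known derivative:

# Quick numerical illustration (not from the assignment): compare the one-sided
# and two-sided difference approximations on f(theta) = theta ** 3, whose exact
# derivative is 3 * theta ** 2.
def f(theta):
    return theta ** 3

theta, eps = 1.5, 1e-3
exact = 3 * theta ** 2                                     # 6.75
one_sided = (f(theta + eps) - f(theta)) / eps
two_sided = (f(theta + eps) - f(theta - eps)) / (2 * eps)
print(abs(one_sided - exact))   # about 4.5e-3, error on the order of eps
print(abs(two_sided - exact))   # about 1e-6, error on the order of eps ** 2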
Consider the one-dimensional linear function J(θ) = θx. The model contains only a single real-valued parameter θ and takes x as its input.
You will implement code that computes J(.) and its derivative, and then use "Gradient Checking" to make sure that your computation of the derivative of J is correct.
The principle of gradient checking: approximate the gradient numerically with the two-sided difference gradapprox = [J(θ + ε) - J(θ - ε)] / (2ε), then compare it with the gradient grad returned by backpropagation through the relative difference difference = ||grad - gradapprox|| / (||grad|| + ||gradapprox||). If the difference is tiny (on the order of 1e-7), the backpropagation gradients are almost certainly correct; a large value points to a bug in the backward pass.
2. Algorithm code
import numpy as np
from week1.testCases import gradient_check_n_test_case
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
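The two imports above come from the course's helper files (week1.testCases and gc_utils). If gc_utils is not at hand, the two activation helpers can be defined directly; the vector/dictionary conversion helpers (dictionary_to_vector, vector_to_dictionary, gradients_to_vector) are course-specific and are not reproduced here. A minimal stand-in for the activations (an assumption, not the course implementation):

# Assumed stand-ins for sigmoid and relu from gc_utils (the course files define
# their own versions; these behave the same way for this exercise).
def sigmoid_standin(x):
    return 1. / (1. + np.exp(-x))

def relu_standin(x):
    return np.maximum(0, x)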
def forward_propagation(x, theta):
    """
    :param x: a real-valued input
    :param theta: our parameter, a real number as well
    :return: J -- the value of function J, computed using the formula J(theta) = theta * x
    """
    J = theta * x
    return J


# x, theta = 2, 4
# J = forward_propagation(x, theta)
# print("J = " + str(J))
def backward_propagation(x, theta):
    """
    :param x: a real-valued input
    :param theta: our parameter, a real number as well
    :return: dtheta -- the gradient of the cost with respect to theta
    """
    # For J(theta) = theta * x, the derivative dJ/dtheta is simply x
    dtheta = x
    return dtheta


# x, theta = 2, 4
# dtheta = backward_propagation(x, theta)
# print("dtheta = " + str(dtheta))
def gradient_check(x, theta, epsilon=1e-7):
    """
    :param x: a real-valued input
    :param theta: our parameter, a real number as well
    :param epsilon: tiny shift to the input to compute approximated gradient with formula (1)
    :return: difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    # Numerical approximation of the gradient with the two-sided difference
    thetaplus = theta + epsilon
    thetaminus = theta - epsilon
    J_plus = forward_propagation(x, thetaplus)
    J_minus = forward_propagation(x, thetaminus)
    gradapprox = (J_plus - J_minus) / (2 * epsilon)
    # Gradient computed by backpropagation
    grad = backward_propagation(x, theta)
    # Relative difference between the two
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    difference = numerator / denominator
    if difference < 1e-7:
        print("The gradient is correct!")
    else:
        print("The gradient is wrong!")
    return difference


# x, theta = 2, 4
# difference = gradient_check(x, theta)
# print("difference = " + str(difference))
def forward_propagation_n(X, Y, parameters):
    """
    :param X: training set for m examples
    :param Y: labels for m examples
    :param parameters: python dictionary containing
            W1 -- weight matrix of shape (5, 4)
            b1 -- bias vector of shape (5, 1)
            W2 -- weight matrix of shape (3, 5)
            b2 -- bias vector of shape (3, 1)
            W3 -- weight matrix of shape (1, 3)
            b3 -- bias vector of shape (1, 1)
    :return: cost -- the logistic (cross-entropy) cost averaged over the m examples
             cache -- tuple of intermediate values used by backward_propagation_n
    """
    m = X.shape[1]
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]
    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)
    # Cost
    logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
    cost = 1. / m * np.sum(logprobs)
    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
    return cost, cache
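For reference, the parameter shapes above fit together with an input of shape (4, m). The snippet below is a hypothetical stand-in for the course's test data (gradient_check_n_test_case returns its own fixed values; this only reproduces the shapes):

# Hypothetical stand-in for gradient_check_n_test_case(): random data with the
# shapes documented above (the real helper returns fixed course values).
np.random.seed(1)
X_demo = np.random.randn(4, 3)                        # 4 input features, 3 examples
Y_demo = (np.random.rand(1, 3) > 0.5).astype(float)   # binary labels
params_demo = {"W1": np.random.randn(5, 4), "b1": np.zeros((5, 1)),
               "W2": np.random.randn(3, 5), "b2": np.zeros((3, 1)),
               "W3": np.random.randn(1, 3), "b3": np.zeros((1, 1))}
cost_demo, _ = forward_propagation_n(X_demo, Y_demo, params_demo)
print("cost = " + str(cost_demo))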
def backward_propagation_n(X, Y, cache):
    """
    :param X: input data, of shape (input size, number of examples)
    :param Y: true "label"
    :param cache: cache output from forward_propagation_n()
    :return: gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
    dZ3 = A3 - Y
    dW3 = 1. / m * np.dot(dZ3, A2.T)
    db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)
    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))  # ReLU derivative: gradient passes only where the unit was active
    # dZ2 = np.multiply(dA2, Z2)              # incorrect version, kept commented out
    dW2 = 1. / m * np.dot(dZ2, A1.T)
    db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)
    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    # dZ1 = np.multiply(dA1, Z1)              # incorrect version, kept commented out
    dW1 = 1. / m * np.dot(dZ1, X.T)
    db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)
    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
    return gradients
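The factor np.int64(A2 > 0) used above is the derivative of ReLU written as a 0/1 mask: the gradient flows only through units whose activation was positive. A tiny illustration (not part of the assignment):

# Illustration: the mask used in backward_propagation_n is the ReLU derivative
# (taking the derivative at 0 to be 0).
Z = np.array([[-2.0, 0.0, 0.5, 3.0]])
A = relu(Z)
print(np.int64(A > 0))   # [[0 0 1 1]]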
def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    """
    :param parameters: python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
    :param gradients: output of backward_propagation_n, contains gradients of the cost with respect to the parameters
    :param X: input data, of shape (input size, number of examples)
    :param Y: true "label"
    :param epsilon: tiny shift to the input to compute approximated gradient with formula (1)
    :return: difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    # Flatten all parameters and all gradients into column vectors
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))
    # Approximate the partial derivative with respect to each parameter in turn
    for i in range(num_parameters):
        thetaplus = np.copy(parameters_values)
        thetaplus[i][0] = thetaplus[i][0] + epsilon
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))
        thetaminus = np.copy(parameters_values)
        thetaminus[i][0] = thetaminus[i][0] - epsilon
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
    # Relative difference between the backprop gradients and the numerical approximation
    numerator = np.linalg.norm(gradapprox - grad)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    difference = numerator / denominator
    if difference > 1e-6:
        print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
    return difference
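One reason gradient checking is slow is visible in the loop above: every single entry of the parameter vector is perturbed twice, so one check costs 2 × num_parameters calls to forward_propagation_n. With the shapes used in this example there are 5·4 + 5 + 3·5 + 3 + 1·3 + 1 = 47 parameters, i.e. 94 forward passes for a single check.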
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
Running this, the reported difference is large. Debugging the code reveals mistakes in the lines computing dW2 and db1; after correcting them and re-running, the correct result is obtained.
3. Summary
- Gradient checking is very slow. For this reason we do not run it at every training iteration; we only check occasionally that the gradients computed by backpropagation are correct.
- Do not run gradient checking while dropout regularization is active. First turn dropout off and use gradient checking to confirm that backpropagation is correct; only then turn dropout back on (see the sketch below).
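A rough sketch of how these two points combine in practice (an illustration only, not part of the assignment; the actual training step is elided, and this particular network has no dropout, so its forward and backward passes can be reused directly for the check):

# Sketch: run the expensive check only every `check_every` iterations, and only on
# gradients computed with dropout disabled.
check_every = 1000
for i in range(10000):
    # ... one ordinary training step (possibly with dropout) would go here ...
    if i % check_every == 0:
        cost, cache = forward_propagation_n(X, Y, parameters)   # forward pass without dropout
        grads = backward_propagation_n(X, Y, cache)
        gradient_check_n(parameters, grads, X, Y)               # verify, then keep training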