RNN (Part 2): Forward Pass and BPTT
Tags (space-separated): RNN BPTT
basic definition
To simplify notation, the RNN considered here contains only one input layer, one hidden layer, and one output layer. The notation is listed below:
| neural layer | node | index | size |
|---|---|---|---|
| input layer | x(t) | i | N |
| previous hidden layer | s(t-1) | h | M |
| hidden layer | s(t) | j | M |
| output layer | y(t) | k | O |
| input->hidden | V(t) | i,j | N->M |
| previous hidden->hidden | U(t) | h,j | M->M |
| hidden->output | W(t) | j,k | M->O |
Besides, P is the total number of available training samples, indexed by l.
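To make the index convention concrete (for example, $v_{ji}$ connects input unit $i$ to hidden unit $j$, so $V$ has shape $M \times N$), here is a minimal NumPy sketch of the parameter shapes; the concrete dimension values, the random initialization, and all variable names are assumptions for illustration only:

```python
import numpy as np

# Assumed toy dimensions matching the table: N inputs, M hidden units, O outputs.
N, M, O = 4, 8, 3

rng = np.random.default_rng(0)

V = rng.normal(scale=0.1, size=(M, N))   # v[j, i]: input -> hidden
U = rng.normal(scale=0.1, size=(M, M))   # u[j, h]: previous hidden -> hidden
W = rng.normal(scale=0.1, size=(O, M))   # w[k, j]: hidden -> output
theta = np.zeros(M)                      # theta[j]: hidden-layer bias
```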
forward
1. input->hidden
$$\mathrm{net}_j(t) = \sum_{i}^{N} x_i(t)\, v_{ji} + \sum_{h}^{M} s_h(t-1)\, u_{jh} + \theta_j$$

where $\theta_j$ is the bias of hidden unit $j$. The hidden activation is then $s_j(t) = f(\mathrm{net}_j(t))$ for an activation function $f$.
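A minimal NumPy sketch of this step, reusing the shapes defined above; the tanh hidden activation and the linear (identity) output layer are assumptions not fixed by the text:

```python
def forward_step(x_t, s_prev, V, U, W, theta, f=np.tanh):
    """One forward step of the RNN.

    Computes net_j(t) = sum_i x_i(t) v_ji + sum_h s_h(t-1) u_jh + theta_j,
    then s_j(t) = f(net_j(t)) and a linear output y_k(t) = sum_j w_kj s_j(t).
    """
    net_t = V @ x_t + U @ s_prev + theta  # net_j(t), shape (M,)
    s_t = f(net_t)                        # s_j(t), shape (M,)
    y_t = W @ s_t                         # y_k(t), shape (O,)
    return net_t, s_t, y_t


# Usage: unroll over a sequence of T inputs, carrying s(t-1) forward.
T = 5
xs = rng.normal(size=(T, N))
s = np.zeros(M)                           # initial hidden state s(0)
for t in range(T):
    net, s, y = forward_step(xs[t], s, V, U, W, theta)
```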
For training sample $l$, the error of hidden unit $j$ is obtained by propagating the output-layer errors back through $W$:

$$\delta_{lj} = -\left(\sum_{k}^{O} \frac{\partial C}{\partial y_{lk}} \frac{\partial y_{lk}}{\partial \mathrm{net}_{lk}} \frac{\partial \mathrm{net}_{lk}}{\partial s_{lj}}\right) \frac{\partial s_{lj}}{\partial \mathrm{net}_{lj}} = \left(\sum_{k}^{O} \delta_{lk}\, w_{kj}\right) f'(\mathrm{net}_{lj})$$

where $C$ is the cost function and $\delta_{lk} = -\partial C / \partial \mathrm{net}_{lk}$ is the error of output unit $k$.
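As a sketch of how this delta would be computed, continuing the example above; the squared-error cost, the linear output layer, and the tanh derivative are assumptions:

```python
def hidden_delta(delta_out, W, net_t, f_prime=lambda z: 1.0 - np.tanh(z) ** 2):
    """delta_j = (sum_k delta_k * w_kj) * f'(net_j) for one sample."""
    return (W.T @ delta_out) * f_prime(net_t)


# With C = 0.5 * sum_k (target_k - y_k)^2 and a linear output layer,
# the output-layer error is delta_k = -dC/dnet_k = target_k - y_k.
target = rng.normal(size=O)
delta_k = target - y                      # output-layer errors delta_lk
delta_j = hidden_delta(delta_k, W, net)   # hidden-layer errors delta_lj
```

Note that this covers only the error arriving through the output layer at time t; in full BPTT the hidden delta also receives a contribution from s(t+1) flowing back through U.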