The Attention Mechanism in Natural Language Processing
Attention in NLP
Advantages:
- integrates information over time
- handles variable-length sequences
- can be parallelized
Seq2seq
Encoder–Decoder framework:
Encoder:
The encoder reads the input sentence x = (x_1, ..., x_Tx) into hidden states h_t = f(x_t, h_{t-1}) and compresses them into a context vector c = q({h_1, ..., h_Tx}). Sutskever et al. (2014) used an LSTM as f and q({h_1, ..., h_T}) = h_T.
Decoder:
The decoder is trained to predict the next word given the context vector c and all previously predicted words, i.e. p(y) = ∏_t p(y_t | {y_1, ..., y_{t-1}}, c), with each conditional modelled as p(y_t | {y_1, ..., y_{t-1}}, c) = g(y_{t-1}, s_t, c), where s_t is the decoder hidden state.
Learning to Align and Translate ([1])
Decoder:
each conditional probability:
p(y_i | y_1, ..., y_{i-1}, x) = g(y_{i-1}, s_i, c_i), where s_i = f(s_{i-1}, y_{i-1}, c_i) is the decoder hidden state at step i.
context vector:
c_i = Σ_j α_ij · h_j, with weights α_ij = exp(e_ij) / Σ_k exp(e_ik) and alignment scores e_ij = a(s_{i-1}, h_j).
In [1], the alignment model a is a small feed-forward network trained jointly with the rest of the system: e_ij = v_a^T · tanh(W_a · s_{i-1} + U_a · h_j).
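A minimal numpy sketch of one step of this additive (Bahdanau-style) attention; the shapes and parameter names (W_a, U_a, v_a) are illustrative, not tied to any particular implementation:

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    return np.exp(x) / np.exp(x).sum()

def additive_attention(s_prev, H, W_a, U_a, v_a):
    """s_prev: previous decoder state (n,); H: encoder annotations h_1..h_Tx (Tx, m)."""
    # e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j): one alignment score per source position
    e = np.tanh(s_prev @ W_a.T + H @ U_a.T) @ v_a   # (Tx,)
    alpha = softmax(e)                              # attention weights alpha_ij
    c = alpha @ H                                   # context vector c_i: weighted sum of annotations
    return c, alpha

# toy usage with random parameters
rng = np.random.default_rng(0)
Tx, m, n, d = 5, 8, 6, 7
H = rng.normal(size=(Tx, m))
s_prev = rng.normal(size=(n,))
W_a, U_a, v_a = rng.normal(size=(d, n)), rng.normal(size=(d, m)), rng.normal(size=(d,))
c, alpha = additive_attention(s_prev, H, W_a, U_a, v_a)
print(alpha.round(3), c.shape)
```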
https://zh.gluon.ai/chapter_natural-language-processing/attention.html
Kinds of attention
Hard and soft attention
Hard attention focuses on a very small region (in the extreme case a single position, chosen by sampling, as in [2]), while soft attention spreads its focus over all positions as a differentiable weighted average.
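A minimal sketch of the difference, assuming the alignment scores over T positions have already been computed (soft = expectation over all positions, hard = sample one position):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4
H = rng.normal(size=(T, d))         # annotations h_1..h_T
scores = rng.normal(size=(T,))      # unnormalized alignment scores
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                # softmax weights

soft_context = alpha @ H            # soft: differentiable weighted average over all positions
hard_index = rng.choice(T, p=alpha) # hard: sample a single position (non-differentiable;
hard_context = H[hard_index]        #       typically trained with REINFORCE-style estimators)
```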
Global and local attention
Four ways to compute the alignment (score) function, from Luong et al. [3]:
- dot: score(h_t, h̄_s) = h_t^T · h̄_s
- general: score(h_t, h̄_s) = h_t^T · W_a · h̄_s
- concat: score(h_t, h̄_s) = v_a^T · tanh(W_a · [h_t; h̄_s])
- location-based: a_t = softmax(W_a · h_t), which uses only the target state h_t
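A numpy sketch of the three content-based score functions (matrix shapes are illustrative); the attention weights are then a softmax of these scores over all source positions:

```python
import numpy as np

def score_dot(h_t, h_s):
    # dot: h_t^T h_s (target and source states must have equal dimension)
    return h_t @ h_s

def score_general(h_t, h_s, W_a):
    # general: h_t^T W_a h_s
    return h_t @ W_a @ h_s

def score_concat(h_t, h_s, W_a, v_a):
    # concat: v_a^T tanh(W_a [h_t; h_s])
    return v_a @ np.tanh(W_a @ np.concatenate([h_t, h_s]))
```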
Summary:
Attention in feed-forward NNs
simplified version of attention ([4]): the score depends only on the current hidden state, e_t = a(h_t); the weights are α_t = exp(e_t) / Σ_k exp(e_k); and the whole sequence is summarized by the fixed-length vector c = Σ_t α_t · h_t.
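A minimal numpy sketch of this simplified attention, assuming the (illustrative) choice a(h_t) = v^T · tanh(W · h_t) for the scoring function:

```python
import numpy as np

def feed_forward_attention(H, W, v):
    """H: hidden states (T, d); W, v: learned parameters of the score a(h_t)."""
    e = np.tanh(H @ W.T) @ v      # e_t = a(h_t): one score per time step, no decoder query
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()          # alpha_t = softmax(e)_t
    return alpha @ H              # c = sum_t alpha_t h_t: fixed-length summary of the sequence
```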
Hierarchical Attention
word level attention (from [5]):
u_it = tanh(W_w · h_it + b_w), α_it = exp(u_it^T · u_w) / Σ_t exp(u_it^T · u_w), s_i = Σ_t α_it · h_it, where u_w is a trainable word-level context vector and s_i is the resulting sentence vector.
sentence level attention:
u_i = tanh(W_s · h_i + b_s), α_i = exp(u_i^T · u_s) / Σ_i exp(u_i^T · u_s), v = Σ_i α_i · h_i, where u_s is a trainable sentence-level context vector and v is the document vector used for classification.
inner attention mechanism:
Each annotation is first passed to a dense layer. An alignment coefficient is then derived by comparing the output of the dense layer with a trainable context vector (initialized randomly) and normalizing with a softmax. The attentional vector is finally obtained as a weighted sum of the annotations.
The score can in theory be any alignment function; a straightforward choice is the dot product. The context vector can be interpreted as a representation of the optimal word, on average. When faced with a new example, the model uses this knowledge to decide which word it should pay attention to. During training, backpropagation updates the context vector, i.e., the model adjusts its internal representation of what the optimal word is.
Note: The context vector in the definition of inner-attention above has nothing to do with the context vector used in seq2seq attention!
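A minimal numpy sketch of this inner attention, which is the same computation as the word-level attention in HAN above; the names W, b, and u are illustrative:

```python
import numpy as np

def inner_attention(H, W, b, u):
    """H: annotations (T, d); W, b: dense layer; u: trainable context vector."""
    U = np.tanh(H @ W.T + b)      # dense layer applied to each annotation
    e = U @ u                     # compare with the context vector (dot-product score)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()          # softmax over the T annotations
    return alpha @ H              # attentional vector: weighted sum of the annotations
```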
self-attention
In self-attention, K = V = Q: given an input sentence, every word attends to all the words in that same sentence. The goal is to learn the dependencies between words inside the sentence and to capture its internal structure (see [7]).
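A minimal numpy sketch of one head of scaled dot-product self-attention as in [7]; the projection matrices W_q, W_k, W_v are illustrative parameters:

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """X: input sequence (T, d_model); returns one head of scaled dot-product self-attention."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v          # Q, K, V all come from the same sentence
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # every word scored against every word
    alpha = np.exp(scores - scores.max(axis=-1, keepdims=True))
    alpha /= alpha.sum(axis=-1, keepdims=True)   # row-wise softmax
    return alpha @ V                             # each position becomes a mix of all positions
```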
Conclusion
The attention function can essentially be described as a mapping from a query to a set of key-value pairs.
Think of the elements of the Source as a collection of <Key, Value> pairs. Given a Query element from the Target, attention computes the similarity (or relevance) between the Query and each Key to obtain a weight for the corresponding Value, and then takes a weighted sum of the Values to produce the final attention output. In essence, the attention mechanism is a weighted sum of the Values of the elements in the Source, where the Query and the Keys are only used to compute the weights of the corresponding Values.
Abstracting over most current approaches, the attention computation can be summarized in three stages: first, compute the similarity or relevance between the Query and each Key; second, normalize the raw scores from the first stage (typically with a softmax); third, take a weighted sum of the Values using the resulting weights. A minimal sketch of these three stages is given after the two cases below.
- In the usual Encoder-Decoder framework the input Source and the output Target are different. For English-to-Chinese machine translation, for example, the Source is an English sentence and the Target is the translated Chinese sentence; attention operates between a Query element of the Target and all elements of the Source. Here K = V (both are the encoder annotations).
- Self-attention is attention among the elements within the Source (or within the Target) itself; it can be viewed as the special case Target = Source. Here Q = K = V.
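A minimal numpy sketch of the three stages with a dot-product score. The seq2seq case passes Q = a decoder state and K = V = the encoder annotations; the self-attention case passes Q = K = V:

```python
import numpy as np

def attention(Q, K, V):
    """Q: queries (Tq, d); K: keys (Tk, d); V: values (Tk, dv)."""
    scores = Q @ K.T                                   # stage 1: query-key similarity (dot product)
    alpha = np.exp(scores - scores.max(axis=-1, keepdims=True))
    alpha /= alpha.sum(axis=-1, keepdims=True)         # stage 2: normalize with a softmax
    return alpha @ V                                   # stage 3: weighted sum of the values

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))           # encoder annotations
s = rng.normal(size=(1, 8))           # one decoder state as the query
seq2seq_context = attention(s, H, H)  # K = V: encoder side, Q from the target
self_attn_output = attention(H, H, H) # Q = K = V: self-attention
```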
Paper:
[1] "Neural Machine Translation by Jointly Learning to Align and Translate" https://arxiv.org/pdf/1409.0473v7.pdf
[2] "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" http://cn.arxiv.org/pdf/1502.03044v3.pdf
[3] "Effective Approaches to Attention-based Neural Machine Translation" http://cn.arxiv.org/pdf/1508.04025v5.pdf
[4] "Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems" https://colinraffel.com/publications/iclr2016feed.pdf
[5] "Hierarchical Attention Networks for Document Classification" https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf
[6] "Notes on Deep Learning for NLP" https://arxiv.org/abs/1808.09772
[7] "Attention Is All You Need" https://arxiv.org/pdf/1706.03762.pdf
Blog:
https://richliao.github.io/supervised/classification/2016/12/26/textclassifier-HATN/
https://zh.gluon.ai/chapter_natural-language-processing/attention.html
Implementations:
https://keras.io/layers/writing-your-own-keras-layers/ (The existing Keras layers provide examples of how to implement almost anything. Never hesitate to read the source code!)
https://github.com/richliao/textClassifier/blob/master/textClassifierRNN.py
https://github.com/bojone/attention/blob/master/attention_keras.py