[CVPR 2019 Paper Notes] Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders

Aligned VAEs for generalized few-shot learning

Highlight of this paper: learn a latent space shared by images and semantic attributes, and generate latent features for unseen classes.


Paper download

CVPR 2019


VAE: Variational Autoencoder

A variational autoencoder is a generative model with two parts, an encoder and a decoder. First, the encoder learns a sample-specific normal distribution from a sample $x$; next, a latent variable is randomly drawn from this distribution; finally, the decoder takes that latent variable as input and generates a sample $\hat{x}$.
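The encode → sample → decode pipeline above can be sketched in pure Python. The `encode` and `decode` functions here are hypothetical linear stand-ins for the learned neural networks; only the reparameterization step mirrors the actual technique:

```python
import random

def encode(x):
    # Hypothetical encoder: maps a sample to the mean and standard
    # deviation of its latent Gaussian (a real model uses a neural net).
    return [xi * 0.5 for xi in x], [0.1 for _ in x]

def decode(z):
    # Hypothetical decoder: maps a latent variable back to sample space.
    return [zi * 2.0 for zi in z]

def reparameterize(mu, sigma):
    # Draw z = mu + sigma * eps with eps ~ N(0, 1); writing the sample
    # this way keeps it differentiable w.r.t. mu and sigma in a real model.
    return [m + s * random.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

x = [1.0, -2.0, 0.5]
mu, sigma = encode(x)
z = reparameterize(mu, sigma)
x_hat = decode(z)  # reconstruction of x
```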

Model


Cross and Distribution Aligned VAE

Basic VAE loss, summed over the $M$ modalities:

$$\mathcal{L}_{VAE} = \sum_i^M \mathbb{E}_{q_{\phi}(z|x^{(i)})} \left[\log{p_{\theta}(x^{(i)}|z)}\right] - \beta D_{KL}\left(q_{\phi}(z|x^{(i)})\,\|\,p_{\theta}(z)\right) \tag{2}$$
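For a diagonal-Gaussian posterior $q_\phi(z|x) = \mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$ and a standard-normal prior, the KL term in Eq. (2) has the standard closed form $\frac{1}{2}\sum(\mu^2 + \sigma^2 - 1 - \log \sigma^2)$. A quick sketch:

```python
import math

def kl_to_standard_normal(mu, sigma):
    # D_KL( N(mu, diag(sigma^2)) || N(0, I) )
    #   = 0.5 * sum( mu^2 + sigma^2 - 1 - log(sigma^2) )
    return 0.5 * sum(m * m + s * s - 1.0 - math.log(s * s)
                     for m, s in zip(mu, sigma))

# The KL vanishes exactly when the posterior equals the prior:
kl_to_standard_normal([0.0, 0.0], [1.0, 1.0])  # → 0.0
```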

Cross-Alignment (CA) Loss

$$\mathcal{L}_{CA} = \sum_i^M \sum_{j \neq i}^M \left|x^{(j)} - D_j\!\left(E_i(x^{(i)})\right)\right| \tag{3}$$
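Eq. (3) encodes modality $i$, decodes the result with modality $j$'s decoder, and penalizes the L1 error against $x^{(j)}$. A sketch with toy linear "networks" standing in for the learned encoders and decoders (the two-modality setup loosely mimics image features vs. attributes):

```python
def cross_alignment_loss(samples, encoders, decoders):
    # samples[i]: a data point from modality i; encoders[i] returns its
    # latent code; decoders[j] maps a latent back into modality j's space.
    loss = 0.0
    for i, x_i in enumerate(samples):
        z_i = encoders[i](x_i)
        for j, x_j in enumerate(samples):
            if j == i:
                continue
            x_j_hat = decoders[j](z_i)  # cross-reconstruction
            loss += sum(abs(a - b) for a, b in zip(x_j, x_j_hat))
    return loss

# Toy stand-ins for two modalities whose latent codes already agree,
# so the cross-alignment loss is exactly zero:
encoders = [lambda x: [v / 2 for v in x], lambda x: [v / 3 for v in x]]
decoders = [lambda z: [v * 2 for v in z], lambda z: [v * 3 for v in z]]
samples = [[2.0, 4.0], [3.0, 6.0]]
cross_alignment_loss(samples, encoders, decoders)  # → 0.0
```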

Distribution-Alignment (DA) Loss
The 2-Wasserstein distance between latent distributions $i$ and $j$ has the closed form:

$$W_{ij} = \left[\|\mu_i - \mu_j\|_2^2 + \mathrm{Tr}(\Sigma_i) + \mathrm{Tr}(\Sigma_j) - 2\,\mathrm{Tr}\!\left(\left(\Sigma_i^{\frac{1}{2}} \Sigma_j \Sigma_i^{\frac{1}{2}}\right)^{\frac{1}{2}}\right)\right]^{\frac{1}{2}} \tag{4}$$

Since the encoders predict diagonal covariance matrices, which commute, the distance simplifies to:

$$W_{ij} = \left(\|\mu_i - \mu_j\|_2^2 + \left\|\Sigma_i^{\frac{1}{2}} - \Sigma_j^{\frac{1}{2}}\right\|_{\mathrm{F}}^{2}\right)^{\frac{1}{2}} \tag{5}$$

Thus, for $M$ modalities the DA loss is:

$$\mathcal{L}_{DA} = \sum_i^M \sum_{j \neq i}^M W_{ij} \tag{6}$$
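With diagonal covariances, Eq. (5) reduces to vector arithmetic on the means and standard deviations (the Frobenius norm of the difference of diagonal $\Sigma^{1/2}$ matrices is just the L2 norm of the std-dev vectors), and Eq. (6) sums $W_{ij}$ over ordered pairs. A sketch:

```python
import math

def w2_diagonal(mu_i, sigma_i, mu_j, sigma_j):
    # Eq. (5): 2-Wasserstein distance between two Gaussians with
    # diagonal covariances, given as mean and std-dev vectors.
    mean_term = sum((a - b) ** 2 for a, b in zip(mu_i, mu_j))
    cov_term = sum((a - b) ** 2 for a, b in zip(sigma_i, sigma_j))
    return math.sqrt(mean_term + cov_term)

def distribution_alignment_loss(mus, sigmas):
    # Eq. (6): sum W_ij over all ordered pairs of the M modalities.
    M = len(mus)
    return sum(w2_diagonal(mus[i], sigmas[i], mus[j], sigmas[j])
               for i in range(M) for j in range(M) if j != i)
```

Pushing this loss toward zero drives every modality's latent distribution for a class toward the same Gaussian, which is what lets a classifier trained on attribute-generated latents transfer to image latents.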

CADA-VAE loss

$$\mathcal{L}_{CADA\text{-}VAE} = \mathcal{L}_{VAE} + \gamma \mathcal{L}_{CA} + \delta \mathcal{L}_{DA} \tag{7}$$
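Eq. (7) is a plain weighted sum of the three objectives; a minimal sketch (the weight values here are arbitrary placeholders, not the paper's hyperparameters):

```python
def cada_vae_loss(l_vae, l_ca, l_da, gamma, delta):
    # Eq. (7): total objective; gamma weights the cross-alignment
    # term and delta the distribution-alignment term.
    return l_vae + gamma * l_ca + delta * l_da

total = cada_vae_loss(1.0, 2.0, 3.0, gamma=0.5, delta=0.1)
```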

