DeepFool: a simple and accurate method to fool deep neural networks

Moosavi-Dezfooli S, Fawzi A, Frossard P. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks[C]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 2574-2582.

@inproceedings{moosavidezfooli2016deepfool,
title={DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks},
author={Moosavi-Dezfooli, Seyed-Mohsen and Fawzi, Alhussein and Frossard, Pascal},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
pages={2574--2582},
year={2016}}

This paper introduces a simple yet effective method for constructing adversarial perturbations from a geometric perspective.

Main Content

The goal of an adversarial perturbation:

$$\Delta(x; \hat{k}) := \min_{r} \|r\|_2 \quad \text{subject to} \quad \hat{k}(x+r) \neq \hat{k}(x), \tag{1}$$

where $\hat{k}(x)$ is the estimated label of $x$.

Binary classifiers

When the model is a binary classifier,

$$\hat{k}(x) = \mathrm{sign}(f(x)),$$

where $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is the classifier, and $\mathcal{F} := \{x : f(x) = 0\}$ denotes the decision boundary.

Linear $f$

For $f(x) = w^T x + b$:
假设x0x_0f(x)>0f(x)>0一侧, 则
r(x0)=f(x0)w22w. r_*(x_0)= -\frac{f(x_0)}{\|w\|_2^2}w.
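As a quick sanity check, here is a minimal NumPy sketch (the toy $w$, $b$, $x_0$ are my own, not from the paper) showing that $r_*$ is exactly the orthogonal projection of $x_0$ onto the hyperplane $f(x) = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.3
x0 = rng.normal(size=5)

# Closed-form minimal l2 perturbation: project x0 onto {x : w^T x + b = 0}.
f_x0 = w @ x0 + b
r_star = -f_x0 / (w @ w) * w

print(w @ (x0 + r_star) + b)          # ~0: x0 + r_star lies on the boundary
print(np.linalg.norm(r_star))         # equals |f(x0)| / ||w||_2
print(abs(f_x0) / np.linalg.norm(w))
```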

General binary $f$

In this case, we use the first-order approximation of $f$:

$$f(x_0 + r) \approx f(x_0) + \nabla^T f(x_0)\, r,$$

so the decision boundary becomes $\mathcal{F} = \{x : f(x_0) + \nabla^T f(x_0)(x - x_0) = 0\}$, i.e., $w = \nabla f(x_0)$ and $b = f(x_0)$, which gives

$$r_*(x_0) \approx -\frac{f(x_0)}{\|\nabla f(x_0)\|_2^2} \nabla f(x_0). \tag{4}$$

Hence, at each iteration we take

$$r_i = -\frac{f(x_i)}{\|\nabla f(x_i)\|_2^2} \nabla f(x_i), \qquad x_{i+1} = x_i + r_i,$$

stopping once $\hat{k}(x_i) \neq \hat{k}(x_0)$. The algorithm is as follows:
[Algorithm 1: DeepFool for binary classifiers]
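A minimal sketch of this loop, assuming the scalar classifier `f` and its gradient `grad_f` are supplied by the caller (hypothetical names; the toy classifier below is mine, not from the paper):

```python
import numpy as np

def deepfool_binary(x0, f, grad_f, max_iter=50, eta=0.02):
    """Iterate r_i = -f(x_i)/||grad f(x_i)||_2^2 * grad f(x_i) until the sign flips."""
    x = x0.copy()
    for _ in range(max_iter):
        if np.sign(f(x)) != np.sign(f(x0)):
            break
        g = grad_f(x)
        x = x + (-f(x) / (g @ g)) * g
    # Final (1 + eta) push past the boundary (see the note at the end).
    return x0 + (1 + eta) * (x - x0)

# Toy nonlinear binary classifier: f(x) = ||x||^2 - 1.
f = lambda x: x @ x - 1.0
grad_f = lambda x: 2.0 * x
x0 = np.array([0.3, 0.2])
x_adv = deepfool_binary(x0, f, grad_f)
print(np.sign(f(x0)), np.sign(f(x_adv)))  # opposite signs
```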

Multi-class classifiers

Now $f: \mathbb{R}^n \rightarrow \mathbb{R}^c$, and

$$\hat{k}(x) = \arg\max_k f_k(x). \tag{5}$$

Affine $f$

For $f(x) = W^T x + b$, let $w_k$ denote the $k$-th column of $W$. Then

$$P = \bigcap_{k=1}^c \{x : f_{\hat{k}(x_0)}(x) \ge f_k(x)\} \tag{7}$$

is the region classified as $\hat{k}(x_0)$, so $x + r$ must fall in $P^c$, and

$$\Delta(x_0; f) = \mathbf{dist}(x_0, P^c).$$

When $f$ is affine, this amounts to finding the shortest distance from $x_0$ to each of the relevant decision boundaries:

$$\hat{l}(x_0) = \arg\min_{k \neq \hat{k}(x_0)} \frac{|f_k(x_0) - f_{\hat{k}(x_0)}(x_0)|}{\|w_k - w_{\hat{k}(x_0)}\|_2}, \tag{8}$$

$$r_*(x_0) = \frac{|f_{\hat{l}(x_0)}(x_0) - f_{\hat{k}(x_0)}(x_0)|}{\|w_{\hat{l}(x_0)} - w_{\hat{k}(x_0)}\|_2^2} (w_{\hat{l}(x_0)} - w_{\hat{k}(x_0)}). \tag{9}$$
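A NumPy sketch of (8) and (9) on a toy affine classifier (the sizes and the random $W$, $b$ are assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 4, 3
W = rng.normal(size=(n, c))          # f(x) = W^T x + b, w_k = k-th column of W
b = rng.normal(size=c)
x0 = rng.normal(size=n)

f0 = W.T @ x0 + b
k0 = int(np.argmax(f0))              # \hat{k}(x_0)

# Eq. (8): the closest boundary among k != k0.
dists = {k: abs(f0[k] - f0[k0]) / np.linalg.norm(W[:, k] - W[:, k0])
         for k in range(c) if k != k0}
l_hat = min(dists, key=dists.get)

# Eq. (9): the minimal step onto that boundary.
w_diff = W[:, l_hat] - W[:, k0]
r_star = abs(f0[l_hat] - f0[k0]) / (w_diff @ w_diff) * w_diff

f_adv = W.T @ (x0 + r_star) + b
print(f_adv[k0], f_adv[l_hat])       # equal: x0 + r_star lies on the boundary
```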

General multi-class $f$

Linearizing every $f_k$ around the current iterate $x_i$ gives the approximate region

$$\tilde{P}_i = \bigcap_{k=1}^c \{x : f_{\hat{k}(x_0)}(x_i) + \nabla^T f_{\hat{k}(x_0)}(x_i)(x - x_i) \ge f_k(x_i) + \nabla^T f_k(x_i)(x - x_i)\}, \tag{10}$$

and the step becomes the analogue of (9), where $\hat{l}(x_i)$ is chosen as in (8) with $f_k(x_0)$ and $w_k$ replaced by $f_k(x_i)$ and $\nabla f_k(x_i)$:

$$r_i(x_i) = \frac{|f_{\hat{l}(x_i)}(x_i) - f_{\hat{k}(x_0)}(x_i)|}{\|\nabla f_{\hat{l}(x_i)}(x_i) - \nabla f_{\hat{k}(x_0)}(x_i)\|_2^2} (\nabla f_{\hat{l}(x_i)}(x_i) - \nabla f_{\hat{k}(x_0)}(x_i)).$$
[Algorithm 2: DeepFool for multi-class classifiers]
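Putting the pieces together, a self-contained sketch of the multi-class loop (function names are mine; a finite-difference Jacobian stands in for autodiff so the example needs nothing beyond NumPy):

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian: row k approximates grad f_k(x)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = eps
        J[:, j] = (f(x + e) - fx) / eps
    return J

def deepfool_multiclass(x0, f, max_iter=50, eta=0.02):
    k0 = int(np.argmax(f(x0)))
    x = x0.copy()
    for _ in range(max_iter):
        fx = f(x)
        if np.argmax(fx) != k0:
            break
        J = jacobian_fd(f, x)
        # Linearized distance to each boundary k != k0, as in (8); step as in (9).
        best_dist, r_i = np.inf, None
        for k in range(fx.size):
            if k == k0:
                continue
            w_k = J[k] - J[k0]
            f_k = fx[k] - fx[k0]
            dist = abs(f_k) / np.linalg.norm(w_k)
            if dist < best_dist:
                best_dist = dist
                r_i = abs(f_k) / (w_k @ w_k) * w_k
        x = x + r_i
        if np.linalg.norm(r_i) < 1e-9:  # already (numerically) on the boundary
            break
    return x0 + (1 + eta) * (x - x0)

# Toy usage with a random affine classifier.
rng = np.random.default_rng(2)
W, b = rng.normal(size=(4, 3)), rng.normal(size=3)
f = lambda x: W.T @ x + b
x0 = rng.normal(size=4)
x_adv = deepfool_multiclass(x0, f)
print(np.argmax(f(x0)), np.argmax(f(x_adv)))  # labels should now differ
```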

Extension to $l_p$ norms

When $p \in (1, \infty)$, consider the following problem:

$$\begin{array}{ll} \min_r & \|r\|_p^p \\ \mathrm{s.t.} & w^T(x+r) + b = 0. \end{array}$$
Introducing a Lagrange multiplier,

$$\min_r \; \|r\|_p^p + c\,(w^T(x+r) + b),$$

the KKT conditions give (here $r_k$ denotes the $k$-th component)

$$p\, |r_k|^{p-1} = c_k w_k.$$

Note: there is a sign issue here, but the signs can be absorbed into $c_k$ and then ignored. It follows that

$$r_* = c \odot w^{q-1},$$

where $q = \frac{p}{p-1}$ is the conjugate exponent, $c = [c_1, \ldots]^T$ with $|c_i| = |c_j|$, and $w^{q-1} = [|w_1|^{q-1}, \ldots]^T$. Substituting into the constraint,

$$w^T(x + c \odot w^{q-1}) + b = 0,$$

yields

$$|c| = \frac{|w^T x + b|}{\|w\|_q^q},$$

and therefore

$$r_* = -\frac{w^T x + b}{\|w\|_q^q}\, w^{q-1} \odot \mathrm{sign}(w).$$

For $p = 1$, let $w_m$ be the component of $w$ with the largest absolute value; then

$$r_* = -\frac{w^T x + b}{w_m} \mathbf{1}_m,$$

where $\mathbf{1}_m$ is the vector whose $m$-th component is 1 and all other components are 0.

For $p = \infty$,

$$r_* = -\frac{w^T x + b}{\|w\|_1} \mathrm{sign}(w).$$
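A quick NumPy check of these closed forms on a toy hyperplane (the $w$, $b$, $x$ below are my own): for every $p$, the perturbed point lands exactly on $w^T x + b = 0$:

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=6)
b = 0.5
x = rng.normal(size=6)
fx = w @ x + b

# General p in (1, inf): r_* = -(w^T x + b)/||w||_q^q * |w|^{q-1} (.) sign(w).
for p in (1.5, 2.0, 4.0):
    q = p / (p - 1)                                  # conjugate exponent
    r = -fx / np.sum(np.abs(w) ** q) * np.abs(w) ** (q - 1) * np.sign(w)
    print(p, w @ (x + r) + b)                        # ~0

# p = 1: move only along the coordinate with the largest |w_k|.
m = int(np.argmax(np.abs(w)))
r1 = np.zeros_like(w)
r1[m] = -fx / w[m]
print(1, w @ (x + r1) + b)                           # ~0

# p = inf: move every coordinate by the same magnitude.
rinf = -fx / np.sum(np.abs(w)) * np.sign(w)
print("inf", w @ (x + rinf) + b)                     # ~0
```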

Hence, in the general multi-class case:

For $p \in [1, \infty)$:

$$\hat{l} \leftarrow \arg\min_{k \neq \hat{k}(x_0)} \frac{|f'_k|}{\|w'_k\|_q}, \qquad r_i \leftarrow \frac{|f'_{\hat{l}}|}{\|w'_{\hat{l}}\|_q^q}\, |w'_{\hat{l}}|^{q-1} \odot \mathrm{sign}(w'_{\hat{l}}),$$

where $f'_k = f_k(x_i) - f_{\hat{k}(x_0)}(x_i)$, $w'_k = \nabla f_k(x_i) - \nabla f_{\hat{k}(x_0)}(x_i)$, and $q = \frac{p}{p-1}$.
For $p = \infty$:
$$\hat{l} \leftarrow \arg\min_{k \neq \hat{k}(x_0)} \frac{|f'_k|}{\|w'_k\|_1}, \qquad r_i \leftarrow \frac{|f'_{\hat{l}}|}{\|w'_{\hat{l}}\|_1}\, \mathrm{sign}(w'_{\hat{l}}).$$

Note: merely reaching the decision boundary is not enough in practice, so one usually pushes slightly past it; hence the final output is $x = x + (1 + \eta) r$, where the paper takes $\eta = 0.02$.