Learning Against Non-Stationary Agents with Opponent Modelling & Deep Reinforcement Learning

This is a NIPS 2017 (Long Beach) paper. The idea is to build a model of the opponent and use it to switch between the agent's own response policies more effectively.

The main strength of the paper is its use of predictive uncertainty, which is what allows it to switch between the different policies at the right moments.

The paper experiments with two kinds of non-stationary opponents: switching agents, which alternate between fixed policies, and learning agents, which keep updating their policy.

The algorithm is designed for the first case, which is also the paper's main focus; the second is used for comparison.

The paper proposes the SAM (Switching Agent Model) algorithm.

Let us first go through its components one by one:

Switchboard: switches between the agent's own response policies by tracking how well each opponent model performs.

To do this, the switchboard tracks the opponent model and computes a running error. When the opponent changes its behaviour, the prediction error gradually accumulates over time, and this running error can then be used to change the model and the corresponding response policy.

Two quantities are tracked: the opponent model and its running error $r$.

At every time step, the opponent model uses Monte Carlo dropout to predict the opponent's next action $\hat{a}^j_t$ together with an uncertainty estimate $\sigma_t$ for that prediction.

The action the opponent actually takes is $a^j_t$. The running error $r$ is then updated with the discrepancy between the predicted action $\hat{a}^j_t$ and the actual action $a^j_t$.

If $r$ is below a threshold $r_{\max}$, the running error is decayed by a factor $d$ (a hyperparameter); otherwise the opponent model is switched and $r$ is reset to 0.

The paper presents this switchboard procedure as pseudocode.
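A minimal Python sketch of that logic (not the paper's pseudocode: the error definition, how the uncertainty weights it, the round-robin choice of the next model, and the hyperparameter values are all assumptions here):

```python
import numpy as np

class Switchboard:
    """Tracks a running prediction error for the active opponent model and
    decides when to switch to another opponent model / response policy."""

    def __init__(self, num_models, r_max=5.0, decay=0.99):
        self.num_models = num_models
        self.r_max = r_max   # switch threshold r_max (illustrative value)
        self.decay = decay   # decay factor d for the running error
        self.current = 0     # index of the active opponent model / policy
        self.r = 0.0         # running error of the active model

    def update(self, predicted_action, true_action, uncertainty):
        """Accumulate the prediction error; switch models if it grows too large."""
        # Assumption: squared prediction error, down-weighted when the model is
        # already uncertain about its own prediction.
        error = np.sum((np.asarray(predicted_action) - np.asarray(true_action)) ** 2)
        self.r += error / (1.0 + float(np.mean(uncertainty)))

        if self.r < self.r_max:
            # The model still explains the opponent well: decay the error.
            self.r *= self.decay
        else:
            # The opponent no longer matches the model: switch (round-robin here)
            # and reset the running error.
            self.current = (self.current + 1) % self.num_models
            self.r = 0.0
        return self.current
```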

 

Response Policies:

The response policies are learned with DDPG; given the opponent model and the current observation, the agent responds with the matching policy.

The agent's overall policy consists of several sub-policies, $\Pi = \{\pi_1, \dots, \pi_n\}$, which makes it possible to switch to a different policy for each opponent model.

Each policy $\pi_i$ is trained on samples drawn from its own replay buffer.

Actions are chosen as $a_t = \pi_i(s_t \mid \theta^{\pi_i}) + \mathcal{N}_t$, where $\mathcal{N}_t$ is exploration noise.
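A small sketch of this action selection, assuming PyTorch actor networks and Gaussian exploration noise (DDPG implementations often use an Ornstein-Uhlenbeck process instead; the names and action bounds here are illustrative):

```python
import torch

def select_action(policies, active_index, state, noise_std=0.1):
    """Act with the response policy matched to the active opponent model,
    adding exploration noise as in DDPG."""
    actor = policies[active_index]  # deterministic actor pi_i for the current opponent model
    with torch.no_grad():
        action = actor(torch.as_tensor(state, dtype=torch.float32))
    noise = noise_std * torch.randn_like(action)   # N_t: exploration noise
    return torch.clamp(action + noise, -1.0, 1.0)  # assumes actions are bounded in [-1, 1]
```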

Opponent Models:

The paper also gives pseudocode for the opponent model.

The loss function is the prediction error between the opponent model's output for $s^i_{t-1}$ and the action $a^j_{t-1}$, where $s^i_{t-1}$ is the state previously observed by our agent and $a^j_{t-1}$ is the action actually executed by the opponent.
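A minimal sketch of such an opponent model with Monte Carlo dropout, assuming continuous actions and a squared-error loss (the architecture, dropout rate, and loss form are assumptions, not taken from the paper):

```python
import torch
import torch.nn as nn

class OpponentModel(nn.Module):
    """Predicts the opponent's action from the agent's observation. Dropout is
    kept active at prediction time so that repeated stochastic forward passes
    give a Monte Carlo estimate of the predictive uncertainty."""

    def __init__(self, obs_dim, act_dim, hidden=64, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

    @torch.no_grad()
    def predict_with_uncertainty(self, obs, n_samples=20):
        """MC dropout: mean prediction and per-dimension std over n_samples passes."""
        self.train()  # keep dropout layers active
        samples = torch.stack([self(obs) for _ in range(n_samples)])
        return samples.mean(dim=0), samples.std(dim=0)

def opponent_model_loss(model, prev_obs, opponent_action):
    """Squared error between the predicted and the actual opponent action
    (an assumption: for discrete actions a cross-entropy loss would be used)."""
    return ((model(prev_obs) - opponent_action) ** 2).mean()
```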

That completes the algorithm. On to the experiments:

SAM is compared against plain DDPG when facing a non-stationary opponent that switches between policies over time.

To construct this opponent, two agents are first trained separately; the opponent then switches between these two learned policies and does no further learning.
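As an illustration, such a switching opponent can be thought of roughly as follows (the fixed switching period is an assumption; the paper defines its own switching schedule):

```python
class SwitchingOpponent:
    """Non-stationary opponent that alternates between two frozen, pre-trained
    policies. The fixed switching period is an illustrative assumption."""

    def __init__(self, policy_a, policy_b, period=1000):
        self.policies = [policy_a, policy_b]
        self.period = period  # number of steps between switches (assumed schedule)
        self.t = 0

    def act(self, obs):
        active = (self.t // self.period) % 2   # alternate every `period` steps
        self.t += 1
        return self.policies[active](obs)      # frozen policies: no further learning
```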

The results are shown below:

[Figure: performance of SAM vs. DDPG against the switching opponent]

The paper also compares the model's uncertainty in two settings, the Uncertainty Switching Adversary and the Uncertainty Learning Agent:

[Figure: Uncertainty Switching Adversary vs. Uncertainty Learning Agent]

By comparison, the uncertainty measure is less accurate for the learning agent.