Incremental Learning: Maintaining Discrimination and Fairness in Class Incremental Learning (CVPR 2020)

Abstract

Knowledge distillation; a major cause of catastrophic forgetting is that the weights in the last fully connected layer are highly biased toward new classes in class-incremental learning.
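To make the bias concrete: after an incremental step, the L2 norms of the last fully connected layer's weight vectors for new classes tend to be much larger than those for old classes, which skews predictions toward new classes. A minimal sketch of how one might inspect this (the layer, class split, and variable names are hypothetical, not from the paper's code):

```python
import torch

# Hypothetical setup: `fc` stands in for the last fully connected layer
# of an incrementally trained model; the first `num_old` rows correspond
# to old classes, the rest to the newest task's classes.
fc = torch.nn.Linear(512, 100, bias=False)
num_old = 80

with torch.no_grad():
    norms = fc.weight.norm(p=2, dim=1)   # one L2 norm per class vector
    old_mean = norms[:num_old].mean().item()
    new_mean = norms[num_old:].mean().item()

# After incremental training, new_mean typically exceeds old_mean, so
# logits for new classes dominate and old classes look "forgotten".
print(f"mean ||w|| over old classes: {old_mean:.4f}")
print(f"mean ||w|| over new classes: {new_mean:.4f}")
```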

Conclusion

The paper maintains discrimination via knowledge distillation and maintains fairness via a method called weight aligning.
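A minimal sketch of the two ingredients, assuming a PyTorch model whose classifier `fc` stacks old-class rows before new-class rows (the function names and the temperature `T` are my own, not the authors' released code):

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits: torch.Tensor,
                      old_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    """Maintain discrimination: match the current model's old-class
    outputs to the frozen previous model's outputs, softened by T."""
    num_old = old_logits.shape[1]
    log_p = F.log_softmax(new_logits[:, :num_old] / T, dim=1)
    q = F.softmax(old_logits / T, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)

def weight_align(fc: torch.nn.Linear, num_old: int) -> None:
    """Maintain fairness: rescale new-class weight vectors so their mean
    L2 norm matches that of the old-class vectors."""
    with torch.no_grad():
        norms = fc.weight.norm(p=2, dim=1)
        gamma = norms[:num_old].mean() / norms[num_old:].mean()
        fc.weight[num_old:] *= gamma
```

The distillation term is added to the cross-entropy loss during training, while weight aligning is applied once after each incremental training stage, so it adds essentially no training cost.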

Key points: code is open-sourced; the idea builds on Large Scale Incremental Learning (CVPR 2019) and Learning a Unified Classifier Incrementally via Rebalancing (CVPR 2019); it is an experimental paper; it is also based on the rehearsal strategy (sketched below); it finds a good entry point and turns it into strong experimental results.
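Since the approach is rehearsal-based, each incremental stage trains on the new classes' data mixed with a small stored exemplar set of old classes. A minimal sketch of that mixing step (the function name and per-class budget are hypothetical):

```python
import random

def build_training_set(new_data, exemplar_memory, per_class=20):
    """Mix the new task's samples with a few stored exemplars per old
    class, which is the essence of a rehearsal strategy.

    new_data: list of (x, y) pairs for the new task.
    exemplar_memory: dict mapping old class id -> list of stored (x, y).
    """
    rehearsal = []
    for samples in exemplar_memory.values():
        rehearsal.extend(random.sample(samples, min(per_class, len(samples))))
    return new_data + rehearsal
```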