【EUG】Exploit the Unknown Gradually: One-Shot Video-Based Person Re-Identification by Stepwise Learning
Bibtex
@inproceedings{eug,
title = {Exploit the Unknown Gradually: One-Shot Video-Based Person Re-Identification by Stepwise Learning},
author = {Wu, Yu and Lin, Yutian and Dong, Xuanyi and Yan, Yan and Ouyang, Wanli and Yang, Yi},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}
Public information
CVPR 2018
Fields
- Person Re-ID
- one-shot learning
Code link
https://github.com/Yu-Wu/Exploit-Unknown-Gradually
Main work
Use a one-shot method to deal with the Re-ID task by iterating two steps: 1) fully supervised training of a CNN model on the labeled data; 2) estimating pseudo labels for a subset of unlabeled samples according to their L2 distance, in the feature space extracted by the CNN model, to the labeled samples.
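The pseudo-label estimation step can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: each unlabeled sample takes the identity of its nearest labeled sample in feature space, and only the `num_select` closest (most reliable) assignments are kept; the function name and signature are assumptions for illustration.

```python
import numpy as np

def assign_pseudo_labels(labeled_feats, labels, unlabeled_feats, num_select):
    """Label each unlabeled sample with the identity of its nearest
    labeled sample (L2 distance), then keep the num_select assignments
    with the smallest distance, i.e. the most reliable candidates."""
    # Pairwise L2 distances, shape (num_unlabeled, num_labeled)
    dists = np.linalg.norm(
        unlabeled_feats[:, None, :] - labeled_feats[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)      # index of the closest labeled sample
    confidence = dists.min(axis=1)      # smaller distance = more reliable
    pseudo_labels = labels[nearest]
    # Select the num_select unlabeled samples closest to any labeled sample
    selected = np.argsort(confidence)[:num_select]
    return selected, pseudo_labels[selected]
```

Growing `num_select` across iterations corresponds to the paper's stepwise enlargement of the candidate set.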
Key technology
- CNN
- feature extraction
- sample-similarity metric
Framework
Figure 2. Overview of the framework. Different colors represent different identity samples. The CNN model is initially trained on the labeled one-shot data. For each iteration, we (1) select the unlabeled samples with reliable pseudo labels according to the distance in feature space and (2) update the CNN model with the labeled data and the selected candidates. We gradually enlarge the candidate set to incorporate more difficult and diverse tracklets. For a tracklet, each frame feature is first extracted by the CNN model and then temporally averaged as the tracklet feature. We take the training process as an identity classification task, and regard the evaluation as a retrieval problem on the features of the test tracklets.
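The last two sentences of the caption can be illustrated with a short sketch: temporally average per-frame features into one tracklet feature, then treat evaluation as retrieval by ranking gallery tracklets by L2 distance to a query. Function names here are hypothetical, for illustration only.

```python
import numpy as np

def tracklet_feature(frame_feats):
    # frame_feats: (num_frames, feat_dim) per-frame CNN features;
    # the tracklet feature is their temporal average
    return frame_feats.mean(axis=0)

def rank_gallery(query_feat, gallery_feats):
    # Evaluation as retrieval: sort gallery tracklets by L2 distance
    # to the query tracklet feature (closest first)
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return np.argsort(dists)
```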
Dataset
- MARS
- DukeMTMC-VideoReID