Existing adversarial attack methods mainly focus on generating attack samples via the misclassification supervision feedback of victim models (VM), while little attention is paid to what the VM really believes. This paper aims to generate realistic attack samples for person re-identification (ReID) by reading the enemy's (VM's) mind. Three inherent benefits can be uncovered: (1) directly leveraging the VM's knowledge to attack is transferable to the test set in open-set ReID; (2) cheating the VM's beliefs misleads it more easily; (3) since the VM only remembers clean images, cheating its mind pushes the generated attack images to be realistic and undetectable. However, how to read the VM's mind and then cheat it is intractable. In this paper, we propose a novel inconspicuous and controllable ReID attack baseline, LCYE (Look Closer to Your Enemy), to generate adversarial query images. Concretely, LCYE first distills the VM's knowledge via teacher-student memory mimicking on a proxy task. This knowledge prior then acts as an explicit cipher conveying what the VM believes to be essential and realistic, enabling accurate adversarial misleading. Besides, benefiting from the multiple-opposing-task framework of LCYE, we further investigate the interpretability and generalization of ReID models from the view of the adversarial attack, including cross-domain adaptation, cross-model consensus, and the online learning process. Extensive experiments on four ReID benchmarks show that our method outperforms state-of-the-art attackers by a large margin in white-box, black-box, and targeted attacks.
Create a directory to store ReID datasets under this repo:
```
mkdir data/
```
If you want to store datasets in another directory, you need to specify `--root path_to_your/data` when running the training code. Please follow the instructions below to prepare each dataset. After that, you can simply pass `-d the_dataset` when running the training code.
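Dataset paths are easy to get wrong, so it can help to sanity-check the folder layout before launching training. The helper below is a hypothetical sketch (not part of this repo) that reports which expected entries are missing under a dataset folder, following the structures described in the sections below:

```python
from pathlib import Path

def check_dataset_layout(root, name, required):
    """Return the entries from `required` that are missing under root/name.

    An empty list means the layout looks complete. `root` is the data
    directory (data/ by default, or whatever you pass via --root).
    """
    base = Path(root) / name
    return [entry for entry in required if not (base / entry).exists()]
```

For example, `check_dataset_layout('data', 'market1501', ['bounding_box_train', 'bounding_box_test'])` returns an empty list once the Market1501 layout described below is in place.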
Market1501:
1. Download the dataset to `data/` from http://www.liangzheng.org/Project/project_reid.html and extract it, renaming the folder to `market1501`. The data structure would look like:
```
market1501/
    bounding_box_test/
    bounding_box_train/
    ...
```
2. Use `-d market1501` when running the training code.

CUHK03 [13]:
1. Create a folder named `cuhk03/` under `data/`.
2. Download the dataset to `data/cuhk03/` from http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html and extract `cuhk03_release.zip`, so you will have `data/cuhk03/cuhk03_release`.
3. Download the new split files `cuhk03_new_protocol_config_detected.mat` and `cuhk03_new_protocol_config_labeled.mat`, and put these two mat files under `data/cuhk03`. Finally, the data structure would look like:
```
cuhk03/
    cuhk03_release/
    cuhk03_new_protocol_config_detected.mat
    cuhk03_new_protocol_config_labeled.mat
    ...
```
4. Use `-d cuhk03` when running the training code. In default mode, we use the new split (767/700). If you want to use the original split (1367/100) created by [13], specify `--cuhk03-classic-split`. As [13] computes CMC differently from Market1501, you might need to specify `--use-metric-cuhk03` for a fair comparison with their method. In addition, we support both `labeled` and `detected` modes. The default mode loads `detected` images; specify `--cuhk03-labeled` if you want to train and test on `labeled` images.

DukeMTMC-reID [16, 17]:
1. Create a directory under `data/` called `dukemtmc-reid`.
2. Download `DukeMTMC-reID.zip` from https://github.com/layumi/DukeMTMC-reID_evaluation#download-dataset and put it into `data/dukemtmc-reid`. Extract the zip file, which leads to:
```
dukemtmc-reid/
    DukeMTMC-reid.zip # (you can delete this zip file, it is ok)
    DukeMTMC-reid/ # this folder contains 8 files.
```
3. Use `-d dukemtmcreid` when running the training code.

MSMT17 [22]:
1. Create a directory named `msmt17/` under `data/`.
2. Download `MSMT17_V1.tar.gz` to `data/msmt17/` from http://www.pkuvmc.com/publications/msmt17.html. Extract the file under the same folder, so you will have:
```
msmt17/
    MSMT17_V1.tar.gz # (do whatever you want with this .tar file)
    MSMT17_V1/
        train/
        test/
        list_train.txt
        ... (totally six .txt files)
```
3. Use `-d msmt17` when running the training code.

Create a directory to store model checkpoints:
```
mkdir models/
```
Download the pretrained models, or train the models from scratch yourself offline.
2.1 Download Links
2.2 Training models from scratch (optional)
Create a directory named after the target model (like `aligned/` or `hacnn/`, following `__init__.py`) under `models/`, and move the checkpoints of the pretrained models into this directory. Details of the naming rules can be found at the download link.
Customized ReID models (optional)
It is easy to test the robustness of any customized ReID model by following the above steps (1→2.2→3). The extra thing you need to do is add the structure of your own model to `models/` and register it in `__init__.py`.
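The registration step can be sketched as follows. This is a hypothetical, minimal version of a model factory in `models/__init__.py` — the repo's actual file may differ, so mirror its existing entries; the model name `myreid` and the `MyReIDNet` class are illustrative placeholders:

```python
# Hypothetical minimal model factory for models/__init__.py.
# The real file may differ; follow its existing entries when registering.

_model_factory = {}

def register_model(name):
    """Decorator: map a model name (as used by --targetmodel) to its class."""
    def wrapper(cls):
        _model_factory[name] = cls
        return cls
    return wrapper

def init_model(name, *args, **kwargs):
    """Build a registered model by name, as the training code would."""
    if name not in _model_factory:
        raise KeyError(f"Unknown model '{name}'. Known: {sorted(_model_factory)}")
    return _model_factory[name](*args, **kwargs)

@register_model("myreid")
class MyReIDNet:
    """Placeholder for your customized ReID model structure."""
    def __init__(self, num_classes=751):
        self.num_classes = num_classes
```

With a registry like this, the checkpoint directory you create under `models/` should match the registered name (`myreid` here), mirroring the `aligned`/`hacnn` convention above.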
Take attacking AlignedReID trained on Market1501 as an example. To train:
```
python train.py \
    --targetmodel='aligned' \
    --dataset='market1501' \
    --mode='train' \
    --loss='xent_htri' \
    --ak_type=-1 \
    --temperature=-1 \
    --use_SSIM=2 \
    --epoch=40
```
To test the same example, resume the trained generator:
```
python train.py \
    --targetmodel='aligned' \
    --dataset='market1501' \
    --G_resume_dir='./logs/aligned/market1501/best_G.pth.tar' \
    --mode='test' \
    --loss='xent_htri' \
    --ak_type=-1 \
    --temperature=-1 \
    --use_SSIM=2 \
    --epoch=40
```