The public GitHub link will be released in the final submission of our paper. For the artifact evaluation, we provide this GitFront link.
Drone_Audio (temporary link).
DOI of the drone audio recordings. This link will be activated in the final submission. The content is the same as the temporary link above.
The platform is Windows 10 (Ubuntu 20.04 should also work).
Please install Anaconda first. If you do not have Anaconda, see the following link.
Then create a new environment: `conda env create -f environment.yml`
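For reference, a minimal sketch of the setup; the environment name `drone_audio` below is only a placeholder, use the `name` given in `environment.yml`:

```bash
# Create the conda environment described in environment.yml
conda env create -f environment.yml

# Activate it ("drone_audio" is a placeholder; use the name field from environment.yml)
conda activate drone_audio
```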
You need about 80 GB of storage space to generate the PKL datasets and models.
Builds the MFCC datasets from the collected drone audio. The generated datasets are saved in ".pkl" format.
The code for all experiments mentioned in the paper.
Helper files for the training and evaluation process.
Shared modules that can be reused by different programs.
- `originData_path`: the root directory of all drone audio.
- `output_path`: where to output/obtain the model.
- `csv_savePath`: where to save the evaluation results.
- `pkl_savePath`: where to save the dataset of extracted MFCC features.

Download the drone audio dataset, then change `originData_path` in all config files to the root directory of the downloaded drone audio.
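To double-check which config files still point at the old location, a quick option (assuming a shell with `grep`, e.g. Git Bash on Windows 10 or a terminal on Ubuntu, run from the repository root) is:

```bash
# List every occurrence of originData_path so each config file can be edited by hand
grep -rn "originData_path" .
```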
Run `./dataset_build/pkl_gen_timeVar.py` to generate the dataset in .pkl format. This dataset is created from DS1.
Output: `./pkl_dataset/1_timeVar`.
Run `./experiment/timeVar/train_all_model.py` to train 8 different ML models on all generated datasets.
Output: `./trained_model/1_timeVar`.
Run `./experiment/timeVar/eval_all_model.py` to obtain the accuracy of each model on the test set.
Output: `./result/1_timeVar`.
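For reference, a sketch of the three steps above run from the repository root; this assumes the scripts read their settings from the config files and need no extra command-line arguments:

```bash
# 1. Extract MFCC features from DS1 and save the .pkl dataset to ./pkl_dataset/1_timeVar
python ./dataset_build/pkl_gen_timeVar.py

# 2. Train the 8 ML models on all generated datasets; models go to ./trained_model/1_timeVar
python ./experiment/timeVar/train_all_model.py

# 3. Evaluate each trained model on the test set; results go to ./result/1_timeVar
python ./experiment/timeVar/eval_all_model.py
```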
There are 9 config files for training different models:

- `config_filterVar_1_oneThird`
- `config_filterVar_1_twoThirds`
- `config_filterVar_1_all`
- `config_filterVar_2_oneThird`
- `config_filterVar_2_twoThirds`
- `config_filterVar_2_all`
- `config_filterVar_3_oneThird`
- `config_filterVar_3_twoThirds`
- `config_filterVar_3_all`

Run `./dataset_build/pkl_gen_filterVar.py` to generate the dataset in .pkl format. This dataset is created from DS1.
Run `./experiment/filterVar/train_all_model_filterVar.py` to train the different models.
Run `./experiment/filterVar/eval_all_model_filterVar.py` to evaluate the different models.
Output of `./experiment/filterVar/train_all_model_filterVar.py`: `./trained_model/2_filterVar/8d_x_xxxx`.
Output of `./experiment/filterVar/eval_all_model_filterVar.py`: `./result/2_filterVar/8d_x_xxxx`.
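As before, a sketch of the invocations from the repository root, assuming no extra command-line arguments are needed:

```bash
# Generate the filterVar .pkl datasets from DS1
python ./dataset_build/pkl_gen_filterVar.py

# Train the models (see the 9 config files above); models go to ./trained_model/2_filterVar/8d_x_xxxx
python ./experiment/filterVar/train_all_model_filterVar.py

# Evaluate the trained models; results go to ./result/2_filterVar/8d_x_xxxx
python ./experiment/filterVar/eval_all_model_filterVar.py
```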
Run `./dataset_build/pkl_gen_filterVar_noise.py` to generate the dataset in .pkl format. This dataset is created from DS1N.
Run `./experiment/filterVar/eval_all_model_filterVar_noise.py` to evaluate the different models.
Output of `./experiment/filterVar/eval_all_model_filterVar_noise.py`: `./result/3_filterVar_noise/8d_x_xxxx`.
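A sketch of these two steps from the repository root, under the same assumptions as above:

```bash
# Generate the noisy filterVar .pkl datasets from DS1N
python ./dataset_build/pkl_gen_filterVar_noise.py

# Evaluate the models on the noisy datasets; results go to ./result/3_filterVar_noise/8d_x_xxxx
python ./experiment/filterVar/eval_all_model_filterVar_noise.py
```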
This experiment is conducted on DS2.
Run `./experiment/noiseVar/train_all_model_noNoise.py`.
Output: `./trained_model/4_noiseVar/noNoise`.
Run `./experiment/noiseVar/eval_all_model_noNoise.py`.
This experiment is conducted on DS2N.
Run `./dataset_build/pkl_gen_noiseVar.py` to generate the dataset in .pkl format. This dataset is created from DS2N.
Run `./experiment/noiseVar/eval_all_model_noiseVar.py`.
Output: `./result/4_noiseVar`.
Run `./dataset_build/pkl_gen_base.py` to generate the dataset in .pkl format. This dataset is created from DS2.
Run `./experiment/attack/train_attack.py`.
Note the `dic_reg`, `dic_attack`, and `dic_bg` shown in the console.

Change `args.bg_type` and `args.attack_type` in `./experiment/attack/evaluate_attack.py` according to `dic_bg` and `dic_attack`, respectively.

Run `./experiment/attack/evaluate_attack.py`.
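Putting the attack steps together, a sketch of the workflow from the repository root (again assuming the scripts need no extra command-line arguments):

```bash
# Build the base .pkl dataset from DS2
python ./dataset_build/pkl_gen_base.py

# Train the attack models; note the dic_reg, dic_attack, and dic_bg printed to the console
python ./experiment/attack/train_attack.py

# After editing args.bg_type and args.attack_type in evaluate_attack.py according to dic_bg and dic_attack,
# run the evaluation
python ./experiment/attack/evaluate_attack.py
```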