
SalFormer

Unifying convolution and transformer: A dual stage network equipped with cross-interactive feature fusion and edge guidance for RGB-D salient object detection

(Figure: SalFormer model architecture — model_arch)

Requirements

Python 3.7, PyTorch 0.4.0+, CUDA 10.0, TensorboardX 2.0, opencv-python
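One possible way to set up the dependencies above (a sketch only — the exact package versions and channel names are assumptions, since the repository does not ship a requirements file):

```shell
# Hypothetical environment setup; adjust versions to match your CUDA toolkit.
conda create -n salformer python=3.7 -y
conda activate salformer
pip install torch tensorboardX opencv-python
```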

Dataset access

Train data link: https://drive.google.com/file/d/1yjtYG_05Nj7_G-DO1DygBhlkk_U3vDwo/view?usp=sharing

Test data link: https://drive.google.com/file/d/1pGq4nehuv7gJDENEWD2cuKTz937tCoVO/view?usp=sharing

Validation data link: https://drive.google.com/file/d/13FRrzznTAnVAIdCeq38s1STfk1JOS6it/view?usp=sharing

Data preparation

The provided depth maps are not HHA-encoded. To convert them, pass the depth images through the HHA algorithm, as given in tohha.py.
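For orientation, HHA encodes a depth map into three channels: horizontal disparity, height above ground, and the angle between the surface normal and gravity. The sketch below illustrates that idea only — it is not the repository's tohha.py, and the height/normal approximations (row-based height, gradient-based normals) are simplifying assumptions a real implementation replaces with camera-intrinsic back-projection:

```python
import numpy as np

def hha_sketch(depth, camera_height=1.5):
    """Illustrative HHA-style encoding of a depth map (metres) into
    three uint8 channels: disparity, height, normal-gravity angle."""
    d = np.clip(depth.astype(np.float32), 1e-3, None)  # avoid div by zero

    # Channel 1: horizontal disparity (inverse depth), scaled to [0, 255].
    disparity = 1.0 / d
    disparity = 255 * (disparity - disparity.min()) / (np.ptp(disparity) + 1e-6)

    # Channel 2: crude height above ground, assuming image rows map to
    # height (a real implementation back-projects with camera intrinsics).
    rows = np.linspace(camera_height, 0.0, d.shape[0])[:, None]
    height = np.broadcast_to(rows, d.shape)
    height = 255 * height / (height.max() + 1e-6)

    # Channel 3: angle of the surface normal with gravity, approximated
    # from depth gradients.
    gy, gx = np.gradient(d)
    normal_z = 1.0 / np.sqrt(gx ** 2 + gy ** 2 + 1.0)
    angle = np.degrees(np.arccos(np.clip(normal_z, -1.0, 1.0)))
    angle = 255 * angle / 90.0

    return np.stack([disparity, height, angle], axis=-1).astype(np.uint8)

# Example: encode a synthetic 4x4 depth map.
hha = hha_sketch(np.linspace(0.5, 3.0, 16).reshape(4, 4))
print(hha.shape)  # (4, 4, 3)
```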

Results

We provide test results on 7 datasets, which can be accessed at https://drive.google.com/file/d/1KpDca0PnC4vL0M13HALYWK1wjkgZ4Eev/view?usp=sharing

Evaluation: you can evaluate the result maps using Python_Eval.
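As a rough guide to what such evaluation computes, the sketch below implements two metrics commonly reported for salient object detection, MAE and adaptive-threshold F-measure. It is an illustration under those assumptions, not the Python_Eval toolbox itself:

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both scaled to [0, 1]."""
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred, gt, beta2=0.3):
    """F-measure with the conventional beta^2 = 0.3, binarising the
    prediction at twice its mean value (a common adaptive threshold)."""
    thresh = min(2 * pred.mean(), 1.0)
    binary = pred >= thresh
    tp = np.logical_and(binary, gt > 0.5).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return float((1 + beta2) * precision * recall /
                 (beta2 * precision + recall + 1e-8))

# Example: a perfect prediction scores MAE 0 and F-measure close to 1.
gt = np.zeros((8, 8))
gt[2:6, 2:6] = 1.0
print(mae(gt, gt))                  # 0.0
print(round(f_measure(gt, gt), 3))  # 1.0
```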