# CAT-Net: Learning Canonical Appearance Transformations
Code to accompany our paper "How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change".
## Dependencies

- numpy
- pytorch + torchvision (0.3.0)
- PIL
- visdom
- pyslam + liegroups (optional, for running the odometry/localization experiments)
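Before running anything, you can sanity-check that the required packages are importable. This snippet is just a convenience, not part of the repository; the package names are taken from the list above and may need adjusting for your environment:

```python
import importlib.util

def find_missing(packages):
    """Return the subset of packages that cannot be found by the import system."""
    return [pkg for pkg in packages if importlib.util.find_spec(pkg) is None]

# Import names corresponding to the dependency list above
required = ["numpy", "torch", "torchvision", "PIL", "visdom"]
print("Missing packages:", find_missing(required) or "none")
```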
## Running the demo experiment

1. Download the ETHL dataset from here.
2. Update `run_cat_experiment.py` to point to the appropriate local paths.
3. In a terminal run `python3 -m visdom.server -port 8097` to start the visualization server.
4. In another terminal run `python3 run_cat_experiment.py` to start training.
5. Tune in to `localhost:8097` and watch the fun.
## Running the localization experiments

*Note: the scripts referenced here are from an older version of the repository and may need some adjustments.*

1. Ensure the pyslam and [liegroups](https://github.com/utiasSTARS/liegroups) packages are installed.
2. In a terminal, open the `localization` directory and run `python3 run_localization_[dataset].py`.
3. Compute localization errors against ground truth using the `compute_localization_errors.py` script.
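As a rough sketch of the kind of metrics such a comparison involves (the actual `compute_localization_errors.py` script may differ), per-frame translation and rotation errors between an estimated and a ground-truth pose can be computed from 4x4 homogeneous SE(3) matrices:

```python
import numpy as np

def pose_errors(T_est, T_gt):
    """Translation (m) and rotation (rad) error between two 4x4 SE(3) poses."""
    T_err = np.linalg.inv(T_gt) @ T_est              # relative pose error
    t_err = np.linalg.norm(T_err[:3, 3])             # Euclidean translation error
    cos_angle = (np.trace(T_err[:3, :3]) - 1.) / 2.  # geodesic angle from rotation trace
    r_err = np.arccos(np.clip(cos_angle, -1., 1.))
    return t_err, r_err
```

In practice these per-frame errors are typically aggregated (e.g., as RMSE) over a full trajectory.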
## Pre-trained models
Coming soon!
## Citation
If you use this code in your research, please cite:
```
@article{2018_Clement_Learning,
  author = {Lee Clement and Jonathan Kelly},
  journal = {{IEEE} Robotics and Automation Letters},
  link = {https://arxiv.org/abs/1709.03009},
  title = {How to Train a {CAT}: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change},
  year = {2018}
}
```