If you would just like to try out an example model, you can find the model used in the SegNet webdemo in the Example_Models/ folder. You will need to download the weights separately using the link in the SegNet Model Zoo.
First open Scripts/webcam_demo.py and edit line 14 to
match the path to your installation of SegNet. You will also need a
webcam, or alternatively edit line 39 to input a video file instead. To
run the demo you will need a working Caffe build; the Docker images below provide one. To build and run Caffe on the CPU:
docker build -t bvlc/caffe:cpu ./cpu
# check if working
docker run -ti bvlc/caffe:cpu caffe --version
# get a bash in container to run examples
docker run -ti --volume=$(pwd):/SegNet -u $(id -u):$(id -g) bvlc/caffe:cpu bash
To run Caffe on the GPU:
docker build -t bvlc/caffe:gpu ./gpu
# check if working
docker run -ti bvlc/caffe:gpu caffe device_query -gpu 0
# get a bash in container to run examples
docker run -ti --volume=$(pwd):/SegNet -u $(id -u):$(id -g) bvlc/caffe:gpu bash
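The webcam demo colours each pixel of a frame according to its predicted class. As a rough illustration of that final colour-mapping step (not the actual SegNet code, which runs a Caffe forward pass first), the sketch below maps a 2-D array of class labels to an RGB image with NumPy; the three-class palette here is made up for the example:

```python
import numpy as np

# Hypothetical palette: one RGB colour per class id. The real demo loads
# its palette from a colours file supplied with the model.
PALETTE = np.array([
    [128, 128, 128],  # class 0, e.g. road
    [128, 0, 0],      # class 1, e.g. building
    [0, 128, 0],      # class 2, e.g. vegetation
], dtype=np.uint8)

def colourise(labels: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of class ids to an (H, W, 3) RGB image
    using NumPy integer-array indexing."""
    return PALETTE[labels]

# A tiny 2x2 "segmentation" to demonstrate the mapping.
labels = np.array([[0, 1],
                   [2, 0]])
rgb = colourise(labels)
print(rgb.shape)  # (2, 2, 3)
```

Indexing the palette with the label array vectorises the lookup, so the same line handles a full 360x480 SegNet output without an explicit loop.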
Example Models
A number of example models for indoor and outdoor road scene understanding can be found in the SegNet Model Zoo.
Publications
For more information about the SegNet architecture:
Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla. "Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding." arXiv preprint arXiv:1511.02680, 2015. http://arxiv.org/abs/1511.02680
Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla. "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation." PAMI, 2017. http://arxiv.org/abs/1511.00561
License
This software is released under a Creative Commons license that allows personal and research use only. For a commercial license, please contact the authors. You can view a license summary here: http://creativecommons.org/licenses/by-nc/4.0/