Video_Captioning.pytorch


Video Captioning

Dependencies

Check out the coco-caption and cider projects into your working directory.

Data

Obtain the dataset you need:

Getting started

Generate metadata

  1. run func_standalize_format

  2. run func_preprocess_datainfo

  3. run func_build_vocab (steps 3 and 4 are sketched after this list)

  4. run func_create_sequencelabel

  5. run func_convert_datainfo2cocofmt

  6. run func_compute_ciderdf # Pre-compute document frequency for CIDEr computation

  7. run func_compute_evalscores # Pre-compute evaluation scores (BLEU_4, CIDEr, METEOR, ROUGE_L) for each caption

  8. run func_extract_video_features # extract video features
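
The repository implements these steps with its own scripts; for orientation only, here is a minimal, self-contained sketch of what steps 3 and 4 (vocabulary building and sequence labels) typically amount to. The function names, special tokens, and thresholds below are illustrative assumptions, not the repository's actual code:

from collections import Counter

def build_vocab(captions, min_count=3):
    # Keep words seen at least min_count times; everything else maps to <unk>.
    counts = Counter(w for c in captions for w in c.lower().split())
    words = [w for w, n in counts.most_common() if n >= min_count]
    vocab = {"<eos>": 0, "<unk>": 1}
    for w in words:
        vocab[w] = len(vocab)
    return vocab

def caption_to_label(caption, vocab, max_len=20):
    # Turn a caption into a fixed-length sequence of word indices,
    # truncated/padded to max_len (the "sequence label" of step 4).
    idx = [vocab.get(w, vocab["<unk>"]) for w in caption.lower().split()]
    idx = idx[:max_len]
    return idx + [vocab["<eos>"]] * (max_len - len(idx))

if __name__ == "__main__":
    caps = ["a man is playing a guitar",
            "a dog runs on the beach",
            "a man plays the guitar"]
    vocab = build_vocab(caps, min_count=1)
    print(caption_to_label(caps[0], vocab, max_len=8))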

Training

Please refer to the opts.py file for the set of available train/test options.

# Train XE model
./train.sh 0 [GPUIDs]
# Train CST_GT_None/WXE model
./train.sh 1 [GPUIDs]
# Train CST_MS_Greedy model (using greedy baseline)
./train.sh 2 [GPUIDs]
# Train CST_MS_SCB model (using SCB baseline, where SCB is computed from GT captions)
./train.sh 3 [GPUIDs]
# Train CST_MS_SCB(*) model (using SCB baseline, where SCB is computed from model-sampled captions)
./train.sh 4 [GPUIDs]
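
The modes above differ mainly in their training objective: mode 0 is standard cross-entropy (XE), while the CST modes optimise a sequence-level reward (e.g. CIDEr) against a baseline, either a greedy-decoded caption or a sentence consensus baseline (SCB). As a rough illustration only, a generic REINFORCE-with-baseline loss of the kind these modes refer to might look like the sketch below; names are illustrative and this is not the repository's actual implementation:

import torch

def baselined_pg_loss(sample_logprobs, sample_reward, baseline_reward):
    # sample_logprobs : (batch,) summed log-probabilities of sampled captions
    # sample_reward   : (batch,) reward of the sampled captions, e.g. CIDEr
    # baseline_reward : (batch,) reward of the baseline (greedy caption or SCB)
    # Captions scoring above the baseline get their log-probability pushed up,
    # captions scoring below get pushed down.
    advantage = (sample_reward - baseline_reward).detach()
    return -(advantage * sample_logprobs).mean()

if __name__ == "__main__":
    logp = torch.tensor([-12.3, -9.8], requires_grad=True)
    loss = baselined_pg_loss(logp,
                             torch.tensor([0.8, 0.4]),
                             torch.tensor([0.5, 0.5]))
    loss.backward()
    print(loss.item(), logp.grad)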

Testing

./test.sh 0 [GPUIDs]
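
Scoring the generated captions relies on the coco-caption and cider checkouts mentioned under Dependencies. A minimal sketch, assuming the standard coco-caption scorer interface (Bleu/Cider classes with a compute_score method over dicts keyed by video id); the repository's own evaluation wrapper may organise this differently:

from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider

# references: every video id maps to a list of ground-truth captions
gts = {
    "video1": ["a man is playing a guitar", "a person plays guitar"],
    "video2": ["a dog runs on the beach"],
}
# results: exactly one generated caption per video id
res = {
    "video1": ["a man plays a guitar"],
    "video2": ["a dog is running on the beach"],
}

bleu, _ = Bleu(4).compute_score(gts, res)   # bleu is [BLEU_1, ..., BLEU_4]
cider, _ = Cider().compute_score(gts, res)
print("BLEU_4: %.3f  CIDEr: %.3f" % (bleu[3], cider))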

Acknowledgements

