3M: Multi-style image caption generation using Multi-modality features under Multi-UPDOWN model

Authors

  • Chengxi Li, University of Kentucky
  • Brent Harrison

DOI:

https://doi.org/10.32473/flairs.v34i1.128380

Keywords:

Multi-style image caption, Multi-modality, Multi-UPDOWN

Abstract

In this paper, we build a multi-style generative model for stylish image captioning that uses multi-modality image features: ResNeXt features and text features generated by DenseCap. We propose the 3M model, a Multi-UPDOWN caption model that encodes multi-modality features and decodes them into captions. We demonstrate the effectiveness of our model at generating human-like captions by examining its performance on two datasets, the PERSONALITY-CAPTIONS dataset and the FlickrStyle10K dataset. We compare against a variety of state-of-the-art baselines on automatic NLP metrics such as BLEU, ROUGE-L, CIDEr, and SPICE (code will be available at https://github.com/cici-ai-club/3M). A qualitative study has also been conducted to verify that our 3M model can generate different stylized captions.
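The abstract names the core architecture but not its details: a Multi-UPDOWN model that encodes both ResNeXt visual features and DenseCap text features and decodes them into captions. The sketch below is an illustration only, not the authors' implementation; it shows one plausible way to project and concatenate the two feature streams and feed them to an UpDown-style two-LSTM decoder in PyTorch. All dimensions, module names, and the attention design are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: fuse ResNeXt-style region features with
# DenseCap-style phrase embeddings, then decode with a two-layer
# UpDown-style LSTM. Sizes and fusion are assumptions, not the 3M paper's.

class MultiModalEncoder(nn.Module):
    def __init__(self, vis_dim=2048, txt_dim=300, hid_dim=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hid_dim)  # project visual region features
        self.txt_proj = nn.Linear(txt_dim, hid_dim)  # project text phrase embeddings

    def forward(self, vis_feats, txt_feats):
        # vis_feats: (B, R, vis_dim); txt_feats: (B, P, txt_dim)
        v = torch.relu(self.vis_proj(vis_feats))
        t = torch.relu(self.txt_proj(txt_feats))
        return torch.cat([v, t], dim=1)  # fused feature bank: (B, R+P, hid_dim)

class UpDownDecoder(nn.Module):
    def __init__(self, vocab_size, hid_dim=512, emb_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.att_lstm = nn.LSTMCell(emb_dim + 2 * hid_dim, hid_dim)  # top-down attention LSTM
        self.lang_lstm = nn.LSTMCell(2 * hid_dim, hid_dim)           # language LSTM
        self.att_query = nn.Linear(hid_dim, hid_dim)
        self.att_score = nn.Linear(hid_dim, 1)
        self.out = nn.Linear(hid_dim, vocab_size)

    def step(self, tok, feats, state):
        (h1, c1), (h2, c2) = state
        mean_feat = feats.mean(dim=1)
        x1 = torch.cat([self.embed(tok), mean_feat, h2], dim=-1)
        h1, c1 = self.att_lstm(x1, (h1, c1))
        # additive attention over the fused feature bank
        scores = self.att_score(torch.tanh(feats + self.att_query(h1).unsqueeze(1)))
        alpha = torch.softmax(scores, dim=1)
        ctx = (alpha * feats).sum(dim=1)
        h2, c2 = self.lang_lstm(torch.cat([ctx, h1], dim=-1), (h2, c2))
        return self.out(h2), ((h1, c1), (h2, c2))

# Tiny smoke test with random tensors (hypothetical sizes).
enc, dec = MultiModalEncoder(), UpDownDecoder(vocab_size=1000)
feats = enc(torch.randn(2, 36, 2048), torch.randn(2, 5, 300))
state = ((torch.zeros(2, 512), torch.zeros(2, 512)),
         (torch.zeros(2, 512), torch.zeros(2, 512)))
logits, state = dec.step(torch.tensor([1, 1]), feats, state)
```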


Published

2021-04-18

Section

Main Track Proceedings