3M: Multi-style image caption generation using Multi-modality features under Multi-UPDOWN model

Authors

  • Chengxi Li, University of Kentucky
  • Brent Harrison

DOI:

https://doi.org/10.32473/flairs.v34i1.128380

Keywords:

Multi-style image caption, Multi-modality, Multi-UPDOWN

Abstract

In this paper, we build a multi-style generative model for stylized image captioning that uses multi-modality features: ResNeXt image features and text features generated by DenseCap. We propose the 3M model, a Multi-UPDOWN caption model that encodes these multi-modality features and decodes them into captions. We demonstrate the effectiveness of our model at generating human-like captions by examining its performance on two datasets, the PERSONALITY-CAPTIONS dataset and the FlickrStyle10K dataset. We compare against a variety of state-of-the-art baselines on standard automatic NLP metrics, including BLEU, ROUGE-L, CIDEr, and SPICE (code is available at https://github.com/cici-ai-club/3M). We also conduct a qualitative study to verify that our 3M model can generate captions in different styles.
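The abstract leaves the encoding and fusion details to the paper itself. As a rough illustration of the idea, the sketch below shows one plausible reading: an UPDOWN-style two-LSTM decoder step that attends separately over ResNeXt region features and embedded DenseCap text, then fuses the two attended contexts by concatenation. This is a minimal PyTorch sketch; the class name, layer sizes, and concatenation-based fusion are illustrative assumptions, not the authors' implementation (see the repository above for the actual code).

import torch
import torch.nn as nn

class MultiModalUpDownDecoder(nn.Module):
    # Illustrative sketch, not the authors' 3M implementation.
    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512, img_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.img_proj = nn.Linear(img_dim, hidden_dim)    # ResNeXt features -> shared space
        self.txt_proj = nn.Linear(embed_dim, hidden_dim)  # DenseCap tokens -> shared space
        self.att_lstm = nn.LSTMCell(embed_dim + hidden_dim, hidden_dim)
        self.lang_lstm = nn.LSTMCell(2 * hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def attend(self, feats, query):
        # Additive attention of the query state over a set of features.
        scores = self.att_score(torch.tanh(feats + query.unsqueeze(1)))  # (B, N, 1)
        return (torch.softmax(scores, dim=1) * feats).sum(dim=1)         # (B, H)

    def step(self, img_feats, txt_tokens, word, states):
        # img_feats: (B, R, img_dim) ResNeXt region features
        # txt_tokens: (B, T) token ids from DenseCap descriptions
        # word: (B,) previous word ids; states: ((h, c) att, (h, c) lang)
        (h_att, c_att), (h_lang, c_lang) = states
        img = self.img_proj(img_feats)               # (B, R, H)
        txt = self.txt_proj(self.embed(txt_tokens))  # (B, T, H)
        mean_ctx = torch.cat([img, txt], dim=1).mean(dim=1)
        h_att, c_att = self.att_lstm(
            torch.cat([self.embed(word), mean_ctx], dim=1), (h_att, c_att))
        # Attend over each modality separately, then fuse by concatenation.
        ctx = torch.cat([self.attend(img, h_att), self.attend(txt, h_att)], dim=1)
        h_lang, c_lang = self.lang_lstm(ctx, (h_lang, c_lang))
        return self.out(h_lang), ((h_att, c_att), (h_lang, c_lang))

One decoding step might then look like the following; in a multi-style setting, a style or personality token would additionally condition the decoder (for example, prepended to the word sequence), which is omitted here for brevity.

dec = MultiModalUpDownDecoder(vocab_size=10000)
states = tuple((torch.zeros(2, 512), torch.zeros(2, 512)) for _ in range(2))
logits, states = dec.step(torch.randn(2, 36, 2048),          # ResNeXt features
                          torch.randint(0, 10000, (2, 20)),  # DenseCap token ids
                          torch.randint(0, 10000, (2,)),     # previous words
                          states)                            # logits: (2, 10000)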

Published

18-04-2021

How to Cite

Li, C., & Harrison, B. (2021). 3M: Multi-style image caption generation using Multi-modality features under Multi-UPDOWN model. The International FLAIRS Conference Proceedings, 34. https://doi.org/10.32473/flairs.v34i1.128380

Issue

Vol. 34 No. 1 (2021)

Section

Main Track Proceedings