TY  - JOUR
AU  - Li, Chengxi
AU  - Harrison, Brent
PY  - 2021/04/18
Y2  - 2024/03/29
TI  - 3M: Multi-style image caption generation using Multi-modality features under Multi-UPDOWN model
JF  - The International FLAIRS Conference Proceedings
JA  - FLAIRS
VL  - 34
IS  - 0
SE  - Main Track Proceedings
DO  - 10.32473/flairs.v34i1.128380
UR  - https://journals.flvc.org/FLAIRS/article/view/128380
SP  -
AB  - <p>In this paper, we build a multi-style generative model for stylish image captioning which uses multi-modality image features, ResNeXt features, and text features generated by DenseCap. We propose the 3M model, a Multi-UPDOWN caption model that encodes multi-modality features and decodes them into captions. We demonstrate the effectiveness of our model on generating human-like captions by examining its performance on two datasets, the PERSONALITY-CAPTIONS dataset and the FlickrStyle10K dataset. We compare against a variety of state-of-the-art baselines on various automatic NLP metrics such as BLEU, ROUGE-L, CIDEr, and SPICE (code will be available at https://github.com/cici-ai-club/3M). A qualitative study has also been done to verify that our 3M model can be used for generating different stylized captions.</p>
ER  -