I Spy a Metaphor: Large Language Models and Diffusion Models Co-Create Visual Metaphors
Tuhin Chakrabarty, Arkadiy Saakyan, Olivia Winn, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, Smaranda Muresan
1st Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2023) Long Paper
TLDR:
Visual metaphors are powerful rhetorical devices used to persuade or communicate creative ideas through images. Similar to linguistic metaphors, they convey meaning implicitly through symbolism and the juxtaposition of symbols. We propose a new task of generating visual metaphors from linguistic metaphors.
Abstract:
Visual metaphors are powerful rhetorical devices used to persuade or communicate creative ideas through images. Similar to linguistic metaphors, they convey meaning implicitly through symbolism and the juxtaposition of symbols. We propose a new task of generating visual metaphors from linguistic metaphors. This is a challenging task for diffusion-based text-to-image models, such as DALL-E-2, since it requires the ability to model implicit meaning and compositionality. We propose to solve the task through the collaboration between Large Language Models and Diffusion Models. We use GPT-3 with Chain-of-Thought prompting to generate text that represents a visual elaboration of the linguistic metaphor, containing the implicit meaning and relevant objects, which is then used as input to the diffusion-based text-to-image models. Using a human-AI collaboration framework, in which humans interact with both the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors. Evaluation by professional illustrators shows the promise of LLM-Diffusion Model collaboration for this task. We also perform an intrinsic and an extrinsic evaluation using a downstream task: visual entailment. Fine-tuning a state-of-the-art vision-language model on our dataset leads to a 23-point improvement in accuracy compared to its performance when fine-tuned on SNLI-VE, a large-scale visual entailment dataset.
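As a rough illustration of the two-stage pipeline the abstract describes (a minimal sketch, not the authors' released code), the LLM elaboration step and the text-to-image step could be wired together as below. The prompt wording, the model identifiers (a chat model standing in for GPT-3, and runwayml/stable-diffusion-v1-5 standing in for the diffusion models compared in the paper), and the elaborate_metaphor helper are all illustrative assumptions; they use the openai and diffusers Python libraries.

# Sketch of the LLM -> diffusion pipeline described in the abstract.
# Prompt text, model names, and helper names are illustrative, not the paper's exact setup.
import torch
from openai import OpenAI
from diffusers import StableDiffusionPipeline

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def elaborate_metaphor(metaphor: str) -> str:
    """Ask an LLM for a visual elaboration of the metaphor: the implicit
    meaning spelled out, plus concrete objects an illustrator could draw."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; the paper uses GPT-3 with Chain-of-Thought prompting
        messages=[{
            "role": "user",
            "content": (
                "Explain step by step what this metaphor implies, then give a short, "
                "literal scene description that could be drawn as an image.\n"
                f"Metaphor: {metaphor}\nScene:"
            ),
        }],
    )
    return response.choices[0].message.content.strip()

# Stage 2: feed the visual elaboration to a diffusion-based text-to-image model.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

scene = elaborate_metaphor("My bedroom is a pigsty.")
image = pipe(scene).images[0]
image.save("visual_metaphor.png")

In the paper itself, the elaboration prompt and the choice of diffusion model are selected and refined within a human-AI collaboration loop rather than run fully automatically as in this sketch.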