Does GPT-3 Grasp Metaphors? Identifying Metaphor Mappings with Generative Language Models
Lennart Wachowiak, Dagmar Gromann
Track: Main Conference, Semantics: Lexical (Oral Paper)
Session 5: Semantics: Lexical (Oral)
Conference Room: Pier 2&3
Conference Time: July 11, 16:15-17:45 (EDT) (America/Toronto)
Global Time: July 11, Session 5 (20:15-21:45 UTC)
Keywords:
metaphor
TLDR:
We probe whether GPT-3 can detect metaphoric language and generate a metaphor's source domain without any pre-set domains, experimenting with fine-tuning and few-shot prompting on English and Spanish datasets.
Abstract:
Conceptual metaphors present a powerful cognitive vehicle to transfer knowledge structures from a source to a target domain.
Prior neural approaches focus on detecting whether natural language sequences are metaphoric or literal. We believe that to truly probe metaphoric knowledge in pre-trained language models, their capability to detect this transfer should be investigated.
To this end, this paper proposes to probe the ability of GPT-3 to detect metaphoric language and predict the metaphor's source domain without any pre-set domains. We experiment with different training sample configurations for fine-tuning and few-shot prompting on two distinct datasets. When provided with 12 few-shot samples in the prompt, GPT-3 generates the correct source domain for a new sample with an accuracy of 65.15% in English and 34.65% in Spanish. GPT-3's most common error is hallucinating a source domain for which no indicator is present in the sentence. Other common errors include identifying a sequence as literal even though a metaphor is present and predicting the wrong source domain based on specific words in the sequence that are not metaphorically related to the target domain.
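To illustrate the few-shot setup described above, the sketch below shows how a prompt pairing metaphoric sentences with their source domains might be assembled and sent to GPT-3. It is a minimal sketch only: the example sentences, the prompt format, the model name ("text-davinci-002"), and the use of the legacy (pre-1.0) openai Python client are assumptions for illustration, not the authors' actual code or data.

```python
import os
import openai  # legacy (<1.0) OpenAI client; hypothetical setup, not the paper's exact code

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical few-shot examples pairing a metaphoric sentence with its source domain.
# The paper's best-performing configuration uses 12 such examples in the prompt.
FEW_SHOT_EXAMPLES = [
    ("She attacked every weak point in my argument.", "war"),
    ("We are at a crossroads in our relationship.", "journey"),
    ("He shot down all of my suggestions.", "war"),
    # ... further examples up to 12 ...
]

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot prompt asking the model for the metaphor's source domain."""
    lines = []
    for example_sentence, source_domain in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {example_sentence}\nSource domain: {source_domain}\n")
    lines.append(f"Sentence: {sentence}\nSource domain:")
    return "\n".join(lines)

def predict_source_domain(sentence: str) -> str:
    """Query GPT-3 and return its generated source domain for the given sentence."""
    response = openai.Completion.create(
        model="text-davinci-002",  # assumption; the paper may have used a different GPT-3 engine
        prompt=build_prompt(sentence),
        max_tokens=5,
        temperature=0,             # deterministic decoding for evaluation
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(predict_source_domain("Your claims are indefensible."))  # expected output, e.g., "war"
```

Because the model generates the source domain as free text rather than choosing from a fixed label set, evaluation has to match generated strings against gold domains, which is consistent with the error types reported above (hallucinated domains, literal readings, and plausible but wrong domains).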