Neural-symbolic Contrastive Learning for Cross-domain Inference
Mingyue Liu, Jialin Yu, Hao Cui, Sara Uckelman, Yang Long
1st Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2023), Long Paper
TLDR:
Large PLMs may rely on shallow heuristics rather than genuine generalization for NLI; we propose an ILP-inspired neural-symbolic contrastive learning framework that represents data as logic programs and learns an embedding space that groups examples by underlying logical meaning, improving cross-domain transferability.
Abstract:
It has been suggested in the literature that large pre-trained language models (PLMs) are able to surpass human-level performance on natural language inference (NLI) tasks. However, their failure to learn the underlying generalizations and their inconsistency under small textual perturbations raise doubts about whether these models merely adopt shallow heuristics to guess the correct label. To mitigate this issue, we propose a neural-symbolic contrastive learning framework inspired by Inductive Logic Programming (ILP) to better capture logical relationships in data. Unlike the usual approaches to NLI tasks, ours represents data as logic programs, i.e., sets of logic rules. We aim to learn an embedding space in which examples with maximally diverse textual surface forms but similar underlying logical meanings lie close together, and vice versa. Experimental results affirm this approach's ability to enhance the model's cross-domain transferability.
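The embedding objective described in the abstract, pulling together examples that share a logical meaning regardless of surface text, can be realized with a supervised-contrastive-style loss. The sketch below is a minimal illustration under assumed names, not the authors' implementation: `logic_contrastive_loss`, `logic_ids`, and `temperature` are all hypothetical, with `logic_ids` standing in for whatever identifier marks examples whose logic programs coincide.

```python
# Minimal sketch (not the paper's code): contrastive loss where positives
# are examples sharing the same underlying logic program.
import torch
import torch.nn.functional as F

def logic_contrastive_loss(embeddings: torch.Tensor,
                           logic_ids: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Pull together embeddings whose `logic_ids` match; push apart the rest."""
    z = F.normalize(embeddings, dim=1)            # (N, d) unit vectors
    sim = z @ z.T / temperature                   # pairwise scaled cosine similarity
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs
    # Positives: same logic program, different example.
    pos_mask = (logic_ids.unsqueeze(0) == logic_ids.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    # Average log-likelihood of positives per anchor; anchors with no
    # positive in the batch are skipped.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    loss = -pos_log_prob.sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

# Usage: embeddings 0 and 2 share one logic program, 1 and 3 another.
emb = torch.randn(4, 768)
ids = torch.tensor([0, 1, 0, 1])
print(logic_contrastive_loss(emb, ids).item())
```

Under this formulation, textually dissimilar paraphrases of the same logic program act as hard positives, which is one way to discourage the shallow lexical heuristics the abstract warns about.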