Tab-CoT: Zero-shot Tabular Chain of Thought

Jin Ziqi, Wei Lu

Findings Paper: Question Answering

Session 4: Question Answering (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Keywords: logical reasoning, reasoning, math qa
Abstract: Chain-of-thought (CoT) prompting methods have been successful on various natural language processing (NLP) tasks thanks to their ability to unveil the underlying complex reasoning processes. Such reasoning processes typically exhibit highly structured steps. Recent efforts have also begun investigating methods that encourage more structured reasoning procedures to be captured (e.g., least-to-most prompting). In this work, we propose Tab-CoT, a novel tabular-format CoT prompting method, which allows the complex reasoning process to be explicitly modeled in a highly structured manner. Despite its simplicity, we show that our approach is capable of performing reasoning across multiple dimensions (i.e., both rows and columns). We demonstrate our approach's strong zero-shot and few-shot capabilities through extensive experiments on a range of reasoning tasks.
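To make the idea concrete, below is a minimal sketch of how tabular-format zero-shot prompting might be implemented, assuming a two-stage setup (first eliciting a reasoning table, then extracting the answer) and an illustrative table header |step|subquestion|process|result|; the exact header, model choice, and OpenAI client calls are assumptions for illustration, not necessarily the paper's exact configuration.

```python
# Sketch of zero-shot tabular CoT prompting (illustrative, not the paper's exact setup).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical table header that scaffolds the structured reasoning steps.
TABLE_HEADER = "|step|subquestion|process|result|"


def tab_cot(question: str, model: str = "gpt-3.5-turbo") -> str:
    # Stage 1: prompt the model to fill in a reasoning table instead of free-form text.
    reasoning_prompt = f"{question}\n{TABLE_HEADER}\n"
    table = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": reasoning_prompt}],
    ).choices[0].message.content

    # Stage 2: append the generated table and ask for the final answer.
    answer_prompt = f"{reasoning_prompt}{table}\nTherefore, the answer is"
    answer = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": answer_prompt}],
    ).choices[0].message.content
    return answer


if __name__ == "__main__":
    print(tab_cot("A store sold 3 boxes of 12 apples and 5 loose apples. "
                  "How many apples were sold in total?"))
```

The table header plays the role that "Let's think step by step" plays in standard zero-shot CoT: it nudges the model to lay out its reasoning row by row, with each column capturing one dimension of the reasoning (the step index, the subquestion, the working, and the intermediate result).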