Exploring the Role of Reasoning Structures for Constructing Proofs in Multi-Step Natural Language Reasoning with Large Language Models
When performing complex multi-step reasoning tasks, the ability of Large Language Models (LLMs) to derive structured intermediate proof steps is important for ensuring that the models truly perform the desired reasoning and for improving the models' explainability. This paper centres on a focused study: whether current state-of-the-art generalist LLMs can leverage the structures in a few examples to better construct proof structures with in-context learning. Our study specifically focuses on structure-aware demonstration and structure-aware pruning. We demonstrate that both help improve performance, and we provide a detailed analysis to help understand the results.