A new study combines neural networks with symbolic logic to improve multi-step reasoning in large language models
In a new research paper, scientists from the University of Hamburg explore an innovative neurosymbolic technique to enhance logical reasoning in large language models (LLMs). By integrating neural networks with principles of symbolic logic, they have developed a method that significantly boosts the reasoning prowess of LLMs.
While LLMs like GPT-3 exhibit extensive knowledge and impressive language proficiency, their reasoning ability is far from perfect. When confronted with scenarios that require coherent, multi-step inference, these models struggle and are prone to logical lapses. Their responses frequently contradict themselves or make logical leaps without a sound basis.
Without an innate capacity for structured logical reasoning, LLMs hallucinate incorrect or nonsensical information. Their reasoning lacks the constraints of formal logic that guide systematic thinking in humans.
Bridging this gap requires equipping LLMs with the ability to methodically verify the validity of each step in a multi-step inference process before drawing conclusions.
To address this problem, the Hamburg researchers turned to neurosymbolic AI — combining neural networks with symbolic logic representations.
Their proposed technique, LogiCoT, enhances LLMs with logical reasoning capabilities using a simple but effective principle called reductio ad absurdum.
Reductio ad absurdum (or proof by contradiction) is a mode of argument that establishes a proposition by showing that assuming its negation leads to absurd or contradictory conclusions.

By methodically drawing out those contradictions, the argument firmly establishes the truth of the original proposition.
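To make the principle concrete, here is a minimal, hypothetical Python sketch, not taken from the paper, that checks whether a conclusion follows from a set of premises by reductio ad absurdum: it searches for a truth assignment in which every premise holds but the conclusion is false, and if no such assignment exists, denying the conclusion is contradictory, so the conclusion is established. The function name proves_by_contradiction and the rain/wet example are illustrative assumptions, not part of LogiCoT.

```python
from itertools import product

def proves_by_contradiction(premises, conclusion, variables):
    """Check entailment by reductio ad absurdum: assume the conclusion is
    false and search for a truth assignment that still satisfies every
    premise. If none exists, the negated conclusion is contradictory,
    so the conclusion follows from the premises."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False  # counterexample: premises hold, conclusion fails
    return True  # every way of denying the conclusion violates a premise

# Toy example (illustrative only): "if it rains, the street is wet" and
# "it rains" entail "the street is wet"; assuming a dry street contradicts them.
premises = [
    lambda a: (not a["rain"]) or a["wet"],  # rain -> wet
    lambda a: a["rain"],                    # rain
]

def conclusion(a):
    return a["wet"]

print(proves_by_contradiction(premises, conclusion, ["rain", "wet"]))  # prints True
```

LogiCoT applies the same underlying idea to an LLM's own reasoning chain rather than to formal propositional variables, with the model checking each step for contradictions before drawing conclusions.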