China’s Tsinghua University introduces a “cumulative reasoning” framework for large language models
Chinese article by 爱集微
English Editor 张未名
10-07 17:57

By Greg Gao

(JW Insights) Oct 7 -- A research team led by Yao Qizhi and Yuan Yang from China’s prestigious Tsinghua University introduced the “Cumulative Reasoning (CR)” framework on September 27, significantly enhancing the accuracy of large language models (LLMs) in solving complex reasoning tasks, according to the university.

Specifically, the framework achieved a 98% accuracy rate on logical reasoning and 24-point game problems, and a relative improvement of 42% on mathematical problems (MATH Level 5).

Despite substantial progress, LLMs still struggle to provide stable and accurate answers when faced with highly complex reasoning tasks. To overcome this limitation, earlier researchers proposed thinking frameworks such as “Chain of Thought (CoT)” and “Tree of Thought (ToT)” that mimic human “deliberative” and “logical” thinking processes. However, these methods lack a storage mechanism for intermediate thinking results, preventing LLMs from simulating the complexity of human thought more comprehensively. To address this gap, the team introduced the “Cumulative Reasoning” framework, which aims to model the thinking process in a more general way.

The framework employs three distinct LLM roles to tackle complex reasoning problems: the Proposer, which suggests new intermediate propositions; the Verifier, which checks each proposition and stores only those that hold; and the Reporter, which decides when the accumulated propositions are sufficient to give the final answer.
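The announcement does not detail how the three roles interact, but the control flow they describe can be sketched roughly as follows. This is a minimal illustration only: the llm(prompt) completion callable, the prompt wording, the cumulative_reasoning function name, and the step limit are placeholder assumptions, not the team’s implementation.

from typing import Callable, List

def cumulative_reasoning(question: str,
                         premises: List[str],
                         llm: Callable[[str], str],
                         max_steps: int = 8) -> str:
    """Iteratively accumulate verified intermediate propositions, then answer."""
    accepted: List[str] = []  # storage for verified intermediate results

    for _ in range(max_steps):
        context = "\n".join(premises + accepted)

        # Proposer: suggest one new proposition based on the current context.
        proposal = llm(
            f"Known facts:\n{context}\nQuestion: {question}\n"
            "Propose one new proposition that follows from the known facts."
        )

        # Verifier: keep the proposition only if it is actually entailed.
        verdict = llm(
            f"Known facts:\n{context}\nProposition: {proposal}\n"
            "Is the proposition logically entailed by the known facts? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            accepted.append(proposal)
            context = "\n".join(premises + accepted)

        # Reporter: answer once the accumulated facts suffice.
        report = llm(
            f"Known facts:\n{context}\nQuestion: {question}\n"
            "If the facts are sufficient, state the final answer; otherwise reply CONTINUE."
        )
        if "CONTINUE" not in report.upper():
            return report

    return "No conclusive answer within the step limit."

In practice the three roles could be played by the same underlying model under different prompts; the point the description emphasizes is that verified intermediate results are stored and reused in later steps rather than discarded.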

The research team tested the “Cumulative Reasoning” framework on the FOLIO wiki, AutoTNLI, 24-point game, and MATH datasets. Results indicated that the novel framework consistently outperformed existing methods, showing an improvement of up to 9.3% on the FOLIO wiki and AutoTNLI datasets. 
