DISC: Dynamic Decomposition Improves LLM Inference Scaling (DL4C)

Publication Date: April 28, 2025

Event: Third Workshop on Deep Learning for Code (DL4C) at ICLR 2025

Reference: pp. 1-33, 2025

Authors: Jonathan Light, Rensselaer Polytechnic Institute; Wei Cheng, NEC Laboratories America, Inc.; Wu Yue, Princeton University; Masafumi Oyamada, NEC Corporation; Mengdi Wang, Princeton University; Santiago Paternain, Rensselaer Polytechnic Institute; Haifeng Chen, NEC Laboratories America, Inc.

Abstract: Inference scaling methods often rely on decomposing problems into steps, followed by sampling and selecting the best next steps. However, these steps and their sizes are typically fixed or depend on domain knowledge. We propose dynamic decomposition, a method that adaptively and automatically breaks down solution and reasoning traces into manageable steps during inference. By allocating compute more effectively—particularly by subdividing challenging steps and sampling them more frequently—dynamic decomposition significantly enhances inference efficiency. Experiments on benchmarks such as APPS, MATH, and LiveCodeBench demonstrate that dynamic decomposition outperforms static approaches, including token-level, sentence-level, and single-step decompositions. These findings highlight the potential of dynamic decomposition to improve a wide range of inference scaling techniques.
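The abstract's core idea—sample a step several times, and if the samples disagree, split the step and spend more compute on each half—can be illustrated with a toy sketch. This is not the paper's implementation: the function names (`sample_score`, `dynamic_decompose`), the difficulty representation, and the variance-based "hard step" heuristic are all assumptions for illustration; a real system would sample LLM completions and score them with a verifier.

```python
import random

def sample_score(step_difficulty, rng):
    # Stand-in for sampling one completion of a step and scoring it with a
    # verifier; harder steps yield noisier, lower scores on average.
    return max(0.0, 1.0 - step_difficulty * rng.random())

def dynamic_decompose(difficulties, budget, rng, depth=0, max_depth=3):
    """Toy dynamic decomposition over a trace of per-token difficulties.

    Treat the whole span as one step and sample it `budget` times. If the
    scores disagree widely (a sign the step is hard), split the span in
    half and recurse with a larger per-half budget, mirroring the idea of
    subdividing challenging steps and sampling them more frequently.
    """
    scores = [sample_score(max(difficulties), rng) for _ in range(budget)]
    spread = max(scores) - min(scores)
    if depth < max_depth and spread > 0.3 and len(difficulties) > 1:
        mid = len(difficulties) // 2
        left = dynamic_decompose(difficulties[:mid], budget * 2, rng,
                                 depth + 1, max_depth)
        right = dynamic_decompose(difficulties[mid:], budget * 2, rng,
                                  depth + 1, max_depth)
        return (left + right) / 2
    # Easy (or maximally subdivided) step: keep the best sampled score.
    return max(scores)

rng = random.Random(0)
# A mixed trace: two easy regions (0.1, 0.2) and two hard ones (0.8, 0.9).
result = dynamic_decompose([0.1, 0.8, 0.2, 0.9], budget=4, rng=rng)
```

The contrast with the static baselines in the abstract is that here the split points and per-step sample counts emerge from the observed score spread at inference time, rather than being fixed at the token, sentence, or single-step level.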

Publication Link: https://openreview.net/forum?id=wRpIJ9Jcth