Wednesday, January 28, 2026

Learning to Reason as Action Abstractions with Scalable Mid-Training RL


Large language models excel with reinforcement learning (RL), but fully unlocking this potential requires a mid-training stage. An effective mid-training phase should identify a compact set of useful actions and enable fast selection among them through online RL. We formalize this intuition by presenting the first theoretical result on how mid-training shapes post-training: it characterizes an action subspace that minimizes both the value approximation error from pruning and the RL error during subsequent planning. Our analysis reveals two key determinants of mid-training effectiveness: pruning efficiency, which shapes the prior of the initial RL policy, and its impact on RL convergence, which governs the extent to which that policy can be improved via online interactions. These results suggest that mid-training is most effective when the decision space is compact and the effective horizon is short, highlighting the importance of operating in the space of action abstractions rather than primitive actions. Building on these insights, we propose Reasoning as Action Abstractions (RA3), a scalable mid-training algorithm. Specifically, we derive a sequential variational lower bound and optimize it by iteratively discovering temporally consistent latent structures via RL, followed by fine-tuning on the bootstrapped data. Experiments on code generation tasks demonstrate the effectiveness of our approach. Across multiple base models, RA3 improves the average performance on HumanEval and MBPP by 8 and 4 points over the base model and the next-token prediction baseline, respectively. Furthermore, RA3 achieves faster convergence and higher asymptotic performance in RLVR on HumanEval+, MBPP+, LiveCodeBench, and Codeforces.
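The abstract does not spell out the form of the bound, but a generic sequential variational lower bound over latent action abstractions gives a sense of what is being optimized. The sketch below is an illustration under assumed notation (z_t for the latent abstraction at step t of a response y to prompt x, and the factorizations of p_theta and q_phi), not the paper's exact derivation:

\log p_\theta(y \mid x) \;\ge\; \mathbb{E}_{q_\phi(z_{1:T} \mid x, y)}\!\left[ \sum_{t=1}^{T} \log p_\theta(y_t \mid x, y_{<t}, z_{\le t}) \;-\; \mathrm{KL}\!\left( q_\phi(z_t \mid x, y, z_{<t}) \,\|\, p_\theta(z_t \mid x, y_{<t}, z_{<t}) \right) \right]

Read this way, the two alternating steps described above loosely correspond to the two parts of the bound: discovering temporally consistent latent structures via RL shapes the posterior over z_{1:T}, while fine-tuning on the bootstrapped data maximizes the per-step reconstruction terms.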
