Thursday, January 22, 2026

DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation


Diffusion large language models (dLLMs) are compelling alternatives to autoregressive (AR) models because their denoising models operate over the entire sequence. The global planning and iterative refinement capabilities of dLLMs are particularly useful for code generation. However, current training and inference mechanisms for dLLMs in coding are still under-explored. To demystify the decoding behavior of dLLMs and unlock their potential for coding, we systematically investigate their denoising processes and reinforcement learning (RL) methods. We train a 7B dLLM, DiffuCoder, on 130B tokens of code. Using this model as a testbed, we analyze its decoding behavior, revealing how it differs from that of AR models: (1) dLLMs can decide how causal their generation should be without relying on semi-AR decoding, and (2) increasing the sampling temperature diversifies not only token choices but also their generation order. This diversity creates a rich search space for RL rollouts. For RL training, to reduce the variance of token log-likelihood estimates and maintain training efficiency, we propose coupled-GRPO, a novel sampling scheme that constructs complementary mask noise for completions used in training. In our experiments, coupled-GRPO significantly improves DiffuCoder's performance on code generation benchmarks (+4.4% on EvalPlus) and reduces reliance on AR bias during decoding. Our work provides deeper insight into the machinery of dLLM generation and offers an effective, diffusion-native RL training framework.
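The core idea behind the complementary mask noise mentioned above can be sketched as follows: two masked copies of a completion are built so that every token position is masked in exactly one copy, meaning each token's log-likelihood is estimated at least once across the pair. This is a minimal illustration under assumed details (the function name, the 50/50 split, and the set-based representation are all hypothetical), not the paper's actual implementation.

```python
import random

def coupled_masks(completion_len, seed=0):
    """Build a complementary pair of mask-position sets over a completion.

    Each position is assigned to exactly one of the two masks, so the
    pair jointly covers every token once and the masks never overlap.
    (Hypothetical sketch: names and the 50/50 split are assumptions.)
    """
    rng = random.Random(seed)
    positions = list(range(completion_len))
    rng.shuffle(positions)
    cut = completion_len // 2  # assumed even split between the two copies
    mask_a = set(positions[:cut])   # positions masked in the first copy
    mask_b = set(positions[cut:])   # positions masked in the second copy
    return mask_a, mask_b

mask_a, mask_b = coupled_masks(10)
# Complementary by construction: disjoint masks whose union is all positions.
assert mask_a.isdisjoint(mask_b)
assert mask_a | mask_b == set(range(10))
```

Because the two copies are complementary, every completion token contributes to the loss under exactly one masking, which is one way to reduce the variance of per-token log-likelihood estimates without extra forward passes per token.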
