Friday, April 3, 2026

Personalized Group Relative Policy Optimization for Heterogeneous Preference Alignment


Despite their sophisticated general-purpose capabilities, Large Language Models (LLMs) often fail to align with diverse individual preferences because standard post-training methods, such as Reinforcement Learning from Human Feedback (RLHF), optimize for a single, global objective. While Group Relative Policy Optimization (GRPO) is a widely adopted on-policy reinforcement learning framework, its group-based normalization implicitly assumes that all samples are exchangeable, so it inherits this limitation in personalized settings. This assumption conflates distinct user reward distributions and systematically biases learning toward dominant preferences while suppressing minority signals. To address this, we introduce Personalized GRPO (P-GRPO), a novel alignment framework that decouples advantage estimation from immediate batch statistics. By normalizing advantages against preference-group-specific reward histories rather than the concurrent generation group, P-GRPO preserves the contrastive signal necessary for learning distinct preferences. We evaluate P-GRPO across diverse tasks and find that it consistently achieves faster convergence and higher rewards than standard GRPO, thereby improving its ability to recover and align with heterogeneous preference signals. Our results demonstrate that accounting for reward heterogeneity at the optimization level is essential for building models that faithfully align with diverse human preferences without sacrificing general capabilities.
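To make the core idea concrete, here is a minimal Python sketch of the advantage computation described in the abstract: standard GRPO normalizes rewards against the concurrent generation group, while the P-GRPO-style variant normalizes against a running reward history kept per preference group. The class and function names (`PreferenceGroupNormalizer`, `group_id`, the history-size cap) are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np
from collections import defaultdict

# Hypothetical sketch of preference-group-specific advantage normalization.
# Not the authors' code; names and hyperparameters are assumptions.

class PreferenceGroupNormalizer:
    """Tracks a running reward history per preference group and normalizes
    advantages against that history instead of the current batch."""

    def __init__(self, eps: float = 1e-8, max_history: int = 10_000):
        self.histories = defaultdict(list)  # group_id -> list of past rewards
        self.eps = eps
        self.max_history = max_history

    def advantages(self, group_id: str, rewards: np.ndarray) -> np.ndarray:
        history = self.histories[group_id]
        history.extend(rewards.tolist())
        # Bound the history so the statistics track recent policy behaviour.
        if len(history) > self.max_history:
            del history[: len(history) - self.max_history]
        mean, std = np.mean(history), np.std(history)
        # Normalize against this preference group's own reward distribution.
        return (rewards - mean) / (std + self.eps)


def grpo_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standard GRPO: normalize against the concurrent generation group,
    implicitly treating all samples in the batch as exchangeable."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)


# Example: two preference groups with very different reward scales.
normalizer = PreferenceGroupNormalizer()
rewards_a = np.array([0.90, 0.80, 0.85, 0.95])  # dominant preference
rewards_b = np.array([0.20, 0.10, 0.30, 0.15])  # minority preference

# Pooled normalization pushes all of group B's advantages negative...
print(grpo_advantages(np.concatenate([rewards_a, rewards_b])))
# ...whereas per-group normalization keeps a contrastive signal within B.
print(normalizer.advantages("group_a", rewards_a))
print(normalizer.advantages("group_b", rewards_b))
```

In this toy example, pooling both groups in one normalization makes every minority-group sample look uniformly bad, which is the suppression effect the abstract describes; normalizing each group against its own reward history preserves which of its responses were relatively better.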
