Thursday, May 14, 2026

Nous Research Releases Token Superposition Training to Speed Up LLM Pre-Training by Up to 2.5x Across 270M to 10B Parameter Models


Pre-training large language models is expensive enough that even modest efficiency improvements translate into meaningful cost and time savings. Nous Research is releasing Token Superposition Training (TST), a method that significantly reduces pre-training wall-clock time at fixed compute without touching the model architecture, optimizer, tokenizer, parallelism strategy, or training data.

At the 10B-A1B mixture-of-experts scale, TST reaches a lower final training loss than a matched-FLOPs baseline while consuming 4,768 B200-GPU-hours versus the baseline's 12,311 — roughly a 2.5x reduction in total pre-training time.

https://arxiv.org/pdf/2605.06546

The Problem TST Is Solving

Modern LLM pre-training is heavily data-driven. Current training regimes routinely overtrain well past compute-optimal estimates, and raw text throughput — how much data a model can process per FLOP — has become a key lever. Subword tokenizers like BPE already improve throughput by compressing sequences, and research suggests much of BPE's advantage over byte-level models comes simply from shorter sequences, which let the model see more text per unit of compute.

TST asks whether that throughput lever can be pulled further during training, independently of the tokenizer and without permanently altering the model.

How TST Works: Two Phases

TST modifies the standard pre-training loop in two sequential phases:

Phase 1 — Superposition: For the first r fraction of total training steps (the paper finds r ∈ [0.2, 0.4] to be near optimal across tested scales), the model does not receive individual tokens. Instead, the input sequence of length L is segmented into non-overlapping bags of s contiguous tokens. In the embedding layer, each bag is collapsed into a single latent "s-token" by averaging the s token embeddings. The transformer then processes a sequence of length L/s.

Crucially, each TST step is kept equal-FLOPs to a standard training step by increasing the data sequence length by s times during the superposition phase. Because each latent position corresponds to s source tokens, the model ingests s times as much text per unit of compute — this is what drives the throughput gain.
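
To make the equal-FLOPs bookkeeping concrete, here is a minimal illustrative sketch (variable names are ours, not the paper's code):

# Illustrative only: how a Phase 1 step stays equal-FLOPs to a baseline step.
base_seq_len = 4096                     # positions a standard step processes
s = 6                                   # bag size
phase1_data_len = base_seq_len * s      # raw tokens drawn per sequence in Phase 1
positions = phase1_data_len // s        # latent positions after bagging
assert positions == base_seq_len        # same transformer work per step, s times more text read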

On the output side, each latent position predicts the next bag of s tokens rather than a single next token. The standard cross-entropy loss is replaced with a multi-hot cross-entropy (MCE) loss, which assigns equal probability mass 1/s to each token in the target bag. The MCE loss reduces to a simple mean of standard cross-entropy terms over the s targets — it can be implemented using the fused CE kernels already present in any major pre-training library, without writing a new kernel or adding an auxiliary head.
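
Written out in our notation (matching the description above), the MCE loss at one latent position with logits z and target bag (y_1, ..., y_s) is

L_MCE(z) = (1/s) · Σ_{i=1..s} CE(z, y_i) = −(1/s) · Σ_{i=1..s} log p_θ(y_i | z)

i.e., s ordinary cross-entropy terms averaged together.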

Phase 2 — Recovery: After the superposition phase, training resumes from the saved checkpoint with standard next-token prediction for the remaining (1 − r) fraction of steps. The TST code is fully removed at this boundary to avoid any experimental contamination. A transient loss spike occurs at the transition, typically between 1 and 2 nats, and resolves within a few thousand steps. After that, the recovered model crosses below the equal-FLOPs baseline and stays there.

The model produced at the end of Phase 2 is architecturally identical to one produced by conventional pre-training, with the same next-token-prediction inference behavior.

What the Experiments Show

TST was validated at four scales: 270M and 600M dense (SmolLM2 shapes adapted to the Llama3 modeling code, with the Llama3-8B tokenizer and untied input/output embeddings — which makes the 270M model equivalent in size to SmolLM2-135M and the 600M to SmolLM2-360M), 3B dense (SmolLM3 shape), and a 10B-A1B MoE in the Qwen3 family. Training used the DCLM dataset for the smaller runs and a 50/50 mix of DCLM and FineWeb-Edu for the MoE run. All runs used AdamW with the Warmup-Stable-Decay learning-rate schedule and ran in TorchTitan under FSDP parallelism, on 64 NVIDIA B200 GPUs for the larger models and 8 B200 GPUs for the smaller ones.

At the 3B scale with bag size s = 6 and step ratio r = 0.3, TST at 20,000 steps reaches a final loss of 2.676 — nearly matching a 36,000-step baseline at 2.677 — while using 247 B200-GPU-hours versus 443. The 20k-step TST run scores 62.4 on HellaSwag and 66.3 on ARC-Easy, versus 62.3 and 65.9 for the 36k baseline.

At the 10B-A1B MoE scale with s = 16 and r ≈ 0.25, the TST run processes 2T data tokens and reaches a final loss of 2.236, below the baseline's 2.252 after 1.05T tokens, while beating it on all four reported benchmarks: HellaSwag (71.2 vs. 70.1), ARC-Easy (74.2 vs. 73.8), ARC-Challenge (47.3 vs. 46.3), and MMLU (39.0 vs. 37.4).

The research team presents three comparison views against the baseline — equal-FLOPs, equal-loss, and equal-data. Under equal-FLOPs and equal-loss conditions, TST consistently wins. Under equal total token consumption, the baseline wins, because TST's effective compute budget per data token is smaller. This is an important boundary condition that determines where TST applies.

Two Distinct Mechanisms

An ablation study isolates the input-side and output-side components. Each independently outperforms the baseline; combining them produces further improvement without signs of interference. The authors interpret this as evidence that TST is two orthogonal mechanisms rather than a single trick.

The output-side mechanism — next-bag-of-tokens prediction — is conceptually related to multi-token prediction (MTP). Unlike MTP, which adds k independent prediction heads and extra parameters, TST keeps a single output head and replaces only the target. This makes it the least expensive member of a growing class of future-signal auxiliary objectives. Unlike MTP, it shows consistent gains across all tested scales, including small models where MTP has been shown to degrade performance.

The input-side mechanism has no direct analog in the recent pre-training literature. The research team offers two plausible explanations: it may implicitly regularize the embedding geometry (since many random s-grams of tokens must remain linearly separable once averaged), or it may act as a form of pre-pre-training, exposing the model to a coarser version of the real data before fine-resolution language modeling begins.

A targeted ablation directly tests what happens when representation continuity is broken. The research team runs a 3B TST experiment in which the input embedding and output LM head are randomly re-initialized at the start of Phase 2. The result: final loss jumps to 2.938 — worse than both the TST run (2.676) and the standard baseline (2.808). The Phase 1 TST steps contributed nothing to the final model. This confirms that shared representations across both phases are not incidental to TST's success — they are what makes it work.

Marktechpost's Visual Explainer

Token Superposition Training — Practical Guide
arXiv 2605.06546

01 / Overview

What Is Token Superposition Training?

Token Superposition Training (TST) is a two-phase pre-training method from Nous Research that increases token throughput per FLOP without altering the model architecture, optimizer, tokenizer, parallelism, or training data.

The core idea: Instead of feeding one token at a time, average s contiguous token embeddings into one "s-token," train on that for the first r fraction of steps, then switch back to standard next-token prediction. The final model is architecturally identical to one trained normally.

  • Phase 1 (Superposition) — the model reads bags of s tokens and predicts the next bag
  • Phase 2 (Recovery) — standard next-token prediction resumes from the checkpoint
  • Inference — completely unchanged; no new heads, no new parameters
  • Validated at 270M, 600M, 3B dense and 10B-A1B MoE

TST gains compute efficiency at the cost of higher data consumption. Best suited to compute-bound pre-training, not data-bound settings.

02 / Phase 1

Phase 1 — The Superposition Phase

For the first r fraction of total training steps, the input sequence of length L is split into non-overlapping bags of s contiguous tokens. Their embeddings are averaged into a single latent s-token. The transformer processes a sequence of length L/s — but each position corresponds to s real tokens, so throughput is higher at the same FLOPs.

Equal-FLOPs trick: To keep each step equal-FLOPs to baseline, the data sequence length is increased by s× — not the batch size. Every TST step costs the same compute as a standard step.

On the output side, the loss target shifts from a single next token to the next bag of s tokens. The multi-hot cross-entropy (MCE) loss assigns equal probability mass 1/s to each token in the target bag:

# L_MCE = mean of s standard CE terms
# pred: [batch * seq, vocab] logits; labels: [batch, seq, s] target bags
loss = 0.0
for i in range(superposition_bag_size):
    target = labels[..., i].flatten(0, 1)
    loss = loss + torch.nn.functional.cross_entropy(pred, target)
loss = loss / superposition_bag_size

No new kernel needed — this reuses the existing fused CE kernel in your pre-training library.

03 / Phase 2

Phase 2 — The Recovery Phase

After r × total_steps of superposition training, resume from the checkpoint with the TST code fully removed. Standard next-token prediction runs for the remaining (1 − r) × total_steps.

What happens at the switch: A loss spike of 1–2 nats occurs at the phase boundary. It resolves within a few thousand steps. After that, the model crosses below the equal-FLOPs baseline and stays there.

  • Remove the TST code entirely — don't keep it as an auxiliary loss during Phase 2
  • Don't re-initialize the input embedding or LM head at the boundary
  • Shared representations across both phases are what make TST work

Re-initializing the embedding or LM head at the phase boundary completely breaks TST. In a 3B ablation, this raised final loss from 2.676 to 2.938 — worse than the 2.808 baseline. The Phase 1 steps contributed nothing.
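
For orientation, here is a minimal sketch of the hard two-phase switch (helper names such as next_batch, tst_step, and standard_step are ours, not the paper's code; a possible tst_step body is sketched in section 04):

# Illustrative two-phase schedule, not the paper's training loop.
superposition_steps = int(step_ratio * total_steps)   # r × total_steps

for step in range(total_steps):
    if step < superposition_steps:
        # Phase 1: s×-longer raw sequences, bagged inputs, MCE loss
        batch = next_batch(seq_len=base_seq_len * bag_size)
        loss = tst_step(model, batch.inputs, batch.labels, bag_size)
    else:
        # Phase 2: standard next-token prediction; TST code path removed
        batch = next_batch(seq_len=base_seq_len)
        loss = standard_step(model, batch)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()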

04 / Implementation

PyTorch Implementation

Three modifications to the standard training loop — input folding, averaged embedding lookup, and MCE loss.

# 1. Input folding (inside the train loop)
if superposition_bag_size is not None and superposition_bag_size > 1:
    bs, seq = inputs.shape
    inputs = inputs.reshape(
        bs, seq // superposition_bag_size, superposition_bag_size
    )

# 2. Averaged embedding lookup (inside the model forward)
if len(tokens.shape) == 3:
    bs, sp_seq, superposition_bag_size = tokens.shape
    h_dtype = self.tok_embeddings.weight.dtype   # training dtype to cast back to
    h = self.tok_embeddings(tokens[..., 0]).float()
    for i in range(1, superposition_bag_size):
        h = h + self.tok_embeddings(tokens[..., i]).float()
    h = (h / superposition_bag_size).to(h_dtype)
else:
    h = self.tok_embeddings(tokens)

Note: Sum in float32 for numerical precision, then cast back to the training dtype. The embedding layer is the only forward-pass change.
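
As a usage sketch, the three pieces combine into a single Phase 1 step roughly as follows. This is our glue code under the assumptions above (a model whose embedding layer handles 3-D token inputs as in snippet 2, and labels already shifted so each position's target is its next bag); it is not the paper's implementation:

import torch.nn.functional as F

def tst_step(model, inputs, labels, s):
    # inputs, labels: [batch, seq] token ids, with seq divisible by s
    bs, seq = inputs.shape
    folded = inputs.reshape(bs, seq // s, s)   # 1. input folding
    logits = model(folded)                     # 2. averaging happens inside forward
    pred = logits.flatten(0, 1)                # [batch * seq // s, vocab]
    bags = labels.reshape(bs, seq // s, s)     # target bag per latent position
    loss = 0.0
    for i in range(s):                         # 3. MCE: mean of s CE terms
        loss = loss + F.cross_entropy(pred, bags[..., i].flatten(0, 1))
    return loss / s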

05 / Hyperparameters

Tuning Bag Size s and Step Ratio r

Two hyperparameters control TST. Both have well-defined practical ranges validated across model scales.

Step Ratio r
0.2 – 0.4
Fraction of total steps run in superposition mode. Robust across all tested scales. Below 0.2, the throughput gain is too small. Above 0.5, Phase 2 can't fully recover.

Bag Size s
3 – 16
U-shaped optimum that shifts with model size. Start in the flat basin; overshooting makes the bag target too lossy to recover from.

Model Size     Recommended s   Recommended r
270M           3 – 8           0.2 – 0.4
600M           6 – 10          0.2 – 0.4
3B             6 (tested)      0.3 (tested)
10B-A1B MoE    16 (tested)     ~0.25 (tested)

Large bag sizes (s ≥ 8): Switch from uniform MCE loss weighting to power-law weighting (weight 1/i for position i). This is motivated by the mutual information between token pairs decaying as a power law with distance (fitted exponent k ≈ −1.25 on DCLM).
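
A minimal sketch of that weighting (the 1/i rule is from the description above; normalizing the weights to sum to 1 is our assumption):

import torch

def power_law_mce_weights(s):
    # weight position i in the bag by 1/i (i = 1..s), then normalize
    w = 1.0 / torch.arange(1, s + 1, dtype=torch.float32)
    return w / w.sum()

# Replaces the uniform 1/s average in the MCE loop:
# loss = sum(w[i] * F.cross_entropy(pred, bags[..., i].flatten(0, 1)) for i in range(s))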

06 / Negative Results

What Doesn't Work

The paper documents several variants that were tested and failed. Save yourself the compute.

  • Positional encodings before averaging — adding RoPE or sinusoidal encodings to tokens before the mean consistently hurt performance. Within-bag permutation invariance appears to be a feature, not a bug.
  • RoPE rescaling at the phase transition — accelerated early Phase 2 recovery but often raised final loss. Leave RoPE unchanged across the boundary.
  • s independent heads — replacing the single MCE head with s separate heads predicting the s positions gave no consistent gain at higher parameter cost and implementation complexity.
  • Binary cross-entropy / hinge loss — both significantly underperformed the MCE formulation and even fell below the baseline.
  • Keeping the TST head in Phase 2 — not yet benchmarked but identified as future work; don't assume it helps.

Bottom line: The simplest version works best — mean embeddings in, mean CE loss out, a hard switch at the phase boundary, no extra parameters.

07 / Results

Key Results & When to Use TST

At equal wall-clock — same compute, better loss:

Scale          B200-hrs   TST Loss   Baseline Loss
3B dense       247        2.676      2.808
10B-A1B MoE    4,768      2.236      2.252 (@ 12,311 hrs)

At equal final loss — wall-clock saved:

Scale          TST (B200-hrs)   Baseline (B200-hrs)   Speedup
3B dense       247              443                   ~1.8×
10B-A1B MoE    4,768            12,311                ~2.5×

Use TST when
✓ You are compute-bound
✓ You have abundant data
✓ You want lower loss at the same FLOPs
✓ You need the same inference model

Avoid TST when
✕ Data is the bottleneck (TST uses s× more tokens in Phase 1)
✕ You compare at equal token consumption
✕ Under equal-data conditions, the baseline wins

Paper: arXiv 2605.06546  •  nousresearch.com/token-superposition

Key Takeaways

  • Nous Research's Token Superposition Training (TST) cuts LLM pre-training time by up to 2.5x at matched FLOPs — no architecture, tokenizer, or optimizer changes required.
  • Phase 1 averages contiguous token embeddings into bags and predicts the next bag via multi-hot cross-entropy; Phase 2 reverts to standard next-token prediction from the same checkpoint.
  • Validated at 270M, 600M, 3B dense, and 10B-A1B MoE — TST beats the baseline on loss and downstream evals (HellaSwag, ARC, MMLU) across all scales.
  • Optimal hyperparameters: bag size s ∈ [3–8] for smaller models and step ratio r ∈ [0.2, 0.4]; shared embeddings across both phases are essential — re-initializing them makes TST worse than the baseline.
  • Trade-off: TST consumes more raw data tokens per compute budget — best suited to compute-bound training; the output-only variant is the option for data-bound settings.
