Thursday, March 19, 2026

LumberChunker: Long-Form Narrative Document Segmentation – Machine Learning Blog | ML@CMU


Links:
Paper | Code | Data

LumberChunker lets an LLM decide where a long story should be split, creating more natural chunks that help Retrieval Augmented Generation (RAG) systems retrieve the right information.

Introduction

Long-form narrative documents usually have an explicit structure, such as chapters or sections, but these units are often too broad for retrieval tasks. At a lower level, important semantic shifts happen within these larger segments without any visible structural break. When we split text solely by formatting cues, like paragraphs or fixed token windows, passages that belong to the same narrative unit may be separated, while unrelated content may be grouped together. This misalignment between structure and meaning produces chunks that contain incomplete or mixed context, which reduces retrieval quality and hurts downstream RAG performance. As a result, segmentation should aim to create chunks that are semantically independent, rather than relying solely on document structure.

So how can we preserve the story's flow and still keep chunking practical?

In many cases, a reader can easily recognize where the narrative begins to shift: when the text moves to a different scene, introduces a new entity, or changes its goal. The problem is that most automated chunking methods do not take this semantic signal into account and instead rely solely on surface structure. As a result, they may produce segmentations that look reasonable from a formatting perspective but break the underlying narrative coherence.

To make this concrete, consider the short passage below and identify the optimal chunking boundary!


LumberChunker: Segment 2 (Quiz)

(Interactive quiz: read the passage and choose the best chunking boundary.)


The LumberChunker Methodology

In the example above, Option C provides the most coherent segmentation. The boundary aligns with the point where the narrative becomes semantically independent from the preceding context.

Our goal is to make this kind of segmentation decision practical at scale. The challenge is that human-quality boundary detection requires understanding narrative context, which is expensive to apply across thousands of paragraphs in long-form documents.

LumberChunker approaches this by treating segmentation as a boundary-finding problem: given a short sequence of consecutive paragraphs, we ask a language model to identify the earliest point where the content clearly shifts. This formulation allows segments to vary in length while remaining aligned with the underlying narrative structure. In practice, LumberChunker consists of these steps:

1) Document Paragraph Extraction

Cleanly split the book into paragraphs and assign stable IDs (ID:1, ID:2, …). This preserves the document's natural discourse units and gives us safe candidate boundaries.

Example: From a novel, we extract:

ID:1 “The morning sun filtered through the dusty windows…”
ID:2 “She walked slowly to the door, hesitating…”
ID:3 “Meanwhile, across town, Detective Morrison reviewed the case files…”
ID:4 “The previous night’s events had left him puzzled…”

Each paragraph gets a unique ID for tracking boundaries.
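A minimal Python sketch of this extraction step (the blank-line regex split and the `extract_paragraphs` helper are illustrative assumptions, not the paper's implementation):

```python
import re

def extract_paragraphs(text: str) -> list[tuple[str, str]]:
    """Split raw book text into paragraphs and assign stable IDs (ID:1, ID:2, ...)."""
    # Assumes paragraphs are separated by one or more blank lines.
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    return [(f"ID:{i}", p) for i, p in enumerate(paragraphs, start=1)]

book = (
    "The morning sun filtered through the dusty windows.\n\n"
    "She walked slowly to the door, hesitating."
)
for pid, para in extract_paragraphs(book):
    print(pid, para)
```

The IDs act as the candidate boundaries that later steps choose between.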

2) Grouping IDs for the LLM

Build a group G_i by appending paragraphs until the group's length reaches a token budget θ. This provides enough context for the model to judge when a topic or scene truly shifts.

Example: With θ = 550 tokens, we might build:

G_1 = [ID:1, ID:2, ID:3, ID:4, ID:5, ID:6]

This window, by spanning multiple paragraphs, increases the chance that at least one meaningful narrative shift is present within the context.
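The grouping step can be sketched as follows; the whitespace-based `count_tokens` stand-in only approximates a real tokenizer, and `build_group` is an illustrative name rather than the paper's code:

```python
def build_group(paragraphs, start, theta=550,
                count_tokens=lambda s: len(s.split())):
    """Append consecutive (ID, text) paragraphs from `start` until the
    cumulative token count reaches the budget theta."""
    group, total = [], 0
    for pid, text in paragraphs[start:]:
        group.append((pid, text))
        total += count_tokens(text)
        if total >= theta:  # budget reached: the window is large enough
            break
    return group

# Six 10-word paragraphs with a budget of 35 tokens -> a 4-paragraph window.
paras = [(f"ID:{i}", " ".join(["word"] * 10)) for i in range(1, 7)]
print([pid for pid, _ in build_group(paras, 0, theta=35)])
# ['ID:1', 'ID:2', 'ID:3', 'ID:4']
```

In a real pipeline, the token count would come from the target model's own tokenizer so that θ matches its context budget.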

3) LLM Query

Prompt the model with the paragraphs in G_i and ask it to return the first paragraph where content clearly changes relative to what came before. Use that returned ID as the chunk boundary; start the next group at that paragraph and repeat to the end of the book.

Example: Given G_1 = [p1, p2, p3, p4, p5, p6], the LLM responds: p3

Answer Extraction:
We extract p3 as the boundary. This creates:

  • Chunk 1: [p1, p2]
  • Next group (G_2) starts at p3
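Putting the three steps together, the full loop might look like the sketch below. `query_llm` is a hypothetical callable standing in for the actual prompt in step 3, and the inline token-budget grouping mirrors step 2:

```python
def chunk_document(paragraphs, query_llm, theta=550,
                   count_tokens=lambda s: len(s.split())):
    """Iteratively ask the LLM for the earliest content shift and cut there.

    `query_llm(group)` must return the ID of the first paragraph in `group`
    whose content clearly shifts from what came before (hypothetical interface).
    """
    chunks, start = [], 0
    while start < len(paragraphs):
        # Step 2: build a group up to the token budget theta.
        group, total = [], 0
        for pid, text in paragraphs[start:]:
            group.append((pid, text))
            total += count_tokens(text)
            if total >= theta:
                break
        if len(group) == 1:          # last paragraph: nothing left to split
            chunks.append(group)
            break
        # Step 3: the returned boundary ID marks the start of the *next* chunk.
        boundary_id = query_llm(group)
        ids = [pid for pid, _ in group]
        cut = ids.index(boundary_id) if boundary_id in ids else len(group)
        cut = max(cut, 1)            # a boundary at the first paragraph cannot advance
        chunks.append(group[:cut])
        start += cut                 # next group begins at the boundary paragraph
    return chunks
```

With the four example paragraphs above and a stub `query_llm` that returns ID:3, this yields Chunk 1 = [ID:1, ID:2] and restarts the next group at ID:3, exactly as in the worked example.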

GutenQA: A Benchmark for Long-Form Narrative Retrieval

To evaluate our chunking approach, we introduce GutenQA, a benchmark of 100 carefully cleaned public-domain books paired with 3,000 needle-in-a-haystack style questions. This allows us to measure retrieval quality directly and then observe how better retrieval leads to more accurate answers in a RAG system.
