Tuesday, February 17, 2026

Asynchronous Verified Semantic Caching for Tiered LLM Architectures


Large language models (LLMs) now sit in the critical path of search, support, and agentic workflows, making semantic caching essential for reducing inference cost and latency. Production deployments typically use a tiered static-dynamic design: a static cache of curated, offline-vetted responses mined from logs, backed by a dynamic cache populated online. In practice, both tiers are commonly governed by a single embedding-similarity threshold, which induces a hard tradeoff: conservative thresholds miss safe reuse opportunities, while aggressive thresholds risk serving semantically incorrect responses. We introduce Krites, an asynchronous, LLM-judged caching policy that expands static coverage without altering serving decisions. On the critical path, Krites behaves exactly like a standard static-threshold policy. When the prompt's nearest static neighbor falls just below the static threshold, Krites asynchronously invokes an LLM judge to verify whether the static response is correct for the new prompt. Approved matches are promoted into the dynamic cache, allowing future repeats and paraphrases to reuse curated static answers and expanding static reach over time. In trace-driven simulations on conversational and search workloads, Krites increases the fraction of requests served with curated static answers (direct static hits plus verified promotions) by up to 3.9 times for conversational traffic and search-style queries relative to tuned baselines, with unchanged critical-path latency.
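The policy described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the threshold values, the `VERIFY_BAND` lower bound defining "just below" the static threshold, and the `judge` callable are all hypothetical stand-ins, and similarity is computed with plain cosine distance over precomputed embeddings.

```python
import math

STATIC_THRESHOLD = 0.90   # illustrative: serve the static answer directly at/above this
VERIFY_BAND = 0.80        # illustrative: "just below" band that triggers async verification

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class KritesCache:
    def __init__(self, static_entries, judge):
        self.static = static_entries   # list of (embedding, curated response)
        self.dynamic = {}              # prompt -> promoted response
        self.judge = judge             # callable(prompt, response) -> bool (LLM judge stub)
        self.pending = []              # queue of matches awaiting async verification

    def serve(self, prompt, embedding, fallback):
        # Dynamic hit: repeats/paraphrases promoted by an earlier verification.
        if prompt in self.dynamic:
            return self.dynamic[prompt]
        # Nearest static neighbor by embedding similarity.
        sim, response = max(
            ((cosine(embedding, e), r) for e, r in self.static),
            key=lambda t: t[0],
        )
        if sim >= STATIC_THRESHOLD:
            return response            # standard static hit; critical path unchanged
        if sim >= VERIFY_BAND:
            # Off the critical path: enqueue the judge, serve normally for now.
            self.pending.append((prompt, response))
        return fallback(prompt)        # normal dynamic/LLM generation path

    def run_verifier(self):
        # Asynchronous step: promote judge-approved matches into the dynamic cache.
        for prompt, response in self.pending:
            if self.judge(prompt, response):
                self.dynamic[prompt] = response
        self.pending.clear()
```

A borderline query (similarity inside the band) is served by the normal path on first sight, queued for the judge, and, once approved, answered from the dynamic cache with the curated static response on its next occurrence; queries below the band never involve the judge at all.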
