Sunday, February 8, 2026

Structured outputs on Amazon Bedrock: Schema-compliant AI responses


Today, we’re announcing structured outputs on Amazon Bedrock, a capability that fundamentally changes how you obtain validated JSON responses from foundation models, using constrained decoding to enforce schema compliance.

This represents a paradigm shift in AI application development. Instead of validating JSON responses and writing fallback logic for when they fail, you can move straight to building with the data. With structured outputs, you can build zero-validation data pipelines that trust model outputs, reliable agentic systems that confidently call external functions, and simplified application architectures without retry logic.

In this post, we explore the challenges of traditional JSON generation and how structured outputs solves them. We cover the two core mechanisms, JSON Schema output format and strict tool use, along with implementation details, best practices, and practical code examples. Whether you’re building data extraction pipelines, agentic workflows, or AI-powered APIs, you’ll learn how to use structured outputs to create reliable, production-ready applications. Our companion Jupyter notebook provides hands-on examples for every feature covered here.

The problem with traditional JSON generation

For years, getting structured data from language models meant crafting detailed prompts, hoping for the best, and building elaborate error-handling systems. Even with careful prompting, developers routinely encounter:

  • Parsing failures: Invalid JSON syntax that breaks json.loads() calls
  • Missing fields: Required data points absent from responses
  • Type mismatches: Strings where integers are expected, breaking downstream processing
  • Schema violations: Responses that technically parse but don’t match your data model

In production systems, these failures compound. A single malformed response can cascade through your pipeline, requiring retries that increase latency and costs. For agentic workflows where models call tools, invalid parameters can break function calls entirely.

Consider a booking system requiring passengers: int. Without schema enforcement, the model might return passengers: "two" or passengers: "2", which is syntactically valid JSON but semantically wrong for your function signature.
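Without schema enforcement, every consumer of the model’s output ends up wrapping it in defensive parsing. A minimal sketch of that pattern (the payload and fallback are hypothetical):

import json

raw = '{"passengers": "two"}'  # syntactically valid JSON, semantically wrong

try:
    data = json.loads(raw)                # may raise JSONDecodeError
    passengers = int(data["passengers"])  # raises ValueError on "two"
except (json.JSONDecodeError, KeyError, ValueError):
    passengers = None  # fall back, retry, or surface an error upstream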

What changes with structured outputs

Structured outputs on Amazon Bedrock isn’t an incremental improvement; it’s a fundamental shift from probabilistic to deterministic output formatting. Through constrained decoding, Amazon Bedrock constrains model responses to conform to your specified JSON schema. Two complementary mechanisms are available:

Feature                    | Purpose                              | Use case
JSON Schema output format  | Control the model’s response format  | Data extraction, report generation, API responses
Strict tool use            | Validate tool parameters             | Agentic workflows, function calling, multi-step automation

These features can be used independently or together, giving you precise control over both what the model outputs and how it calls your functions.

What structured outputs delivers:

  • Always valid: No more JSON.parse() errors or parsing exceptions
  • Type safe: Field types are enforced and required fields are always present
  • Reliable: No retries needed for schema violations
  • Production ready: Deploy with confidence at enterprise scale

How structured outputs works

Structured outputs uses constrained sampling with compiled grammar artifacts. Here’s what happens when you make a request:

  1. Schema validation: Amazon Bedrock validates your JSON schema against the supported JSON Schema Draft 2020-12 subset
  2. Grammar compilation: For new schemas, Amazon Bedrock compiles a grammar (the first request might take longer)
  3. Caching: Compiled grammars are cached for 24 hours, making subsequent requests faster
  4. Constrained generation: The model generates tokens that produce valid JSON matching your schema

Performance considerations:

  • First request latency: Initial compilation might add latency for new schemas
  • Cached performance: Subsequent requests with identical schemas have minimal overhead
  • Cache scope: Grammars are cached per account for 24 hours from first access

Changing the JSON schema structure or a tool’s input schema invalidates the cache, but changing only name or description fields doesn’t.
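You can observe this compile-then-cache behavior by timing two identical requests. A rough sketch, assuming the request shape introduced in the next section (timings will vary by model and schema complexity):

import time
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

schema = {
    "type": "object",
    "properties": {"greeting": {"type": "string"}},
    "required": ["greeting"],
    "additionalProperties": False,
}
output_config = {
    "textFormat": {
        "type": "json_schema",
        "structure": {"jsonSchema": {"schema": json.dumps(schema), "name": "greeting"}},
    }
}

# The first call with a new schema pays grammar compilation;
# an identical second call should hit the 24-hour cache.
for label in ("first request (compiles grammar)", "second request (cached)"):
    start = time.perf_counter()
    bedrock_runtime.converse(
        modelId="us.anthropic.claude-opus-4-5-20251101-v1:0",
        messages=[{"role": "user", "content": [{"text": "Greet me."}]}],
        inferenceConfig={"maxTokens": 256},
        outputConfig=output_config,
    )
    print(f"{label}: {time.perf_counter() - start:.2f}s")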

Getting started with structured outputs

The following example demonstrates structured outputs with the Converse API:

import boto3
import json
# Initialize the Bedrock Runtime client
bedrock_runtime = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"  # Choose your preferred Region
)
# Define your JSON schema
extraction_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "Customer name"},
        "email": {"type": "string", "description": "Customer email address"},
        "plan_interest": {"type": "string", "description": "Product plan of interest"},
        "demo_requested": {"type": "boolean", "description": "Whether a demo was requested"}
    },
    "required": ["name", "email", "plan_interest", "demo_requested"],
    "additionalProperties": False
}
# Make the request with structured outputs
response = bedrock_runtime.converse(
    modelId="us.anthropic.claude-opus-4-5-20251101-v1:0",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "text": "Extract the key information from this email: John Smith (john@example.com) is interested in our Enterprise plan and wants to schedule a demo for next Tuesday at 2pm."
                }
            ]
        }
    ],
    inferenceConfig={
        "maxTokens": 1024
    },
    outputConfig={
        "textFormat": {
            "type": "json_schema",
            "structure": {
                "jsonSchema": {
                    "schema": json.dumps(extraction_schema),
                    "name": "lead_extraction",
                    "description": "Extract lead information from customer emails"
                }
            }
        }
    }
)
# Parse the schema-compliant JSON response
result = json.loads(response["output"]["message"]["content"][0]["text"])
print(json.dumps(result, indent=2))

Output:

{
  "name": "John Smith",
  "email": "john@example.com",
  "plan_interest": "Enterprise",
  "demo_requested": true
}

The response conforms to your schema, with no additional validation required.
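Because the fields and types are guaranteed, you can hand the parsed result straight to typed application code. A small sketch continuing from the example above (the Lead dataclass is illustrative):

from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    email: str
    plan_interest: str
    demo_requested: bool

# Safe to unpack directly: the schema guarantees these exact fields and types
lead = Lead(**result)
print(lead.email)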

Requirements and best practices

To use structured outputs effectively, follow these guidelines:

  • Set additionalProperties: false on all objects. This is required for structured outputs to work. Without it, your schema won’t be accepted.
{
  "type": "object",
  "properties": {
    "name": {"type": "string"}
  },
  "required": ["name"],
  "additionalProperties": false
}

  • Use descriptive field names and descriptions. Models use property names and descriptions to understand what data to extract. Clear names like customer_email outperform generic names like field1.
  • Use enum for constrained values. When a field has a limited set of valid values, use enum to constrain choices. This improves accuracy and produces valid values (see the sketch after this list).
  • Start basic, then add complexity. Begin with the minimum required fields and add complexity incrementally. Basic schemas compile faster and are easier to maintain.
  • Reuse schemas to benefit from caching. Structure your application to reuse schemas across requests. The 24-hour grammar cache significantly improves performance for repeated queries.
  • Check stopReason in every response. Two scenarios can produce non-conforming responses: refusals (when the model declines for safety reasons) and token limits (when max_tokens is reached before completion). Handle both cases in your code; the sketch after this list shows one approach.
  • Test with realistic data before deployment. Validate your schemas against production-representative inputs. Edge cases in real data often reveal schema design issues.
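The following sketch ties two of these guidelines together: an enum-constrained field, and a stopReason check before parsing. The schema fields and helper are illustrative, and the exact stopReason values should be confirmed against the documentation:

import json

ticket_schema = {
    "type": "object",
    "properties": {
        "priority": {
            "type": "string",
            "enum": ["low", "medium", "high"],  # constrained to valid values
            "description": "Ticket priority level",
        }
    },
    "required": ["priority"],
    "additionalProperties": False,
}

def parse_structured(response):
    """Parse a Converse response, handling the two non-conforming cases."""
    stop_reason = response["stopReason"]
    if stop_reason == "max_tokens":
        # Output was truncated before the JSON could be completed
        raise RuntimeError("Truncated output; increase maxTokens and retry")
    if stop_reason != "end_turn":
        # For example, a safety refusal; handle per your application's policy
        raise RuntimeError(f"Non-conforming response: {stop_reason}")
    return json.loads(response["output"]["message"]["content"][0]["text"])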

Supported JSON Schema features:

  • All basic types: object, array, string, integer, number, boolean, null
  • enum (strings, numbers, booleans, or nulls only)
  • const, anyOf, allOf (with limitations)
  • $ref, $defs, and definitions (internal references only; see the sketch after these lists)
  • String formats: date-time, time, date, duration, email, hostname, uri, ipv4, ipv6, uuid
  • Array minItems (only values 0 and 1)

Not supported:

  • Recursive schemas
  • External $ref references
  • Numerical constraints (minimum, maximum, multipleOf)
  • String constraints (minLength, maxLength)
  • additionalProperties set to anything other than false
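For example, a schema that factors out a shared shape through an internal $defs reference stays within the supported subset, whereas making that reference recursive would not (the field names are illustrative):

# Internal references are supported; recursive references are not.
address_schema = {
    "type": "object",
    "properties": {
        "billing": {"$ref": "#/$defs/address"},
        "shipping": {"$ref": "#/$defs/address"},
    },
    "required": ["billing", "shipping"],
    "additionalProperties": False,
    "$defs": {
        "address": {
            "type": "object",
            "properties": {
                "street": {"type": "string"},
                "city": {"type": "string"},
            },
            "required": ["street", "city"],
            "additionalProperties": False,
        }
    },
}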

Strict tool use for agentic workflows

When building applications where models call tools, set strict: true in your tool definition to constrain tool parameters to match your input schema exactly:

import boto3
import json
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock_runtime.converse(
    modelId="us.anthropic.claude-opus-4-5-20251101-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "What's the weather like in San Francisco?"}]
        }
    ],
    inferenceConfig={"maxTokens": 1024},
    toolConfig={
        "instruments": [
            {
                "toolSpec": {
                    "name": "get_weather",
                    "description": "Get the current weather for a specified location",
                    "strict": True,  # Enable strict mode
                    "inputSchema": {
                        "json": {
                            "type": "object",
                            "properties": {
                                "location": {
                                    "type": "string",
                                    "description": "The city and state, e.g., San Francisco, CA"
                                },
                                "unit": {
                                    "type": "string",
                                    "enum": ["celsius", "fahrenheit"],
                                    "description": "Temperature unit"
                                }
                            },
                            "required": ["location", "unit"],
                            "additionalProperties": False
                        }
                    }
                }
            }
        ]
    }
)
# Tool inputs conform to the schema
for content_block in response["output"]["message"]["content"]:
    if "toolUse" in content_block:
        tool_input = content_block["toolUse"]["input"]
        print(f"Tool: {content_block['toolUse']['name']}")
        print(f"Input: {json.dumps(tool_input, indent=2)}")

With strict: true, structured outputs constrains the output so that:

  • The location field is always a string
  • The unit field is always either celsius or fahrenheit
  • No unexpected fields appear in the input, so your handler can use the arguments directly (see the sketch below)
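Because strict mode guarantees the input shape, the tool handler can unpack the arguments without a validation layer. A minimal continuation of the example above (get_weather is a hypothetical stub):

# Hypothetical local implementation; strict mode guarantees both
# parameters are present and correctly typed.
def get_weather(location: str, unit: str) -> str:
    return f"Sunny and 18 degrees {unit} in {location}"  # stub

for content_block in response["output"]["message"]["content"]:
    if "toolUse" in content_block:
        # No validation needed before the call
        print(get_weather(**content_block["toolUse"]["input"]))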

Practical applications across industries

The notebook demonstrates use cases that span industries:

  • Financial services: Extract structured data from earnings reports, loan applications, and compliance documents. With structured outputs, every required field is present and correctly typed for downstream processing.
  • Healthcare: Parse medical notes into structured, schema-compliant records. Extract patient information, diagnoses, and treatment plans into validated JSON for EHR integration.
  • Ecommerce: Build reliable product catalog enrichment pipelines. Extract specifications, categories, and attributes from product descriptions with consistent, reliable results.
  • Legal: Analyze contracts and extract key terms, parties, dates, and obligations into structured formats suitable for contract management systems.
  • Customer service: Build intelligent ticket routing and response systems where extracted intents, sentiments, and entities match your application’s data model.

Choosing the right approach

Our testing revealed clear patterns for when to use each feature:

Use JSON Schema output format when:

  • You need the model’s response in a specific structure
  • Building data extraction pipelines
  • Generating API-ready responses
  • Creating structured reports or summaries

Use strict tool use when:

  • Building agentic systems that call external functions
  • Implementing multi-step workflows with tool chains
  • Requiring validated parameter types for function calls
  • Connecting AI to databases, APIs, or external services

Use both together when:

  • Building complex agents that need validated tool calls and structured final responses
  • Creating systems where intermediate tool results feed into structured outputs
  • Implementing business workflows requiring end-to-end schema compliance (a combined sketch follows this list)
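A sketch of a single Converse request that combines both mechanisms, reusing the client and imports from the earlier examples; weather_tool stands for the strict toolSpec dictionary defined earlier, and answer_schema is illustrative:

answer_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string", "description": "Final answer for the user"}
    },
    "required": ["summary"],
    "additionalProperties": False,
}

response = bedrock_runtime.converse(
    modelId="us.anthropic.claude-opus-4-5-20251101-v1:0",
    messages=[{"role": "user", "content": [{"text": "What's the weather like in San Francisco?"}]}],
    inferenceConfig={"maxTokens": 1024},
    toolConfig={"tools": [weather_tool]},  # the toolSpec with "strict": True from above
    outputConfig={
        "textFormat": {
            "type": "json_schema",
            "structure": {
                "jsonSchema": {
                    "schema": json.dumps(answer_schema),
                    "name": "final_answer",
                    "description": "Structured final response",
                }
            }
        }
    },
)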

API comparison: Converse versus InvokeModel

Both the Converse API and InvokeModel API support structured outputs, with slightly different parameter formats:

Aspect            | Converse API                      | InvokeModel (Anthropic Claude)  | InvokeModel (open-weight models)
Schema location   | outputConfig.textFormat           | output_config.format            | response_format
Tool strict flag  | toolSpec.strict                   | tools[].strict                  | tools[].function.strict
Schema format     | JSON string in jsonSchema.schema  | JSON object in schema           | JSON object in json_schema.schema
Best for          | Conversational workflows          | Single-turn inference (Claude)  | Single-turn inference (open-weight)

Note: The InvokeModel API uses different request field names depending on the model type. For Anthropic Claude models, use output_config.format for JSON schema outputs. For open-weight models, use response_format instead.
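As a hedged illustration of the Claude-specific shape, an InvokeModel request might look like the following. The body fields mirror the table above; confirm the exact request shape against the Amazon Bedrock documentation:

import boto3
import json

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Extract the lead's name as JSON."}],
    # Per the table above: the schema goes under output_config.format as a JSON object
    "output_config": {
        "format": {
            "type": "json_schema",
            "schema": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
                "additionalProperties": False,
            },
        }
    },
}

response = bedrock_runtime.invoke_model(
    modelId="us.anthropic.claude-opus-4-5-20251101-v1:0",
    body=json.dumps(body),
)
print(json.loads(response["body"].read()))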

Choose the Converse API for multi-turn conversations and the InvokeModel API when you need direct model access with provider-specific request formats.

Supported models and availability

Structured outputs is generally available in all commercial AWS Regions for select Amazon Bedrock model providers:

  • Anthropic
  • DeepSeek
  • Google
  • MiniMax
  • Mistral AI
  • Moonshot AI
  • NVIDIA
  • OpenAI
  • Qwen

The feature works seamlessly with:

  • Cross-Region inference: Use structured outputs across AWS Regions without additional setup
  • Batch inference: Process large volumes with schema-compliant outputs
  • Streaming: Stream structured responses with ConverseStream or InvokeModelWithResponseStream, as the sketch below shows
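A minimal streaming sketch, assuming ConverseStream accepts the same outputConfig block as Converse; output_config here stands for the schema configuration from the earlier examples:

# Accumulate streamed text deltas, then parse once the stream completes
stream = bedrock_runtime.converse_stream(
    modelId="us.anthropic.claude-opus-4-5-20251101-v1:0",
    messages=[{"role": "user", "content": [{"text": "Extract the lead details."}]}],
    inferenceConfig={"maxTokens": 1024},
    outputConfig=output_config,
)

chunks = []
for event in stream["stream"]:
    if "contentBlockDelta" in event:
        chunks.append(event["contentBlockDelta"]["delta"].get("text", ""))

result = json.loads("".join(chunks))  # valid JSON only after the stream ends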

Conclusion

In this post, you discovered how structured outputs on Amazon Bedrock reduces the uncertainty of AI-generated JSON through validated, schema-compliant responses. By using JSON Schema output format and strict tool use, you can build reliable data extraction pipelines, robust agentic workflows, and production-ready AI applications, without custom parsing or validation logic. Whether you’re extracting data from documents, building intelligent automation, or creating AI-powered APIs, structured outputs delivers the reliability your applications demand.

Structured outputs is now generally available on Amazon Bedrock. To use structured outputs with the Converse APIs, update to the latest AWS SDK. To learn more, see the Amazon Bedrock documentation and explore our sample notebook.

What workflows could validated, schema-compliant JSON unlock in your organization? The notebook provides everything you need to find out.


About the authors

Jeffrey Zeng

Jeffrey Zeng is a Worldwide Specialist Solutions Architect for Generative AI at AWS, leading third-party models on Amazon Bedrock. He focuses on agentic coding and workflows, with hands-on experience helping customers build and deploy AI solutions from proof of concept to production.

Jonathan Evans

Jonathan Evans is a Worldwide Solutions Architect for Generative AI at AWS, where he helps customers use cutting-edge AI technologies with Anthropic Claude models on Amazon Bedrock to solve complex business challenges. With a background in AI/ML engineering and hands-on experience supporting machine learning workflows in the cloud, Jonathan is passionate about making advanced AI accessible and impactful for organizations of all sizes.
