Today, we’re announcing structured outputs on Amazon Bedrock—a capability that fundamentally changes how you obtain validated JSON responses from foundation models through constrained decoding for schema compliance.
This represents a paradigm shift in AI application development. Instead of validating JSON responses and writing fallback logic for when they fail, you can move straight to building with the data. With structured outputs, you can build zero-validation data pipelines that trust model outputs, reliable agentic systems that confidently call external functions, and simplified application architectures without retry logic.
In this post, we explore the challenges of traditional JSON generation and how structured outputs solves them. We cover the two core mechanisms—JSON Schema output format and strict tool use—along with implementation details, best practices, and practical code examples. Whether you’re building data extraction pipelines, agentic workflows, or AI-powered APIs, you’ll learn how to use structured outputs to create reliable, production-ready applications. Our companion Jupyter notebook provides hands-on examples for every feature covered here.
The problem with traditional JSON generation
For years, getting structured data from language models meant crafting detailed prompts, hoping for the best, and building elaborate error-handling systems. Even with careful prompting, developers routinely encounter:
- Parsing failures: Invalid JSON syntax that breaks `json.loads()` calls
- Missing fields: Required data points absent from responses
- Type mismatches: Strings where integers are expected, breaking downstream processing
- Schema violations: Responses that technically parse but don’t match your data model
In production systems, these failures compound. A single malformed response can cascade through your pipeline, requiring retries that increase latency and costs. For agentic workflows where models call tools, invalid parameters can break function calls entirely.
Consider a booking system requiring `passengers: int`. Without schema enforcement, the model might return `passengers: "two"` or `passengers: "2"`—syntactically valid JSON, but semantically wrong for your function signature.
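As a minimal sketch (the surrounding field names are invented for illustration), a schema fragment like the following rules out both failure modes by declaring the field an integer and rejecting extra properties:

```json
{
  "type": "object",
  "properties": {
    "passengers": {
      "type": "integer",
      "description": "Number of passengers on the booking"
    }
  },
  "required": ["passengers"],
  "additionalProperties": false
}
```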
What changes with structured outputs
Structured outputs on Amazon Bedrock isn’t an incremental improvement—it’s a fundamental shift from probabilistic to deterministic output formatting. Through constrained decoding, Amazon Bedrock constrains model responses to conform to your specified JSON schema. Two complementary mechanisms are available:
| Feature | Purpose | Use case |
|---|---|---|
| JSON Schema output format | Control the model’s response format | Data extraction, report generation, API responses |
| Strict tool use | Validate tool parameters | Agentic workflows, function calling, multi-step automation |
These features can be used independently or together, giving you precise control over both what the model outputs and how it calls your functions.
What structured outputs delivers:
- Always valid: No more `JSON.parse()` errors or parsing exceptions
- Type safe: Field types are enforced and required fields are always present
- Reliable: No retries needed for schema violations
- Production ready: Deploy with confidence at enterprise scale
How structured outputs works
Structured outputs uses constrained sampling with compiled grammar artifacts. Here’s what happens when you make a request:
- Schema validation: Amazon Bedrock validates your JSON schema against the supported JSON Schema Draft 2020-12 subset
- Grammar compilation: For new schemas, Amazon Bedrock compiles a grammar (the first request might take longer)
- Caching: Compiled grammars are cached for 24 hours, making subsequent requests faster
- Constrained generation: The model generates tokens that produce valid JSON matching your schema
Performance considerations:
- First request latency: Initial compilation might add latency for new schemas
- Cached performance: Subsequent requests with identical schemas have minimal overhead
- Cache scope: Grammars are cached per account for 24 hours from first access
Changing the JSON schema structure or a tool’s input schema invalidates the cache, but changing only name or description fields doesn’t.
Getting started with structured outputs
The following example demonstrates structured outputs with the Converse API:
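The original code listing is not reproduced here, so the following is a minimal Python sketch, assuming the Converse request fields named in the API comparison table later in this post (`outputConfig.textFormat`, with the schema passed as a JSON string under `jsonSchema.schema`); the schema contents and model ID are illustrative:

```python
import json

# Illustrative schema; the field names are assumptions for this example.
BOOKING_SCHEMA = {
    "type": "object",
    "properties": {
        "destination": {"type": "string", "description": "Destination city"},
        "passengers": {"type": "integer", "description": "Number of passengers"},
    },
    "required": ["destination", "passengers"],
    "additionalProperties": False,
}

def build_converse_request(model_id: str, user_text: str) -> dict:
    """Assemble Converse API keyword arguments with structured outputs enabled.

    The outputConfig field names follow the API comparison table in this post;
    the schema is serialized to a JSON string under jsonSchema.schema.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "outputConfig": {
            "textFormat": {"jsonSchema": {"schema": json.dumps(BOOKING_SCHEMA)}}
        },
    }

# With AWS credentials configured, the request would be sent with boto3:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**build_converse_request(model_id, "Book 2 seats to Lisbon"))
```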
The response conforms to your schema—no additional validation required.
Requirements and best practices
To use structured outputs effectively, follow these guidelines:
- Set `additionalProperties: false` on all objects. This is required for structured outputs to work. Without it, your schema won’t be accepted.
- Use descriptive field names and descriptions. Models use property names and descriptions to understand what data to extract. Clear names like `customer_email` outperform generic names like `field1`.
- Use `enum` for constrained values. When a field has a limited set of valid values, use `enum` to constrain choices. This improves accuracy and produces valid values.
- Start basic, then add complexity. Begin with the minimum required fields and add complexity incrementally. Basic schemas compile faster and are easier to maintain.
- Reuse schemas to benefit from caching. Structure your application to reuse schemas across requests. The 24-hour grammar cache significantly improves performance for repeated queries.
- Check `stopReason` in every response. Two scenarios can produce non-conforming responses: refusals (when the model declines for safety reasons) and token limits (when `max_tokens` is reached before completing). Handle both cases in your code.
- Test with realistic data before deployment. Validate your schemas against production-representative inputs. Edge cases in real data often reveal schema design issues.
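The `stopReason` check above can be sketched as a small Python helper. This is a hedged sketch: the response layout follows the Converse API’s usual shape, and treating anything other than a normal completion as a refusal is an assumption here:

```python
import json

def parse_structured_response(response: dict) -> dict:
    """Return the schema-conforming payload, or raise on the two cases the
    guidelines call out: token-limit truncation and refusals."""
    stop_reason = response.get("stopReason")
    if stop_reason == "max_tokens":
        raise RuntimeError("Output truncated before completion; raise maxTokens")
    if stop_reason not in ("end_turn", "tool_use"):
        # e.g. a safety refusal; the exact stopReason values are assumptions
        raise RuntimeError(f"Non-conforming response (stopReason={stop_reason})")
    text = response["output"]["message"]["content"][0]["text"]
    return json.loads(text)
```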
Supported JSON Schema features:
- All basic types: `object`, `array`, `string`, `integer`, `number`, `boolean`, `null`
- `enum` (strings, numbers, booleans, or nulls only)
- `const`, `anyOf`, `allOf` (with limitations)
- `$ref`, `$defs`, and `definitions` (internal references only)
- String formats: `date-time`, `time`, `date`, `duration`, `email`, `hostname`, `uri`, `ipv4`, `ipv6`, `uuid`
- Array `minItems` (only values 0 and 1)
Not supported:
- Recursive schemas
- External `$ref` references
- Numerical constraints (`minimum`, `maximum`, `multipleOf`)
- String constraints (`minLength`, `maxLength`)
- `additionalProperties` set to anything other than `false`
Strict tool use for agentic workflows
When building applications where models call tools, set `strict: true` in your tool definition to constrain tool parameters to match your input schema exactly:
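The original listing is not reproduced here; the following is a minimal Python sketch of a Converse tool definition with strict tool use enabled. The `toolSpec.strict` placement follows the API comparison table in this post, and the weather tool itself is illustrative:

```python
# Hedged sketch of a Converse toolSpec with strict tool use enabled.
weather_tool = {
    "toolSpec": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "strict": True,  # constrain parameters to the input schema exactly
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location", "unit"],
                "additionalProperties": False,
            }
        },
    }
}

# Passed to the Converse API via the toolConfig parameter:
# client.converse(modelId=..., messages=..., toolConfig={"tools": [weather_tool]})
```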
With `strict: true`, structured outputs constrains the output so that:
- The `location` field is always a string
- The `unit` field is always either `celsius` or `fahrenheit`
- No unexpected fields appear in the input
Practical applications across industries
The notebook demonstrates use cases that span industries:
- Financial services: Extract structured data from earnings reports, loan applications, and compliance documents. With structured outputs, every required field is present and correctly typed for downstream processing.
- Healthcare: Parse clinical notes into structured, schema-compliant records. Extract patient information, diagnoses, and treatment plans into validated JSON for EHR integration.
- Ecommerce: Build reliable product catalog enrichment pipelines. Extract specifications, categories, and attributes from product descriptions with consistent, reliable results.
- Legal: Analyze contracts and extract key terms, parties, dates, and obligations into structured formats suitable for contract management systems.
- Customer service: Build intelligent ticket routing and response systems where extracted intents, sentiments, and entities match your application’s data model.
Choosing the right approach
Our testing revealed clear patterns for when to use each feature:
Use JSON Schema output format when:
- You need the model’s response in a specific structure
- Building data extraction pipelines
- Generating API-ready responses
- Creating structured reports or summaries
Use strict tool use when:
- Building agentic systems that call external functions
- Implementing multi-step workflows with tool chains
- Requiring validated parameter types for function calls
- Connecting AI to databases, APIs, or external services
Use both together when:
- Building complex agents that need validated tool calls and structured final responses
- Creating systems where intermediate tool results feed into structured outputs
- Implementing enterprise workflows requiring end-to-end schema compliance
API comparison: Converse compared with InvokeModel
Both the Converse API and the InvokeModel API support structured outputs, with slightly different parameter formats:
| Aspect | Converse API | InvokeModel (Anthropic Claude) | InvokeModel (open-weight models) |
|---|---|---|---|
| Schema location | `outputConfig.textFormat` | `output_config.format` | `response_format` |
| Tool strict flag | `toolSpec.strict` | `tools[].strict` | `tools[].function.strict` |
| Schema format | JSON string in `jsonSchema.schema` | JSON object in `schema` | JSON object in `json_schema.schema` |
| Best for | Conversational workflows | Single-turn inference (Claude) | Single-turn inference (open-weight) |
Note: The InvokeModel API uses different request field names depending on the model type. For Anthropic Claude models, use `output_config.format` for JSON schema outputs. For open-weight models, use `response_format` instead.
Choose the Converse API for multi-turn conversations and the InvokeModel API when you need direct model access with provider-specific request formats.
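As a hedged illustration of the difference, here are sketches of the two InvokeModel body shapes implied by the table above. Only the top-level field names (`output_config.format` vs. `response_format`) come from this post; the surrounding fields and nested structure follow each provider's usual request format and are assumptions here:

```python
import json

# Illustrative schema used by both request bodies below.
SCHEMA = {
    "type": "object",
    "properties": {"summary": {"type": "string"}},
    "required": ["summary"],
    "additionalProperties": False,
}

# Anthropic Claude via InvokeModel: schema under output_config.format
# (nested structure is an assumption for this sketch).
claude_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{"role": "user", "content": "Summarize this release note."}],
    "output_config": {"format": {"type": "json_schema", "schema": SCHEMA}},
}

# Open-weight model via InvokeModel: schema under response_format
# (nested structure is likewise an assumption).
open_weight_body = {
    "messages": [{"role": "user", "content": "Summarize this release note."}],
    "response_format": {"json_schema": {"schema": SCHEMA}},
}

# Either body would be serialized and sent as:
# client.invoke_model(modelId=..., body=json.dumps(claude_body))
```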
Supported models and availability
Structured outputs is generally available in all commercial AWS Regions for select Amazon Bedrock model providers:
- Anthropic
- DeepSeek
- MiniMax
- Mistral AI
- Moonshot AI
- NVIDIA
- OpenAI
- Qwen
The feature works seamlessly with:
- Cross-Region inference: Use structured outputs across AWS Regions without additional setup
- Batch inference: Process large volumes with schema-compliant outputs
- Streaming: Stream structured responses with `ConverseStream` or `InvokeModelWithResponseStream`
Conclusion
In this post, you discovered how structured outputs on Amazon Bedrock reduces the uncertainty of AI-generated JSON through validated, schema-compliant responses. By using JSON Schema output format and strict tool use, you can build reliable data extraction pipelines, robust agentic workflows, and production-ready AI applications—without custom parsing or validation logic. Whether you’re extracting data from documents, building intelligent automation, or creating AI-powered APIs, structured outputs delivers the reliability your applications demand.
Structured outputs is now generally available on Amazon Bedrock. To use structured outputs with the Converse APIs, update to the latest AWS SDK. To learn more, see the Amazon Bedrock documentation and explore our sample notebook.
What workflows could validated, schema-compliant JSON unlock for your organization? The notebook provides everything you need to find out.
About the authors
