Wednesday, January 21, 2026

Introducing multimodal retrieval for Amazon Bedrock Knowledge Bases


We’re excited to announce the general availability of multimodal retrieval for Amazon Bedrock Knowledge Bases. This new capability adds native support for video and audio content, on top of text and images. With it, you can build Retrieval Augmented Generation (RAG) applications that search and retrieve information across text, images, audio, and video, all within a fully managed service.

Modern enterprises store valuable information in multiple formats. Product documentation includes diagrams and screenshots, training materials contain instructional videos, and customer insights are captured in recorded meetings. Until now, building artificial intelligence (AI) applications that could effectively search across these content types required complex custom infrastructure and significant engineering effort.

Previously, Bedrock Knowledge Bases used text-based embedding models for retrieval. While it supported text documents and images, images had to be processed using foundation models (FMs) or Bedrock Data Automation to generate text descriptions, a text-first approach that lost visual context and prevented visual search. Video and audio required custom preprocessing pipelines outside the service. Now, with multimodal embeddings, the retriever natively supports text, images, audio, and video within a single embedding model.

With multimodal retrieval in Bedrock Knowledge Bases, you can now ingest, index, and retrieve information from text, images, video, and audio using a single, unified workflow. Content is encoded using multimodal embeddings that preserve visual and audio context, enabling your applications to find relevant information across media types. You can even search using an image to find visually similar content or locate specific scenes in videos.

In this post, we guide you through building multimodal RAG applications. You’ll learn how multimodal knowledge bases work, how to choose the right processing strategy based on your content type, and how to configure and implement multimodal retrieval using both the console and code examples.

Understanding multimodal knowledge bases

Amazon Bedrock Knowledge Bases automates the entire RAG workflow: ingesting content from your data sources, parsing and chunking it into searchable segments, converting chunks to vector embeddings, and storing them in a vector database. During retrieval, user queries are embedded and matched against stored vectors to find semantically similar content, which augments the prompt sent to your foundation model.
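
To make this concrete, the following minimal sketch uses the AWS SDK for Python (Boto3) to call the RetrieveAndGenerate API, which runs the retrieval and prompt augmentation described above in a single request. The knowledge base ID and model ARN are placeholders to replace with your own values.

import boto3

# Placeholders: replace with your own knowledge base ID, model ARN, and Region
KNOWLEDGE_BASE_ID = "YOUR_KB_ID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-pro-v1:0"

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Retrieve relevant chunks and generate a grounded answer in one call
response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "Which phone covers come in a metallic finish?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KNOWLEDGE_BASE_ID,
            "modelArn": MODEL_ARN,
        },
    },
)

print(response["output"]["text"])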

With multimodal retrieval, this workflow now handles images, video, and audio alongside text through two processing approaches. Amazon Nova Multimodal Embeddings encodes content natively into a unified vector space, enabling cross-modal retrieval where you can query with text and retrieve videos, or search using images to find visual content.

Alternatively, Bedrock Data Automation converts multimedia into rich text descriptions and transcripts before embedding, providing high-accuracy retrieval over spoken content. Your choice depends on whether visual context or speech precision matters most for your use case.

We explore each of these approaches in this post.

Amazon Nova Multimodal Embeddings

Amazon Nova Multimodal Embeddings is the first unified embedding model that encodes text, documents, images, video, and audio into a single shared vector space. Content is processed natively without text conversion. The model supports up to 8,192 tokens for text and 30 seconds for video and audio segments, handles over 200 languages, and offers four embedding dimensions (3,072 as the default, plus 1,024, 384, and 256) to balance accuracy and efficiency. Bedrock Knowledge Bases automatically segments video and audio into configurable chunks (5-30 seconds), with each segment independently embedded.
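
If you want to experiment with the model directly, the following sketch invokes Nova Multimodal Embeddings through the Amazon Bedrock runtime to embed a short text query. The model identifier and request fields shown here are illustrative assumptions; check the Amazon Nova User Guide for the exact request schema before using this in an application.

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# NOTE: the request body and model ID below are assumptions for illustration only;
# verify both against the Amazon Nova User Guide.
body = {
    "taskType": "SINGLE_EMBEDDING",
    "singleEmbeddingParams": {
        "embeddingDimension": 3072,  # 1,024, 384, or 256 trade accuracy for storage and cost
        "text": {"truncationMode": "END", "value": "metallic rose gold phone cover"},
    },
}

response = bedrock_runtime.invoke_model(
    modelId="amazon.nova-2-multimodal-embeddings-v1:0",  # assumed identifier
    body=json.dumps(body),
)

# Inspect the response shape; the embedding vector is returned in this payload
result = json.loads(response["body"].read())
print(list(result.keys()))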

For video content, Nova embeddings capture visual elements such as scenes, objects, motion, and actions, as well as audio characteristics like music, sounds, and ambient noise. For videos where spoken dialogue is critical to your use case, you can use Bedrock Data Automation to extract transcripts alongside visual descriptions. For standalone audio files, Nova processes acoustic features such as music, environmental sounds, and audio patterns. The cross-modal capability enables use cases such as describing a visual scene in text to retrieve matching videos, uploading a reference image to find similar products, or locating specific actions in footage, all without pre-existing text descriptions.

Best for: Product catalogs, visual search, manufacturing videos, sports footage, security camera footage, and scenarios where visual content drives the use case.

Amazon Bedrock Data Automation

Bedrock Data Automation takes a different approach by converting multimedia content into rich textual representations before embedding. For images, it generates detailed descriptions covering objects, scenes, text within images, and spatial relationships. For video, it produces scene-by-scene summaries, identifies key visual elements, and extracts on-screen text. For audio and video with speech, Bedrock Data Automation provides accurate transcriptions with timestamps and speaker identification, together with segment summaries that capture the key points discussed.

Once converted to text, this content is chunked and embedded using text embedding models such as Amazon Titan Text Embeddings or Amazon Nova Multimodal Embeddings. This text-first approach enables highly accurate question answering over spoken content: when users ask about specific statements made in a meeting or topics discussed in a podcast, the system searches through precise transcripts rather than audio embeddings. This makes it particularly valuable for compliance scenarios where you need exact quotes and verbatim records for audit trails, meeting analysis, customer support call mining, and use cases where you need to retrieve and verify specific spoken information.

Best for: Meetings, webinars, interviews, podcasts, training videos, support calls, and scenarios requiring precise retrieval of specific statements or discussions.

Use case scenario: Visual product search for e-commerce

Multimodal knowledge bases can be used for applications ranging from enhanced customer experiences and employee training to maintenance operations and legal review. Traditional e-commerce search relies on text queries, requiring customers to articulate what they’re looking for with the right keywords. This breaks down when they’ve seen a product elsewhere, have a photo of something they like, or want to find items similar to what appears in a video.

Now, customers can search your product catalog using text descriptions, upload an image of an item they’ve photographed, or reference a scene from a video to find matching products. The system retrieves visually similar items by comparing the embedded representation of their query, whether text, image, or video, against the multimodal embeddings of your product inventory.

For this scenario, Amazon Nova Multimodal Embeddings is the best choice. Product discovery is fundamentally visual: customers care about colors, styles, shapes, and visual details. By encoding your product images and videos into the Nova unified vector space, the system matches on visual similarity without relying on text descriptions that might miss subtle visual characteristics. While a complete recommendation system would incorporate customer preferences, purchase history, and inventory availability, retrieval from a multimodal knowledge base provides the foundational capability: finding visually relevant products regardless of how customers choose to search.

Console walkthrough

In the following section, we walk through the high-level steps to set up and test a multimodal knowledge base for our e-commerce product search example. We create a knowledge base containing smartphone product images and videos, then demonstrate how customers can search using text descriptions, uploaded images, or video references. The GitHub repository provides a guided notebook that you can follow to deploy this example in your account.

Prerequisites

Before you get started, make sure that you have the following prerequisites:

Provide the knowledge base details and data source type

Start by opening the Amazon Bedrock console and creating a new knowledge base. Provide a descriptive name for your knowledge base and select your data source type, in this case Amazon S3, where your product images and videos are stored.

Configure the data source

Connect your S3 bucket containing product images and videos. For the parsing strategy, select the Amazon Bedrock default parser. Because we’re using Nova Multimodal Embeddings, the images and videos are processed natively and embedded directly into the unified vector space, preserving their visual characteristics without conversion to text.
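
If you prefer to script this step, the following sketch attaches an S3 data source to an existing knowledge base with the AWS SDK for Python (Boto3). The knowledge base ID and bucket ARN are placeholders, and the comment notes where a Bedrock Data Automation parsing strategy could be selected instead of the default parser.

import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Placeholders: with the API, the knowledge base must already exist before a data source is attached
KNOWLEDGE_BASE_ID = "YOUR_KB_ID"
BUCKET_ARN = "arn:aws:s3:::your-product-media-bucket"

response = bedrock_agent.create_data_source(
    knowledgeBaseId=KNOWLEDGE_BASE_ID,
    name="product-catalog-media",
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": BUCKET_ARN},
    },
    # Omitting vectorIngestionConfiguration keeps the Amazon Bedrock default parser.
    # To route multimedia through Bedrock Data Automation instead, supply a
    # parsingConfiguration here (see the Boto3 reference for the exact field names).
)

print(response["dataSource"]["dataSourceId"])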

Configure data storage and processing

Select Amazon Nova Multimodal Embeddings as your embedding model. This unified embedding model encodes both your product images and customer queries into the same vector space, enabling cross-modal retrieval where text queries can retrieve images and image queries can find visually similar products. For this example, we use Amazon S3 Vectors as the vector store (you can optionally use other available vector stores), which provides cost-effective and durable storage optimized for large-scale vector datasets while maintaining sub-second query performance. You also need to configure the multimodal storage destination by specifying an S3 location. Knowledge Bases uses this location to store images and other media extracted from your data source. When users query the knowledge base, relevant media is retrieved from this storage.
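
The equivalent API call is sketched below, assuming an existing IAM service role, an S3 Vectors index, and an S3 bucket for multimodal storage. The embedding model identifier and several field names are assumptions based on the current Bedrock API shape, so verify them against the Boto3 reference before use.

import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Placeholders: role ARN, embedding model ARN, and S3 Vectors index ARN are assumptions
ROLE_ARN = "arn:aws:iam::123456789012:role/BedrockKnowledgeBaseRole"
EMBEDDING_MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-2-multimodal-embeddings-v1:0"
S3_VECTORS_INDEX_ARN = "arn:aws:s3vectors:us-east-1:123456789012:bucket/your-vector-bucket/index/your-index"

response = bedrock_agent.create_knowledge_base(
    name="product-catalog-kb",
    roleArn=ROLE_ARN,
    knowledgeBaseConfiguration={
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            "embeddingModelArn": EMBEDDING_MODEL_ARN,
            # Multimodal storage destination for extracted images and media
            "supplementalDataStorageConfiguration": {
                "storageLocations": [
                    {"type": "S3", "s3Location": {"uri": "s3://your-multimodal-storage-bucket/"}}
                ]
            },
        },
    },
    storageConfiguration={
        "type": "S3_VECTORS",
        "s3VectorsConfiguration": {"indexArn": S3_VECTORS_INDEX_ARN},
    },
)

print(response["knowledgeBase"]["knowledgeBaseId"])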

Review and create

Review your configuration settings, including the knowledge base details, the data source configuration, the embedding model selection (we’re using Amazon Nova Multimodal Embeddings v1 with 3,072 vector dimensions; higher dimensions provide richer representations, while lower dimensions such as 1,024, 384, or 256 optimize for storage and cost), and the vector store setup (Amazon S3 Vectors). Once everything looks correct, create your knowledge base.

Create an ingestion job

Once created, initiate the sync process to ingest your product catalog. The knowledge base processes each image and video, generates embeddings, and stores them in the managed vector database. Monitor the sync status to confirm the documents are successfully indexed.
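
Programmatically, the same sync can be started and monitored with the StartIngestionJob and GetIngestionJob APIs, as in this minimal sketch (the knowledge base and data source IDs are placeholders).

import time
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Placeholders for the knowledge base and data source created earlier
KNOWLEDGE_BASE_ID = "YOUR_KB_ID"
DATA_SOURCE_ID = "YOUR_DATA_SOURCE_ID"

# Start the sync (ingestion) job for the product catalog
job = bedrock_agent.start_ingestion_job(
    knowledgeBaseId=KNOWLEDGE_BASE_ID,
    dataSourceId=DATA_SOURCE_ID,
)["ingestionJob"]

# Poll until the job completes so you know the documents are indexed
while job["status"] not in ("COMPLETE", "FAILED"):
    time.sleep(30)
    job = bedrock_agent.get_ingestion_job(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        dataSourceId=DATA_SOURCE_ID,
        ingestionJobId=job["ingestionJobId"],
    )["ingestionJob"]

print(job["status"], job.get("statistics"))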

Test the knowledge base using text as input for your prompt

With your knowledge base ready, test it using a text query in the console. Search with product descriptions like “A metallic phone cover” (or anything equivalent that is relevant for your product media) to verify that text-based retrieval works correctly across your catalog.
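
The same check can be scripted with the Retrieve API, which returns the matching chunks along with their relevance scores (the knowledge base ID is a placeholder).

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

KNOWLEDGE_BASE_ID = "YOUR_KB_ID"  # placeholder

# Text-only retrieval against the multimodal index
response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId=KNOWLEDGE_BASE_ID,
    retrievalQuery={"text": "A metallic phone cover"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)

# Print the relevance score and source location of each matching chunk
for result in response["retrievalResults"]:
    print(result["score"], result["location"])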

Test the knowledge base using a reference image and retrieve different modalities

Now for the powerful part: visual search. Upload a reference image of a product you want to find. For example, imagine you saw a cell phone cover on another website and want to find similar items in your catalog. Simply upload the image without an additional text prompt.

The multimodal knowledge base extracts visual features from your uploaded image and retrieves visually similar products from your catalog. As you can see in the results, the system returns phone covers with similar design patterns, colors, or visual characteristics. Notice the metadata associated with each chunk in the Source details panel. The x-amz-bedrock-kb-chunk-start-time-in-millis and x-amz-bedrock-kb-chunk-end-time-in-millis fields indicate the exact temporal location of the segment within the source video. When building applications programmatically, you can use these timestamps to extract and display the exact video segment that matched the query, enabling features like “jump to relevant moment” or clip generation directly from your source videos. This cross-modal capability transforms the shopping experience: customers no longer need to describe what they’re looking for with words; they can show you.
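
As a minimal sketch of that pattern, the following code (using a text query for simplicity; the knowledge base ID is a placeholder) reads the chunk timestamps from the retrieval metadata and converts them into second offsets that a video player could seek to.

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="YOUR_KB_ID",  # placeholder
    retrievalQuery={"text": "metallic rose gold phone cover"},
)

# Convert the chunk timestamps (milliseconds) into second offsets for a "jump to moment" feature
for result in response["retrievalResults"]:
    metadata = result.get("metadata", {})
    start_ms = metadata.get("x-amz-bedrock-kb-chunk-start-time-in-millis")
    end_ms = metadata.get("x-amz-bedrock-kb-chunk-end-time-in-millis")
    if start_ms is not None and end_ms is not None:
        print(
            f"Segment {float(start_ms) / 1000:.1f}s to {float(end_ms) / 1000:.1f}s "
            f"in {result['location']}"
        )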

Test the knowledge base using a reference image and retrieve different modalities using Bedrock Data Automation

Now let’s look at what the results would look like if you configured Bedrock Data Automation parsing during the data source setup. In the following screenshot, notice the transcript section in the Source details panel.

For each retrieved video chunk, Bedrock Data Automation automatically generates a detailed text description, in this example describing the smartphone’s metallic rose gold finish, studio lighting, and visual characteristics. This transcript appears directly in the test window alongside the video, providing rich textual context. You get both visual similarity matching from the multimodal embeddings and detailed product descriptions that can answer specific questions about features, colors, materials, and other attributes visible in the video.

Clean up

To clean up your resources, complete the following steps, starting with deleting the knowledge base:

  1. On the Amazon Bedrock console, choose Knowledge Bases.
  2. Select your knowledge base and note both the IAM service role name and the S3 Vectors index ARN.
  3. Choose Delete and confirm.

To delete the S3 Vectors vector store, use the following AWS Command Line Interface (AWS CLI) commands:

aws s3vectors delete-index --vector-bucket-name YOUR_VECTOR_BUCKET_NAME --index-name YOUR_INDEX_NAME --region YOUR_REGION
aws s3vectors delete-vector-bucket --vector-bucket-name YOUR_VECTOR_BUCKET_NAME --region YOUR_REGION

  1. On the IAM console, find the role noted earlier.
  2. Select and delete the role.

To delete the sample dataset:

  1. On the Amazon S3 console, find your S3 bucket.
  2. Select and delete the files you uploaded for this tutorial.

Conclusion

Multimodal retrieval for Amazon Bedrock Knowledge Bases removes the complexity of building RAG applications that span text, images, video, and audio. With native support for video and audio content, you can now build comprehensive knowledge bases that unlock insights from all of your enterprise data, not just text documents.

The choice between Amazon Nova Multimodal Embeddings and Bedrock Data Automation gives you flexibility to optimize for your specific content. The Nova unified vector space enables cross-modal retrieval for visually driven use cases, while the Bedrock Data Automation text-first approach delivers precise transcription-based retrieval for speech-heavy content. Both approaches integrate seamlessly into the same fully managed workflow, removing the need for custom preprocessing pipelines.

Availability

Region availability depends on the features chosen for multimodal support; refer to the documentation for details.

Next steps

Get started with multimodal retrieval today:

  1. Explore the documentation: Review the Amazon Bedrock Knowledge Bases documentation and the Amazon Nova User Guide for additional technical details.
  2. Experiment with code examples: Check out the Amazon Bedrock samples repository for hands-on notebooks demonstrating multimodal retrieval.
  3. Learn more about Nova: Read the Amazon Nova Multimodal Embeddings announcement for deeper technical insights.

About the authors

Dani Mitchell is a Generative AI Specialist Solutions Architect at Amazon Web Services (AWS). He is focused on helping enterprises around the world accelerate their generative AI journeys with Amazon Bedrock and Bedrock AgentCore.

Pallavi Nargund is a Principal Solutions Architect at AWS. She is a generative AI lead for US Greenfield and leads the AWS for Legal Tech team. She is passionate about women in technology and is a core member of Women in AI/ML at Amazon. She speaks at internal and external conferences such as AWS re:Invent, AWS Summits, and webinars. Pallavi holds a Bachelor of Engineering from the University of Pune, India. She lives in Edison, New Jersey, with her husband, two girls, and her two pups.

Jean-Pierre Dodel is a Principal Product Manager for Amazon Bedrock, Amazon Kendra, and Amazon Quick Index. He brings 15 years of enterprise search and AI/ML experience to the team, with prior work at Autonomy, HP, and search startups before joining Amazon 8 years ago. JP is currently focusing on innovations for multimodal RAG, agentic retrieval, and structured RAG.
