Managing MLOps workflows can be complex and time-consuming. If you've struggled with setting up project templates for your data science team, you know that the previous approach using AWS Service Catalog required configuring portfolios and products and managing complex permissions, adding significant administrative overhead before your team could start building machine learning (ML) pipelines.

Amazon SageMaker AI Projects now offers a simpler path: Amazon S3-based templates. With this new capability, you can store AWS CloudFormation templates directly in Amazon Simple Storage Service (Amazon S3) and manage their entire lifecycle using familiar S3 features such as versioning, lifecycle policies, and S3 Cross-Region Replication. This means you can provide your data science team with secure, version-controlled, automated project templates with significantly less overhead.

This post explores how you can use Amazon S3-based templates to simplify MLOps workflows, walks through the key benefits compared with the Service Catalog approach, and demonstrates how to create a custom MLOps solution that integrates with GitHub and GitHub Actions, giving your team one-click provisioning of a fully functional ML environment.
What is Amazon SageMaker AI Projects?
Teams can use Amazon SageMaker AI Projects to create, share, and manage fully configured MLOps projects. Within this structured environment, you can organize code, data, and experiments, facilitating collaboration and reproducibility.

Each project can include continuous integration and delivery (CI/CD) pipelines, model registries, deployment configurations, and other MLOps components, all managed within SageMaker AI. Reusable templates help standardize MLOps practices by encoding best practices for data processing, model development, training, deployment, and monitoring. The following are popular use cases you can orchestrate using SageMaker AI Projects:
- Automate ML workflows: Set up CI/CD workflows that automatically build, test, and deploy ML models.
- Enforce governance and compliance: Help your projects follow organizational standards for security, networking, and resource tagging. Consistent tagging practices facilitate accurate cost allocation across teams and projects while streamlining security audits.
- Accelerate time-to-value: Provide preconfigured environments so data scientists can focus on ML problems, not infrastructure.
- Improve collaboration: Establish consistent project structures for easier code sharing and reuse.
The following diagram shows how SageMaker AI Projects offers separate workflows for administrators and for ML engineers and data scientists. Administrators create and manage the ML use case templates, while ML engineers and data scientists consume the approved templates in a self-service fashion.
What's new: Amazon SageMaker AI S3-based project templates
The latest update to SageMaker AI Projects introduces the ability for administrators to store and manage ML project templates directly in Amazon S3. S3-based templates are a simpler and more flexible alternative to the previously required Service Catalog. With this enhancement, AWS CloudFormation templates can be versioned, secured, and efficiently shared across teams using the rich access controls, lifecycle management, and replication features provided by S3. Data science teams can now launch new MLOps projects from these S3-backed templates directly within Amazon SageMaker Studio. This helps organizations maintain consistency and compliance with their internal standards at scale.
When you store templates in Amazon S3, they become available in all AWS Regions where SageMaker AI Projects is supported. To share templates across AWS accounts, you can use S3 bucket policies and cross-account access controls. Enabling versioning in S3 provides a complete history of template changes, which facilitates audits and rollbacks while also supplying an immutable record of project template evolution over time.

If your teams currently use Service Catalog-based templates, the S3-based approach provides a straightforward migration path. The primary considerations when migrating involve provisioning new SageMaker roles to replace the Service Catalog-specific roles, updating template references accordingly, uploading templates to S3 with proper tagging, and configuring domain-level tags to point to the template bucket location. For organizations using centralized template repositories, cross-account S3 bucket policies must be established to enable template discovery from consumer accounts, with each consumer account's SageMaker domain tagged to reference the central bucket. S3-based and Service Catalog templates are displayed in separate tabs within the SageMaker AI Projects creation interface, so organizations can introduce S3 templates gradually without disrupting existing workflows during the migration.
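For the centralized-repository scenario, a cross-account bucket policy on the central template bucket can grant consumer accounts read access to the templates and their tags. The following is a minimal sketch; the bucket name and account ID are hypothetical placeholders, and you would apply the resulting document with `aws s3api put-bucket-policy`.

```python
import json

# Hypothetical identifiers for illustration only.
TEMPLATE_BUCKET = "central-mlops-templates"  # central template bucket
CONSUMER_ACCOUNT_ID = "111122223333"         # consumer AWS account

# Bucket policy granting the consumer account read access to the templates
# and their object tags, so SageMaker AI Projects in that account can
# discover and fetch them.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowConsumerAccountTemplateRead",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{CONSUMER_ACCOUNT_ID}:root"},
            "Action": ["s3:GetObject", "s3:GetObjectTagging", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{TEMPLATE_BUCKET}",
                f"arn:aws:s3:::{TEMPLATE_BUCKET}/*",
            ],
        }
    ],
}

# Apply with, for example:
#   aws s3api put-bucket-policy --bucket central-mlops-templates \
#       --policy file://policy.json
print(json.dumps(bucket_policy, indent=2))
```

Scope the `Principal` to specific roles in the consumer account rather than the account root if your security standards require it.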
S3-based MLOps projects support custom CloudFormation templates that you create for your organization's ML use cases. AWS-provided templates (such as the built-in MLOps project templates) continue to be available only through Service Catalog. Your custom templates must be valid CloudFormation files in YAML format. To start using S3-based templates with SageMaker AI Projects, your SageMaker domain (the collaborative workspace for your ML teams) must include the tag sagemaker:projectS3TemplatesLocation, with the S3 URI of your template location (s3://...) as its value. Each template file uploaded to S3 must be tagged with sagemaker:studio-visibility=true to appear in the SageMaker Studio Projects console. You also need to grant read access to the SageMaker execution roles in the S3 bucket policy and enable a CORS configuration on the S3 bucket to allow SageMaker AI Projects to access the S3 templates.
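The setup above amounts to three small pieces of configuration. The sketch below builds each one as a JSON document, with the equivalent AWS CLI call noted in comments; the bucket name and prefix are hypothetical, and the wide-open CORS origin is a placeholder you should tighten for production.

```python
import json

# Hypothetical bucket and key prefix for illustration.
TEMPLATE_BUCKET = "my-org-sagemaker-templates"

# 1. Domain tag pointing SageMaker AI Projects at the template location.
#    Applied with: aws sagemaker add-tags --resource-arn <domain-arn> --tags ...
domain_tag = {
    "Key": "sagemaker:projectS3TemplatesLocation",
    "Value": f"s3://{TEMPLATE_BUCKET}/templates/",
}

# 2. Object tag each template file needs to appear in the Studio Projects console.
#    Applied with: aws s3api put-object-tagging --bucket ... --key ... --tagging ...
template_object_tag = {"Key": "sagemaker:studio-visibility", "Value": "true"}

# 3. CORS configuration so Studio can fetch templates from the bucket.
#    Applied with: aws s3api put-bucket-cors --bucket ... \
#        --cors-configuration file://cors.json
cors_configuration = {
    "CORSRules": [
        {
            "AllowedMethods": ["GET"],
            "AllowedOrigins": ["*"],  # restrict to your Studio origin in production
            "AllowedHeaders": ["*"],
        }
    ]
}

print(json.dumps(cors_configuration, indent=2))
```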
The following diagram illustrates how S3-based templates integrate with SageMaker AI Projects to enable scalable MLOps workflows. The setup operates in two separate workflows: one-time configuration by administrators, and project launch by ML engineers and data scientists. When ML engineers or data scientists launch a new MLOps project in SageMaker AI, SageMaker AI launches an AWS CloudFormation stack to provision the resources defined in the template. Once the process is complete, you can access all specified resources and the configured CI/CD pipelines in your project.

You can manage the lifecycle of launched projects through the SageMaker Studio console, where users can navigate to S3 Templates, select a project, and use the Actions dropdown menu to update or delete projects. Project updates can modify existing template parameters or the template URL itself, triggering CloudFormation stack updates that are validated before execution, while project deletion removes all associated CloudFormation resources and configurations. These lifecycle operations can also be performed programmatically using the SageMaker APIs.
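As a minimal sketch of the programmatic path, the request payloads below would be passed to the SageMaker client (for example, `boto3.client("sagemaker").delete_project(**delete_request)`). The project name is hypothetical, and for updates that change template parameters or the template URL, consult the UpdateProject entry in the SageMaker API reference for the full request shape.

```python
# Hypothetical project name for illustration.
PROJECT_NAME = "mlops-github-demo"

# DescribeProject returns the project's status and provisioning details.
describe_request = {"ProjectName": PROJECT_NAME}

# DeleteProject tears down the CloudFormation stack behind the project.
delete_request = {"ProjectName": PROJECT_NAME}

# In a real script:
#   import boto3
#   sm = boto3.client("sagemaker")
#   sm.describe_project(**describe_request)
#   sm.delete_project(**delete_request)
print(describe_request, delete_request)
```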
To demonstrate the power of S3-based templates, let's look at a real-world scenario where an admin team needs to provide data scientists with a standardized MLOps workflow that integrates with their existing GitHub repositories.
Use case: GitHub-integrated MLOps template for enterprise teams
Many organizations use GitHub as their primary source control system and want to use GitHub Actions for CI/CD while using SageMaker for ML workloads. However, setting up this integration requires configuring multiple AWS services, establishing secure connections, and implementing proper approval workflows, a complex task that can be time-consuming if done manually. Our S3-based template solves this challenge by provisioning a complete MLOps pipeline that includes CI/CD orchestration, SageMaker Pipelines components, and event-driven automation. The following diagram illustrates the end-to-end workflow provisioned by this MLOps template.

This sample MLOps project with S3-based templates enables fully automated and governed MLOps workflows. Each MLOps project includes a GitHub repository preconfigured with Actions workflows and secure AWS CodeConnections for seamless integration. Upon code commits, a SageMaker pipeline is triggered to orchestrate a standardized process involving data preprocessing, model training, evaluation, and registration. For deployment, the system supports automated staging on model approval, with robust validation checks, a manual approval gate for promoting models to production, and a secure, event-driven architecture using AWS Lambda and Amazon EventBridge. Throughout the workflow, governance is supported by SageMaker Model Registry for tracking model versions and lineage, well-defined approval steps, secure credential management using AWS Secrets Manager, and consistent tagging and naming standards for all resources.
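The event-driven deployment step described above typically hinges on an EventBridge rule that matches model approval events from SageMaker Model Registry and invokes the deployment Lambda function. The pattern below is a sketch of that rule; the model package group name is a hypothetical placeholder tied to the project.

```python
import json

# EventBridge pattern matching "model approved" events from SageMaker
# Model Registry for the project's model package group. When a model
# version is approved, the rule fires and triggers the deployment Lambda.
event_pattern = {
    "source": ["aws.sagemaker"],
    "detail-type": ["SageMaker Model Package State Change"],
    "detail": {
        "ModelPackageGroupName": ["mlops-github-demo-models"],  # hypothetical
        "ModelApprovalStatus": ["Approved"],
    },
}

# Applied with, for example:
#   aws events put-rule --name deploy-on-approval \
#       --event-pattern file://pattern.json
print(json.dumps(event_pattern, indent=2))
```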
When data scientists select this template from SageMaker Studio, they provision a fully functional MLOps environment through a streamlined process. They push their ML code to GitHub using the built-in Git functionality within the Studio integrated development environment (IDE), and the pipeline automatically handles model training, evaluation, and progressive deployment through staging to production, all while maintaining enterprise security and compliance requirements. The complete setup instructions along with the code for this MLOps template are available in our GitHub repository.
After you follow the instructions in the repository, you can find the mlops-github-actions template in the SageMaker AI Projects section of the SageMaker Studio console by choosing Projects in the navigation pane, selecting the Organization templates tab, and choosing Next, as shown in the following image.

To launch the MLOps project, you must enter project-specific details, including the Role ARN field. This field should contain the ARN of the AmazonSageMakerProjectsLaunchRole created during setup, as shown in the following image.
As a security best practice, use the AmazonSageMakerProjectsLaunchRole Amazon Resource Name (ARN), not your SageMaker execution role.
The AmazonSageMakerProjectsLaunchRole is a provisioning role that acts as an intermediary during MLOps project creation. This role contains all the permissions needed to create your project's infrastructure, including AWS Identity and Access Management (IAM) roles, S3 buckets, AWS CodePipeline pipelines, and other AWS resources. By using this dedicated launch role, ML engineers and data scientists can create MLOps projects without requiring broader permissions in their own accounts. Their personal SageMaker execution role remains limited in scope: they only need permission to assume the launch role itself.
This separation of responsibilities is key to maintaining security. Without launch roles, every ML practitioner would need extensive IAM permissions to create code pipelines, AWS CodeBuild projects, S3 buckets, and other AWS resources directly. With launch roles, they only need permission to assume a preconfigured role that handles the provisioning on their behalf, keeping their personal permissions minimal and secure.
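In IAM terms, this pattern is two small policy documents. The sketch below is illustrative only: the account ID is a placeholder, and the exact service principals allowed to assume the launch role come from the setup instructions in the repository, so treat the ones shown here as assumptions.

```python
import json

# Hypothetical account ID; the role name matches the setup instructions.
LAUNCH_ROLE_ARN = (
    "arn:aws:iam::111122223333:role/AmazonSageMakerProjectsLaunchRole"
)

# Trust policy on the launch role so the provisioning services can assume it.
# Assumption: SageMaker and CloudFormation are the assuming principals.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "sagemaker.amazonaws.com",
                    "cloudformation.amazonaws.com",
                ]
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

# The only launch-related permission the practitioner's execution role needs:
# handing the launch role to the provisioning service.
pass_role_statement = {
    "Effect": "Allow",
    "Action": ["iam:PassRole"],
    "Resource": LAUNCH_ROLE_ARN,
}

print(json.dumps(trust_policy, indent=2))
```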

Enter your desired project configuration details and choose Next. The template will then create two automated MLOps workflows, one for model building and one for model deployment, that work together to provide CI/CD for your ML models. The complete MLOps example can be found in the mlops-github-actions repository.

Clean up
After deployment, you will incur costs for the deployed resources. If you don't intend to continue using the setup, delete the MLOps project resources to avoid unnecessary charges.

To delete the project, open SageMaker Studio, choose More in the navigation pane, and select Projects. Choose the project you want to delete, choose the vertical ellipsis near the upper-right corner of the projects list, and choose Delete. Review the information in the Delete project dialog box and select Yes, delete the project to confirm. After deletion, verify that your project no longer appears in the projects list.
In addition to deleting the project, which removes and deprovisions the SageMaker AI project, you also need to manually delete the following components if they're no longer needed: Git repositories, pipelines, model groups, and endpoints.
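The leftover AWS-side components above can be removed with individual SageMaker CLI calls. The commands below are a sketch; the resource names are hypothetical and should be replaced with the names your project actually created (and the Git repositories are deleted in GitHub, not AWS).

```python
# Hypothetical project-derived resource names for illustration.
PROJECT = "mlops-github-demo"

# Components the project deletion does not remove, and the calls that do.
cleanup_commands = [
    # Hosted endpoints keep accruing charges until deleted:
    f"aws sagemaker delete-endpoint --endpoint-name {PROJECT}-staging",
    # The model package group holds the registered model versions:
    f"aws sagemaker delete-model-package-group "
    f"--model-package-group-name {PROJECT}-models",
    # The project's SageMaker pipeline definition:
    f"aws sagemaker delete-pipeline --pipeline-name {PROJECT}-pipeline",
]

for cmd in cleanup_commands:
    print(cmd)
```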
Conclusion
Amazon S3-based template provisioning for Amazon SageMaker AI Projects transforms how organizations standardize ML operations. As demonstrated in this post, a single AWS CloudFormation template can provision a complete CI/CD workflow integrating your Git repository (GitHub, Bitbucket, or GitLab), SageMaker Pipelines, and SageMaker Model Registry, providing data science teams with automated workflows while maintaining enterprise governance and security controls. For more information about SageMaker AI Projects and S3-based templates, see MLOps Automation With SageMaker Projects.
By using S3-based templates in SageMaker AI Projects, administrators can define and govern the ML infrastructure, while ML engineers and data scientists gain access to preconfigured ML environments through self-service provisioning. Explore the GitHub samples repository for popular MLOps templates and get started today by following the provided instructions. You can also create custom templates tailored to your organization's specific requirements, security policies, and preferred ML frameworks.
About the authors
Christian Kamwangala is an AI/ML and Generative AI Specialist Solutions Architect at AWS, based in Paris, France. He partners with enterprise customers to architect, optimize, and deploy production-grade AI solutions using the comprehensive AWS machine learning stack. Christian specializes in inference optimization techniques that balance performance, cost, and latency requirements for large-scale deployments. In his spare time, he enjoys exploring nature and spending time with family and friends.
Sandeep Raveesh is a Generative AI Specialist Solutions Architect at AWS. He works with customers through their AIOps journey across model training, generative AI applications such as agents, and scaling generative AI use cases. He also focuses on go-to-market strategies, helping AWS build and align products to solve industry challenges in the generative AI space. You can connect with Sandeep on LinkedIn to learn about generative AI solutions.
Paolo Di Francesco is a Senior Solutions Architect at Amazon Web Services (AWS). He holds a PhD in Telecommunications Engineering and has experience in software engineering. He is passionate about machine learning and is currently using his experience to help customers reach their goals on AWS, in particular in discussions around MLOps. Outside of work, he enjoys playing football and reading.
