Friday, November 7, 2025

How Amazon Search increased ML training twofold using AWS Batch for Amazon SageMaker Training jobs


In this post, we show you how Amazon Search optimized GPU instance utilization by using AWS Batch for SageMaker Training jobs. This managed solution enabled us to orchestrate machine learning (ML) training workloads on GPU-accelerated instance families such as P5, P4, and others. We also provide a step-by-step walkthrough of the use case implementation.

Machine learning at Amazon Search

At Amazon Search, we use hundreds of GPU-accelerated instances to train and evaluate ML models that help our customers discover products they love. Scientists typically train more than one model at a time to find the optimal set of features, model architecture, and hyperparameter settings that optimize the model's performance. We previously used a first-in-first-out (FIFO) queue to coordinate model training and evaluation jobs. However, we needed more nuanced criteria to prioritize which jobs should run in what order: production models needed to run at high priority, exploratory research at medium priority, and hyperparameter sweeps and batch inference at low priority. We also needed a system that could handle interruptions. Should a job fail, or a given instance type become saturated, we needed the job to run on other available compatible instance types while respecting the overall prioritization criteria. Finally, we wanted a managed solution so we could focus on model development instead of managing infrastructure.

After evaluating several options, we chose AWS Batch for Amazon SageMaker Training jobs because it best met our requirements. This solution seamlessly integrated AWS Batch with Amazon SageMaker and allowed us to run jobs according to our prioritization criteria, letting applied scientists submit multiple concurrent jobs without manual resource management. By using AWS Batch features such as advanced prioritization through fair-share scheduling, we increased peak utilization of GPU-accelerated instances from 40% to over 80%.

Amazon Search: AWS Batch for SageMaker Training job implementation

We used three AWS capabilities to set up our job queue. We used service environments to configure the SageMaker AI parameters that AWS Batch uses to submit and manage SageMaker Training jobs. We used share identifiers to prioritize our workloads. Finally, we used Amazon CloudWatch to monitor the system and alert on critical events or deviations from expected behavior. Let's dive deep into these constructs.

Service environments. We set up service environments to represent the total GPU capacity available for each instance family, such as P5s and P4s. Each service environment was configured with fixed limits based on our team's reserved capacity in AWS Batch. Note that for teams using SageMaker Training Plans, these limits can be set to the number of reserved instances, making capacity planning more straightforward. By defining these boundaries, we established how the total GPU instance capacity within a service environment was distributed across different production jobs. Each production experiment was allocated a portion of this capacity through share identifiers.
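If you are creating service environments yourself, the setup can be scripted. The following is a minimal sketch using boto3; the environment name and capacity limit are illustrative, and the CreateServiceEnvironment parameters shown reflect our reading of the AWS Batch API, so verify them against the current documentation before relying on them.

import boto3

batch = boto3.client("batch")

# One service environment per GPU instance family, capped at the team's
# reserved capacity (name and limit below are illustrative placeholders).
batch.create_service_environment(
    serviceEnvironmentName="p4d-service-environment",
    serviceEnvironmentType="SAGEMAKER_TRAINING",
    capacityLimits=[
        {"maxCapacity": 100, "capacityUnit": "NUM_INSTANCES"},
    ],
)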

Figure 1 provides a real-world example of how we used AWS Batch fair-share scheduling to divide 100 GPU instances between share identifiers. We allocated 60 instances to ProdExp1 and 40 to ProdExp2. When ProdExp2 used only 25 GPU instances, the remaining 15 could be borrowed by ProdExp1, allowing it to scale up to 75 GPU instances. When ProdExp2 later needed its full 40 GPU instances, the scheduler preempted jobs from ProdExp1 to restore the balance. This example used the P4 instance family, but the same approach applies to any SageMaker-supported EC2 instance family. This ensured that production workloads had guaranteed access to their assigned capacity, while exploratory or ad hoc experiments could still make use of any idle GPU instances. This design safeguarded critical workloads and improved overall instance utilization by making sure that no reserved capacity went unused.

Figure 1: AWS Batch fair-share scheduling

Share identifiers. We used share identifiers to allocate fractions of a service environment's capacity to production experiments. Share identifiers are string tags applied at job submission time; AWS Batch uses these tags to track usage and enforce fair-share scheduling. For projects that required dedicated capacity, we defined preset share identifiers with quotas in AWS Batch, which reserved capacity for production tracks. These quotas acted as fairness targets rather than hard limits: idle capacity could still be borrowed, but under contention, AWS Batch enforced fairness by preempting resources from overused identifiers and reassigning them to underused ones.

Within each share identifier, job priorities ranging from 0 to 99 determined execution order, but priority-based preemption only triggered when the share identifier reached its allocated capacity limit. Figure 2 illustrates how we set up and used our share identifiers. ProdExp1 had 60 p4d instances and ran jobs at various priorities: Job A had a priority of 80, Job B was set to 50, Job C to 30, and Job D to 10. When all 60 instances were occupied and a new high-priority job (priority 90) requiring 15 instances was submitted, the system preempted the lowest-priority running job (Job D) to make room, while maintaining the total of 60 instances for that share identifier.

Figure 2: Priority scheduling within a share identifier

Amazon CloudWatch. We used Amazon CloudWatch to instrument our SageMaker Training jobs. SageMaker automatically publishes metrics on job progress and resource utilization, while AWS Batch provides detailed information on job scheduling and execution. With AWS Batch, we queried the status of each job through the AWS Batch APIs, which made it possible to track jobs as they transitioned through states such as SUBMITTED, PENDING, RUNNABLE, STARTING, RUNNING, SUCCEEDED, and FAILED. We published these metrics and job states to CloudWatch and configured dashboards and alarms to alert whenever we encountered extended wait times, unexpected failures, or underutilized resources. This integration provided both real-time visibility and historical trend analysis, which helped our team maintain operational efficiency across GPU clusters without building custom monitoring systems.
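As an illustration of this pattern, the following minimal sketch publishes per-state job counts as a custom CloudWatch metric and alarms on a sustained backlog of RUNNABLE jobs. The namespace, metric name, queue name, and the job_state_counts input are hypothetical; the counts themselves would come from the AWS Batch job-status APIs described above.

import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_queue_metrics(queue_name: str, job_state_counts: dict) -> None:
    """Publish one data point per job state, e.g. {"RUNNABLE": 12, "RUNNING": 45}."""
    cloudwatch.put_metric_data(
        Namespace="SearchTrainingQueues",  # hypothetical namespace
        MetricData=[
            {
                "MetricName": "JobCount",
                "Dimensions": [
                    {"Name": "JobQueue", "Value": queue_name},
                    {"Name": "State", "Value": state},
                ],
                "Value": float(count),
                "Unit": "Count",
            }
            for state, count in job_state_counts.items()
        ],
    )

# Alarm when jobs sit in RUNNABLE (waiting for capacity) for 30 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="ml-p4d-queue-backlog",  # hypothetical alarm and queue names
    Namespace="SearchTrainingQueues",
    MetricName="JobCount",
    Dimensions=[
        {"Name": "JobQueue", "Value": "ml-p4d-queue"},
        {"Name": "State", "Value": "RUNNABLE"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=6,
    Threshold=20,
    ComparisonOperator="GreaterThanThreshold",
)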

Operational impact on team performance

By adopting AWS Batch for SageMaker Training jobs, we enabled experiments to run without concerns about resource availability or contention. Researchers could submit jobs without waiting for manual scheduling, which increased the number of experiments that could be run in parallel. This led to shorter queue times, higher GPU utilization, and faster turnaround of training results, directly improving both research throughput and delivery timelines.

How to set up AWS Batch for SageMaker Training jobs

To set up a similar environment, you can follow this tutorial, which shows you how to orchestrate multiple GPU large language model (LLM) fine-tuning jobs using multiple GPU-powered instances. The solution is also available on GitHub.

Prerequisites

To orchestrate multiple SageMaker Training jobs with AWS Batch, you first need to complete the following prerequisites:

Clone the GitHub repository with the assets for this deployment. This repository includes notebooks that reference the assets:

git clone https://github.com/aws/amazon-sagemaker-examples/
cd amazon-sagemaker-examples/build_and_train_models/sm-training-queues-pytorch/

Create AWS Batch resources

To create the resources required to manage SageMaker Training job queues with AWS Batch, the example provides utility functions that automate the creation of the service environment, the scheduling policy, and the job queue.

The service environment represents the Amazon SageMaker AI capacity limits available to the scheduler, expressed as a maximum number of instances. The scheduling policy indicates how compute resources in a job queue are allocated between users or workloads. The job queue is the scheduler interface that researchers interact with to submit jobs and query job status. AWS Batch provides two different types of queues we can operate with:

  1. FIFO queues – Queues for which no scheduling policy is required
  2. Fair-share queues – Queues for which a scheduling policy Amazon Resource Name (ARN) is required to orchestrate the submitted jobs

We recommend creating dedicated service environments for each job queue in a 1:1 ratio. FIFO queues provide basic first-in, first-out ordering, while fair-share scheduling (FSS) queues provide more sophisticated scheduling that balances utilization within a share identifier, share weights, and job priority. For customers who don't need multiple shares but would like the ability to assign a priority on job submission, we recommend creating an FSS queue and using a single share within it for all submissions. To create the resources, run the following commands:

cd smtj_batch_utils
python create_resources.py

You can navigate to the AWS Batch dashboard, shown in the following screenshot, to explore the created resources.

This automation script created two queues:

  1. ml-c5-xlarge-queue – A FIFO queue with priority 2, used for CPU workloads
  2. ml-g6-12xlarge-queue – A fair-share queue with priority 1, used for GPU workloads

The scheduling policy associated with the queue ml-g6-12xlarge-queue defines share attributes such as high priority (HIGHPRI), medium priority (MIDPRI), and low priority (LOWPRI), along with their share weights. Users can submit jobs to one of the three shares (HIGHPRI, MIDPRI, or LOWPRI), with weights such as 1 for high priority, 3 for medium priority, and 5 for low priority. The following screenshot shows the scheduling policy details:

For instructions on how to set up the service environment and a job queue, refer to the Getting started section in the Introducing AWS Batch support for SageMaker Training Jobs blog post.
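As a reference point, a fair-share policy with those shares and weights could also be created directly with boto3, as in the following sketch. The policy name and shareDecaySeconds value are placeholders, and the example's create_resources.py script remains the authoritative setup.

import boto3

batch = boto3.client("batch")

# A lower weightFactor gives a share a larger slice of the queue, so HIGHPRI
# wins under contention. The name and decay window below are placeholders.
response = batch.create_scheduling_policy(
    name="ml-g6-12xlarge-scheduling-policy",
    fairsharePolicy={
        "shareDecaySeconds": 3600,
        "computeReservation": 0,
        "shareDistribution": [
            {"shareIdentifier": "HIGHPRI", "weightFactor": 1.0},
            {"shareIdentifier": "MIDPRI", "weightFactor": 3.0},
            {"shareIdentifier": "LOWPRI", "weightFactor": 5.0},
        ],
    },
)
print(response["arn"])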

Run LLM fine-tuning jobs on SageMaker AI

We run the notebook notebook.ipynb to start submitting SageMaker Training jobs with AWS Batch. The notebook contains the code to prepare the data used for the workload, upload it to Amazon Simple Storage Service (Amazon S3), and define the hyperparameters required by the job.

To run the fine-tuning workload using SageMaker Training jobs, this example uses the ModelTrainer class. The ModelTrainer class is a newer, more intuitive approach to model training that significantly enhances the user experience. It supports distributed training, bring your own container (BYOC), and recipes.

For more information about ModelTrainer, refer to Accelerate your ML lifecycle using the new and improved Amazon SageMaker Python SDK – Part 1: ModelTrainer.

To set up the fine-tuning workload, complete the following steps:

  1. Select the instance type and the container image for the training job, and define the checkpoint path where the model will be saved:
    import sagemaker
    
    # Create a SageMaker session to resolve the current AWS Region
    sagemaker_session = sagemaker.Session()
    
    instance_type = "ml.g6.12xlarge"
    instance_count = 1
    
    image_uri = sagemaker.image_uris.retrieve(
        framework="pytorch",
        region=sagemaker_session.boto_session.region_name,
        version="2.6",
        instance_type=instance_type,
        image_scope="training"
    )

  2. Create the ModelTrainer to encapsulate the training setup. The ModelTrainer class simplifies the experience by encapsulating code and training setup. In this example:
    1. SourceCode – The source code configuration. This is used to configure the source code for running the training job using your local Python scripts.
    2. Compute – The compute configuration. This is used to specify the compute resources for the training job.
    from sagemaker.modules.configs import Compute, OutputDataConfig, SourceCode, StoppingCondition
    from sagemaker.modules.distributed import Torchrun
    from sagemaker.modules.train import ModelTrainer
    
    role = sagemaker.get_execution_role()
    
    # Define the script to be run
    source_code = SourceCode(
        source_dir="./scripts",
        requirements="requirements.txt",
        entry_script="train.py",
    )
    
    # Define the compute
    compute_configs = Compute(
        instance_type=instance_type,
        instance_count=instance_count,
        keep_alive_period_in_seconds=0
    )
    
    # Define the training job name
    job_name = f"train-deepseek-distill-llama-8b-sft-batch"
    
    # Define the OutputDataConfig path
    output_path = f"s3://{bucket_name}/{job_name}"
    
    # Define the ModelTrainer
    model_trainer = ModelTrainer(
        training_image=image_uri,
        source_code=source_code,
        base_job_name=job_name,
        compute=compute_configs,
        distributed=Torchrun(),
        stopping_condition=StoppingCondition(max_runtime_in_seconds=7200),
        hyperparameters={
            "config": "/opt/ml/input/data/config/args.yaml"
        },
        output_data_config=OutputDataConfig(s3_output_path=output_path),
        role=role,
    )

  3. Set up the input channels for ModelTrainer by creating InputData objects from the provided S3 bucket paths for the training and validation datasets and the training configuration:
    from sagemaker.modules.configs import InputData
    
    train_input = InputData(
        channel_name="train",
        data_source=train_dataset_s3_path,
    )
    val_input = InputData(
        channel_name="val",
        data_source=val_dataset_s3_path,
    )
    config_input = InputData(
        channel_name="config",
        data_source=train_config_s3_path,
    )
    
    TRAINING_INPUTS = [train_input, val_input, config_input]

Queue SageMaker Training jobs

This section and the following ones are meant to be used interactively, so you can explore how to use the Amazon SageMaker Python SDK to submit jobs to your AWS Batch queues. Follow these steps:

  1. Select the queue to use:
    from sagemaker.aws_batch.queue import TrainingQueue
    SMTJ_BATCH_QUEUE = "ml-g6-12xlarge-queue"
    
    queue = TrainingQueue(SMTJ_BATCH_QUEUE)
    

  2. In the next cell, submit two training jobs to the queue:
    1. LOW PRIORITY
    2. MEDIUM PRIORITY
  3. Use the submit API to submit the jobs:
    job_name_1 = job_name + "-low-pri"
    queued_job_1 = queue.submit(
        model_trainer, TRAINING_INPUTS, job_name_1, priority=5, share_identifier="LOWPRI"
    )
    job_name_2 = job_name + "-mid-pri"
    queued_job_2 = queue.submit(
        model_trainer, TRAINING_INPUTS, job_name_2, priority=3, share_identifier="MIDPRI"
    )

Display the status of running and queued jobs

We can use the job queue list and job queue snapshot APIs to programmatically view a snapshot of the jobs that the queue will run next. For fair-share queues, this ordering is dynamic and often needs to be refreshed as new jobs are submitted to the queue or as share usage changes over time.

from utils.queue_utils import print_queue_state
print_queue_state(queue)
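Under the hood, this utility relies on the AWS Batch APIs mentioned above. A minimal sketch of calling the job queue snapshot API directly with boto3 might look like the following; the response parsing assumes the documented frontOfQueue shape, so adjust it if your SDK version differs.

import boto3

batch = boto3.client("batch")

# Peek at the jobs AWS Batch plans to run next on the fair-share queue.
snapshot = batch.get_job_queue_snapshot(jobQueue="ml-g6-12xlarge-queue")
for job in snapshot.get("frontOfQueue", {}).get("jobs", []):
    print(job.get("jobArn"), job.get("earliestTimeAtPosition"))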

The following screenshot shows the jobs submitted with low priority and medium priority in the RUNNABLE state in the queue.

You can also refer to the AWS Batch dashboard, shown in the following screenshot, to analyze the status of the jobs.

As shown in the following screenshot, the first job executed as a SageMaker Training job is the MEDIUM PRIORITY one, respecting the scheduling policy rules defined previously.

You can explore the running training job in the SageMaker AI console, as shown in the following screenshot.

Submit an additional job

You can now submit an additional SageMaker Training job with HIGH PRIORITY to the queue:

job_name_3 = job_name + "-high-pri"
queued_job_3 = queue.submit(
    model_trainer, TRAINING_INPUTS, job_name_3, priority=1, share_identifier="HIGHPRI"
)

You can explore the status from the dashboard, as shown in the following screenshot.

The HIGH PRIORITY job, despite being submitted later, will be executed before the other runnable jobs, respecting the scheduling policy rules, as shown in the following screenshot.

As the scheduling policy in the screenshot shows, the LOWPRI share has a higher weight factor (5) than the MIDPRI share (3). Because a lower weight factor indicates higher priority, a LOWPRI job will be executed after a MIDPRI job, even when they are submitted at the same time.
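To build intuition for why the weight factors order jobs this way, one rough mental model is that each active share's entitlement is proportional to the inverse of its weight factor. The following snippet illustrates that approximation; it is an intuition aid only, not AWS Batch's exact scheduling computation.

# Rough intuition only: lower weight factor => larger slice of capacity.
# This approximation is for illustration, not AWS Batch's exact algorithm.
weights = {"HIGHPRI": 1.0, "MIDPRI": 3.0, "LOWPRI": 5.0}

inverse = {share: 1.0 / w for share, w in weights.items()}
total = sum(inverse.values())

for share, inv in inverse.items():
    print(f"{share}: ~{inv / total:.0%} of contended capacity")
# Prints roughly: HIGHPRI ~65%, MIDPRI ~22%, LOWPRI ~13%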

Clean up

To clean up your resources and avoid incurring future charges, follow these steps:

  1. Verify that your training job is no longer running. To do so, on the SageMaker console, choose Training and check Training jobs.
  2. Delete the AWS Batch resources by using the command python create_resources.py --clean from the GitHub example or by deleting them manually from the AWS Management Console.

Conclusion

In this post, we demonstrated how Amazon Search used AWS Batch for SageMaker Training jobs to optimize GPU resource utilization and training job management. The solution transformed their training infrastructure by implementing sophisticated queue management and fair-share scheduling, increasing peak GPU utilization from 40% to over 80%. We recommend that organizations facing similar ML training infrastructure challenges explore the AWS Batch integration with SageMaker, which provides built-in queue management and priority-based scheduling. The solution eliminates manual resource coordination while giving workloads appropriate prioritization through configurable scheduling policies.

To start implementing AWS Batch with SageMaker Training jobs, you can access our sample code and implementation guide in the amazon-sagemaker-examples repository on GitHub. The example demonstrates how to set up AWS Identity and Access Management (IAM) permissions, create AWS Batch resources, and orchestrate multiple GPU-powered training jobs using the ModelTrainer class.


The authors would like to thank Charles Thompson and Kanwaljit Khurmi for their collaboration.

About the authors

Mona Mona

Mona is a generative AI Specialist Solutions Architect at Amazon. She is a published author of two books: Natural Language Processing with AWS AI Services and Google Cloud Certified Professional Machine Learning Study Guide.

Mayank Jha

Mayank is a Senior Machine Learning Engineer at Amazon Search working on model training optimization. He is passionate about finding practical applications for complex problems and aims to develop solutions that have a deep impact on how businesses and people thrive.

Bruno Pistone

Bruno is a Senior Generative AI and ML Specialist Solutions Architect for AWS based in Milan. He works with large customers, helping them deeply understand their technical needs and design AI and machine learning solutions that make the best use of the AWS Cloud and the Amazon machine learning stack. He enjoys spending time with his friends, exploring new places, and traveling to new destinations.

James Park

James is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time, he enjoys seeking out new cultures and new experiences, and staying up to date with the latest technology trends.
