
SpaceX launches 1st Starlink satellites of 2026 on new Falcon 9 rocket

The first Starlink satellites to join SpaceX's megaconstellation in 2026 were launched today (Jan. 4) on a brand-new Falcon 9 rocket.

AWS AI League: Model customization and agentic showdown

Building intelligent agents to handle complex, real-world tasks can be daunting. Moreover, rather than relying solely on large, pre-trained foundation models, organizations often need to fine-tune and customize smaller, more specialized models to outperform them for their specific use cases. The AWS AI League provides an innovative program to help enterprises overcome the challenges of building advanced AI capabilities through exciting competitions that drive innovation in agentic AI and model customization.

In 2025, the first AWS AI League competition captured the attention of developers, data scientists, and business leaders globally. They came together to solve pressing problems using the latest AI tools and techniques. The grand finale at AWS re:Invent 2025 was an exciting showcase of their ingenuity and skills. Cross-functional teams from leading organizations competed head-to-head, demonstrating their ability to craft effective prompts, fine-tune models, and build powerful AI agents.

Congratulations to our 2025 AWS AI League Champions! After intense competition, these three exceptional builders emerged victorious, sharing a $25,000 prize pool:

  • 1st Place: Hemanth Vediyera from Cisco
  • 2nd Place: Ross Williams from Aqfer
  • 3rd Place: Deepesh Khanna from Capital One

Figure 1: Left to right: Ross, Hemanth, Deepesh

This post explores how the AWS AI League program can be used to host AI competitions that help participants experience model customization and agent building concepts, apply them to real-world business challenges, and showcase their innovative solutions through engaging, game-style formats. We highlight the new agentic AI and model customization challenges, where enterprises can apply to host internal tournaments using AWS credits, and builders can compete at AWS events.

To get started, visit the AWS AI League product page.

What’s the AWS AI League Championship?

The AWS AI League experience begins with a hands-on, 2-hour workshop led by AWS experts, followed by self-paced experimentation. The journey culminates in an engaging, gameshow-style grand finale, where you showcase your AI creations and solutions to address pressing business challenges. The following figure shows these three steps.

Figure 2: AWS AI League Championship steps


Building on the success of the 2025 program, we're excited to announce the launch of the AWS AI League 2026 Championship. This year, the competition features two new challenges that let participants really put their AI skills to the test:

  1. The agentic AI Challenge lets you build intelligent agents using Amazon Bedrock AgentCore. Competitors craft customized agent architectures to handle real-world business problems.
  2. Complementing the agentic AI Challenge, the model customization Challenge now uses the latest fine-tuning recipes in SageMaker Studio. Here you customize models for specific use cases.

For the 2026 AI League championship, the prize pool doubles to $50,000, with tracks catering to builders at different skill levels, from beginners to advanced practitioners.

Build intelligent agents with the agentic AI challenge

The AWS AI League now features an exciting agentic AI challenge, where you build intelligent agents using Amazon Bedrock AgentCore to solve complex problems in a dynamic, game-style competition. In this challenge, agents navigate through a maze-like grid environment, encountering various challenges while searching for a treasure chest. These challenges map to real-world use cases, testing the agents' ability to handle inappropriate content, execute code, use a browser, and more.

Agents have a time limit to traverse the map, collect points, and overcome the obstacles before reaching the treasure chest. The more points they earn, the higher they rank on the leaderboard. You can fully customize your agents using Amazon Bedrock AgentCore primitives, which let you more securely scale and manage production-grade agents. You can also select specific models for supervisor and sub-agents, as well as create custom tools such as Bedrock Guardrails, AgentCore Memory, and AWS Lambda functions to help your agents navigate the challenges. The following figure depicts the obstacles the agent must overcome while traveling to reach the treasure chest.

Figure 3: AWS AI League Agentic Challenge


AWS AI League provides a full user interface (UI) for users to build their intelligent agent solutions. You can use this no-code UI to assemble multi-agent architectures and tools, integrating various components such as Amazon SageMaker Studio CodeEditor for interactive coding of custom Lambda functions and tools. This lets you fully develop and customize your agent-based solutions within the AWS AI League website, without needing to leave the environment.

The following screenshots showcase the agent building experience, all within the AWS AI League website.

Figure 4: AWS AI League agent tools


Figure 5: AWS AI League multi agent architecture


Throughout the competition, users receive real-time agent performance feedback, with a large language model (LLM) evaluator providing analysis to help with iteration. The following image showcases how the agent is evaluated across challenges.

Figure 6: AWS AI League agent challenge evaluation


At the grand finale, the top finalists take the stage to showcase their agents' capabilities in a live, game-show format, demonstrating the power and versatility of agentic AI in solving complex, multi-step problems. The evaluation criteria include time efficiency, accuracy in solving challenges, agent planning, and token consumption efficiency. The following snapshot shows the final round of the Grand Finale at re:Invent 2025.

Figure 7: AWS AI League re:Invent 2025 Grand Finale


Customize models to outperform larger models

AWS AI League is expanding the scope of its model customization challenge, allowing you to use the latest advancements in fine-tuning techniques.

You can access the new model customization experience within Amazon SageMaker Studio, where you can use powerful new training recipes. The goal is to develop highly effective, domain-specific models that outperform larger reference models.

The challenge begins with you honing your model customization skills. Using the tools and techniques you have learned, you apply advanced fine-tuning methods to improve your model's performance. After your models are customized, the real test begins. The models are submitted to a leaderboard for performance evaluation against a reference model. Your model earns points each time the automated judge deems your customized model's response more accurate and comprehensive than the reference model's output. You can showcase your advanced skills, rise to the top of the leaderboard, and potentially unlock new opportunities for your organization.

During the challenge, you receive real-time feedback on your model's performance from an automated evaluator when you submit to the leaderboard. The leaderboard evaluates submissions against a reference dataset throughout the competition, providing immediate feedback on accuracy to help you iterate and improve your solutions. The following image showcases how an AI critique is used to evaluate the customized model.

Figure 8: AWS AI League model customization evaluation


At the grand finale, the top finalists demonstrate their models' capabilities in a live, game-show format, showcasing their prompt engineering abilities. During the gameshow, the scoring includes expert evaluation, where domain experts and a live audience participate in real-time voting to determine which AI solutions best solve real business challenges. The following image showcases the participant prompt engineering view during a Grand Finale.

Figure 9: AWS AI League model customization Grand Finale participant view


Conclusion

In this post, we explored the new AWS AI League challenges and how they're transforming how organizations approach AI development. At AWS, we've learned that the fastest way to spark innovation is through competition. With AWS AI League, builders can now showcase their AI skills, compete, and unlock innovation.

To learn more about hosting an AWS AI League within your organization, visit the AWS AI League page, and to dive deeper into building intelligent agents and customizing AI models, explore the AWS AI training catalog on AWS Skill Builder.


About the authors

Marc Karp is an ML Architect with the Amazon SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.

Natasya K. Idries is the Product Marketing Manager for AWS AI/ML Gamified Learning Programs. She is passionate about democratizing AI/ML skills through engaging, hands-on educational initiatives that bridge the gap between advanced technology and practical business implementation. Her expertise in building learning communities and driving digital innovation continues to shape her approach to creating impactful AI education programs. Outside of work, Natasya enjoys traveling, cooking Southeast Asian cuisines, and exploring nature trails.

6 science milestones turning 40 this year

It was a year that saw roughly six million Americans hold hands in a (roughly) continuous line across the country to raise money for homelessness. A news anchor named Oprah Winfrey debuted her new talk show. In London, a musical based on Gaston Leroux's 1909 Gothic horror novel The Phantom of the Opera took its first steps on the path to becoming the longest-running musical in Broadway history. Meanwhile, Top Gun and Ferris Bueller battled for movie theater box office supremacy, while Madonna, Bon Jovi, and Whitney Houston dominated the radio.

But it was also a year of scientific and technological innovations, milestones, and benchmarks that would truly change the way we live, communicate, and see the world (and beyond).

This was 1986. And these are the moments that truly changed everything.

1. The Space Race Continued

The quest to send a manned mission to the moon began in the 1960s, but the race to go farther and stay longer still had plenty of momentum well into the 1980s. The US, the Soviet Union, and Japan would all make extraordinary leaps forward in 1986, starting with NASA's Voyager 2. Originally launched in 1977, Voyager 2 would become the first human-made object to fly past Uranus in 1986, on its way to being the only spacecraft to study all four of the solar system's giant planets (Jupiter, Saturn, Neptune, and Uranus) at close range.

In the January 1986 issue of Popular Science, writer Jim Schefter wondered if Voyager 2 could solve the mysteries of Uranus. Image: Popular Science Archive

The same year, the Soviet Union began in-orbit construction of a modular space station called Mir (after the Russian word for "world," which can also mean "community"). It would be the longest-lasting, most elaborate space station to date, and would house a total of 105 cosmonauts of 11 different nationalities, including Russian, French, Austrian, German, and British scientists, over the following years. Its goal was to better understand the challenges and obstacles in the way of permanent space living.

Finally, 1986 also saw the Japanese spacecraft Suisei, carrying a UV imaging system and solar wind instruments, get close enough to the year's other big pop culture phenomenon, Halley's Comet, to make some of the first significant findings about the mysterious galactic visitor that had returned after 76 years. Suisei was able to document Halley's rotation via ultraviolet imaging, measurement of variations in its water-discharge rate, and observations of ions originating from the comet being captured by Earth's magnetosphere.

2. Voyages Closer to Home

Back on Earth, less galactically inclined aircraft were making huge strides of their own. Just as the year was coming to an end, on December 23, 1986, a plane called the Rutan Voyager (named after one of the pilots and the plane's designer, brothers Dick and Burt Rutan) completed the first nonstop, non-refueled flight around the world.

It accomplished the task in just nine days, with Dick Rutan and co-pilot Jeana Yeager (curiously, not related to the other famous aviator, Chuck Yeager) at the helm. Not only was the mission impressive, it helped prove the durability and usefulness of the lightweight composite materials (carbon fiber, epoxy) that would go on to help fuel advancements in everything from luxury cars to sports equipment.

In 1984, we wondered if the Rutan Voyager could circle the globe without refueling. In 1986, it did just that. Image: Popular Science Archives

3. The Birth of the Laptop

On April 2, 1986, IBM debuted the IBM PC Convertible, the first commercially available laptop computer.

Although weighing in at a chunky 13 pounds and about as conveniently portable as a small suitcase, the PC Convertible still marked a significant step in the evolution of the home computer. What used to take up an entire room had become something that could fit on your desk, and now was something you could take with you relatively easily.

IBM's "briefcase computer" seen in the July 1986 issue of Popular Science. Image: Popular Science Archives

Just two years prior, the first commercially available handheld portable phone had been released, and while it, like the PC Convertible, didn't radically change the way we work and communicate overnight, they laid the groundwork for the laptops and smartphones we can't live without today.

4. Silver Screen Science

Not only were there plenty of blockbusters cementing their legacies as cultural touchstones and enduring cult favorites in 1986, but some were also slowly pushing the envelope in ways that would completely change the art form forever. And one of them was Jim Henson's Labyrinth. Yes, Labyrinth, the PG-rated 1986 fantasy oddity that famously paired goblin puppets with rock star David Bowie (wearing questionably PG-13 jodhpurs).

Princess Diana at the London premiere of 'Labyrinth,' with Ludo and Jim Henson on December 1, 1986. Image: John Shelley Collection/Avalon/Getty Images

It turns out, the white owl that flies over the opening credits was the first use of a realistic computer-generated animal. It wasn't quite Jurassic Park yet, but the seeds had been sown.

Meanwhile, at the Canada Pavilion at Expo '86, the world's fair hosted in Vancouver, British Columbia, audiences sat for a specially created film called Transitions. What they didn't know at the time was that this was the first-ever full-color 3D IMAX movie. Between this and Labyrinth, you could say that the path to Avatar started in 1986.

Oh, and filmmaker George Lucas was getting divorced. This isn't science, but he was three years past his triumphant Star Wars: Return of the Jedi, and the financial strain from ending his marriage, coupled with the box office bombing of his 1986 film Howard the Duck (an MCU movie before it was cool...or profitable), led him to sell off the computer animation division of Lucasfilm. The buyer? Steve Jobs. The company would become known as Pixar, and its first short film, Luxo Jr. (starring a sentient desk lamp that remains part of Pixar's logo to this day), would be the first CGI film nominated for an Academy Award and would set Pixar on the path to revolutionizing animated movies.

5. Gaming Builds Its Foundations

Nintendo Famicom home video game, launched in April 1986 in Japan. Image: Kurita KAKU/Gamma-Rapho via Getty Images

The primitive forms of home video game systems had begun laying some groundwork in the late '70s, but the broad U.S. launch of the Nintendo Entertainment System in 1986 (after being released in test markets a year earlier) took everything to the next level.

While this system was revolutionary from a hardware perspective, it was 1986 that saw the early stages of true video game fandom, and the acceptance of video games as an entertainment entity alongside movies, television, and music in ways they really hadn't been before. Yes, Pac-Man was a pop culture phenomenon, with breakfast cereals and even a novelty hit song, but the arrival of the NES and, more importantly, the debuts of games like The Legend of Zelda, Metroid, and Dragon Quest in 1986 introduced games with more sprawling play styles and deep lore. These weren't the same repetitive games meant to keep you pumping quarters into an arcade machine; they were an introduction to what would eventually develop into RPGs and open-world gaming experiences.

6. Progress Is Not Without Its Challenges

Sadly, not every world-changing moment in 1986 was a positive one. Science saw two of its greatest disasters strike that year, and their resonance is still being felt.

The first occurred on January 28, 1986, when millions watched live on TV as NASA's Challenger space shuttle exploded seconds after launch. More than just a shuttle launch, Challenger held the attention of the world because one of its passengers was a civilian schoolteacher named Christa McAuliffe. While the world mourned, NASA investigated the tragic accident and not only discovered technical explanations that would change its processes and equipment (it turned out the rubber seals on the rocket boosters deteriorated in the extreme cold temperatures of that day), but it would also revamp its safety and accountability protocols to help ensure future successful launches.

Space Shuttle Challenger explosion (1986)

A few months later, in April 1986, a sudden power surge during a reactor systems test caused a meltdown at the Chernobyl nuclear plant in Ukraine (then part of the Soviet Union). The fallout impacted vast areas of Ukraine, Russia, and Belarus, and a 30 km (roughly 18 mile) area around the plant is still uninhabited.

In the aftermath, organizations like the IAEA (International Atomic Energy Agency) worked to identify the plant's weaknesses and improved and upgraded the design safety of related reactors. There was also significant work done to heighten the focus on operational safety and regulatory oversight, improving shutdown mechanisms and increasing general safety awareness among nuclear reactor staff.

 


 

Optimizing Data Transfer in AI/ML Workloads

In a typical AI/ML workload, a deep learning model is executed on a dedicated GPU accelerator using input data batches it receives from a CPU host. Ideally, the GPU, the more expensive resource, should be maximally utilized, with minimal periods of idle time. Specifically, this means that whenever it completes its execution on a batch, the next batch should be "ripe and ready" for processing. When this doesn't happen, the GPU idles while waiting for input data, a common performance bottleneck often referred to as GPU starvation.

In previous posts (e.g., see A Caching Strategy for Identifying Bottlenecks on the Data Input Pipeline), we discussed common causes of this issue, including inefficient storage retrieval, CPU resource exhaustion, and host-to-device transfer bottlenecks. In this post, we zoom in on data transfer bottlenecks and revisit their identification and resolution, this time with the help of NVIDIA Nsight™ Systems (nsys), a performance profiler designed for analyzing the system-wide activity of workloads running on NVIDIA GPUs.

NVIDIA Nsight vs. PyTorch Profiler

Readers familiar with our work may be surprised at the mention of the NVIDIA Nsight profiler rather than PyTorch Profiler. In our previous posts we have advocated strongly for the use of PyTorch Profiler in AI/ML model development as a tool for identifying and optimizing runtime performance. Time and again, we have demonstrated its application to a wide variety of performance issues. Its use doesn't require any special installations and it can be run without special OS permissions. The NVIDIA Nsight profiler, on the other hand, requires a dedicated system setup (or a dedicated NVIDIA container) and, for some of its features, elevated permissions, making it less accessible and more complicated to use than PyTorch Profiler.

The two profilers differ in their focus: PyTorch Profiler is a framework profiler, tightly coupled with PyTorch and heavily focused on how models use the PyTorch software stack and supporting libraries. The NVIDIA Nsight profiler is a system-level profiler; it doesn't know the details of the model being run or which framework is being used, but rather how the components of the entire system are being used and utilized. While PyTorch Profiler excels at tracing the low-level operations of a PyTorch model execution, nsys provides a detailed view of the activities of the entire system (GPU hardware, CUDA streams, OS interrupts, network, PCIe, and so on). For many performance issues PyTorch Profiler is sufficient for identifying and fixing the source of the bottleneck; but some situations call for the nsys profiler, the "big guns," for deriving deeper insights into the inner workings of the underlying system.

In this post we intend to demonstrate some of the unique capabilities of the nsys profiler and their application to the common data-transfer bottleneck.

Outline

To facilitate our discussion we will define a toy ML workload with a data-transfer performance bottleneck and proceed to introduce a number of successive optimizations in an attempt to resolve it. Throughout the process, we will use the nsys profiler in order to analyze the system performance and assess the impact of the code changes.

Setup

We will run our experiments on an Amazon EC2 g6e.2xlarge instance with an NVIDIA L40S GPU, running an AWS Deep Learning (Ubuntu 24.04) AMI with PyTorch (2.8). To install the nsys CLI profiler (version 2025.6.1) we follow the official NVIDIA guidelines:

wget https://developer.nvidia.com/downloads/assets/tools/secure/nsight-systems/2025_6/NsightSystems-linux-cli-public-2025.6.1.190-3689520.deb
sudo apt install ./NsightSystems-linux-cli-public-2025.6.1.190-3689520.deb

The NVIDIA Tools Extension (NVTX) library allows us to annotate our code with human-readable labels to increase the readability and comprehension of the performance trace. While PyTorch offers built-in NVTX support via its torch.cuda.nvtx APIs, we will use the standalone nvtx package (version 0.2.14), which supports color-coding the trace timeline for better visual analysis:

pip install nvtx

Disclaimers

The code we will share is intended for demonstration purposes; please don't rely on its correctness or optimality. Please don't interpret our use of any library, tool, or platform as an endorsement of its use. The impact of the optimizations we will cover can vary greatly based on the details of the model and the runtime environment. Please be sure to assess their effect on your own use case before integrating them.

Many thanks to Yitzhak Levi and Gilad Wasserman for their contributions to this post.

A Toy PyTorch Model

We introduce a training script deliberately designed to contain a bottleneck in the data-input pipeline.

In the code block below we define a simple image classification model with a ResNet-18 backbone.

import time, torch, torchvision

DEVICE = "cuda"
model = torchvision.models.resnet18().to(DEVICE).train()
optimizer = torch.optim.Adam(model.parameters())

Next, we define a synthetic dataset which we will use to train our toy model.

from torch.utils.data import Dataset, DataLoader

WARMUP_STEPS = 10
PROFILE_STEPS = 3
COOLDOWN_STEPS = 1
TOTAL_STEPS = WARMUP_STEPS + PROFILE_STEPS + COOLDOWN_STEPS
BATCH_SIZE = 64
TOTAL_SAMPLES = TOTAL_STEPS * BATCH_SIZE
IMG_SIZE = 512

# A synthetic Dataset with random images and labels
class FakeDataset(Dataset):

    def __len__(self):
        return TOTAL_SAMPLES

    def __getitem__(self, index):
        img = torch.randn((3, IMG_SIZE, IMG_SIZE))
        label = torch.tensor(index % 10)
        return img, label

train_loader = DataLoader(
    FakeDataset(),
    batch_size=BATCH_SIZE
)

Finally, we define a standard training loop, programmed to run the nsys profiler for three steps using the torch.cuda.profiler start and stop commands, intended for use in conjunction with the nsys CLI. We highlight the components of the training step using the nvtx.annotate utility. Please refer to the official documentation for more details on profiling with nsys in PyTorch.

import nvtx
from torch.cuda import profiler

def copy_data(batch):
    data, targets = batch
    data_gpu = data.to(DEVICE)
    targets_gpu = targets.to(DEVICE)
    return data_gpu, targets_gpu


def compute_step(model, batch, optimizer):
    data, targets = batch
    output = model(data)
    loss = torch.nn.functional.cross_entropy(output, targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss


data_iter = iter(train_loader)

for i in range(TOTAL_STEPS):

    if i == WARMUP_STEPS:
        # start nsys profiler
        torch.cuda.synchronize()
        start_time = time.perf_counter()
        profiler.start()
    elif i == WARMUP_STEPS + PROFILE_STEPS:
        # stop nsys profiler
        torch.cuda.synchronize()
        profiler.stop()
        end_time = time.perf_counter()

    with nvtx.annotate(f"Batch {i}", color="blue"):
        with nvtx.annotate("get batch", color="red"):
            batch = next(data_iter)
        with nvtx.annotate("copy batch", color="yellow"):
            batch = copy_data(batch)
        with nvtx.annotate("Compute", color="green"):
            compute_step(model, batch, optimizer)

total_time = end_time - start_time
throughput = PROFILE_STEPS / total_time
print(f"Throughput: {throughput:.2f} steps/sec")

We run our script using the --capture-range=cudaProfilerApi option to start and stop the profiler programmatically. Please see the official documentation for full details on profiling from the nsys CLI.

nsys profile \
  --capture-range=cudaProfilerApi \
  --trace=cuda,nvtx,osrt \
  --output=baseline \
  python train.py

This results in a baseline.nsys-rep trace file, which we copy over to our development machine for analysis.

In order to draw a comparison to PyTorch Profiler, we define an alternate training loop programmed with PyTorch Profiler and annotated with the torch.profiler.record_function utility:

from torch.profiler import (
    profile, record_function, schedule, tensorboard_trace_handler
)

with profile(
    schedule=schedule(wait=0, warmup=WARMUP_STEPS,
                      active=PROFILE_STEPS, repeat=1),
    on_trace_ready=tensorboard_trace_handler('./baseline'),
    record_shapes=True,
    with_stack=True
) as prof:
    for i in range(TOTAL_STEPS):
        with record_function("get batch"):
            batch = next(data_iter)
        with record_function("copy batch"):
            batch = copy_data(batch)
        with record_function("compute"):
            compute_step(model, batch, optimizer)
        prof.step()

The throughput of our baseline experiment is 2.97 steps per second. In the next sections we will use the profile traces to identify performance bottlenecks in our training step and attempt to improve on this result.

Baseline Performance Analysis

To analyze the resultant nsys trace file, we open it in the Nsight Systems GUI application. In the image below, we zoom in on the timeline of two of the training steps captured by the profiler:

Baseline Nsight Systems Profiler Trace (by Author)

The trace contains a wealth of information, only a subset of which we will touch on in this post. Please see the nsys documentation for more functionalities and features.

The timeline is divided into two parts: the CUDA section, which reports GPU activity, and the threads section, which reports CPU activity. The CUDA section makes a clear distinction between GPU kernel (compute) activity (90.9%) and memory activity (9.1%). The top bars in each section report the utilization of each of the resources, and both sections include an NVTX row with the colored annotations we included in our training step. We note the following observations:

  1. The GPU is idle for roughly 50% of each training step. This can be seen from the portion of time taken by each batch (in blue) in the GPU NVTX bar and the large blocks of whitespace in between them.
  2. The GPU activity for each batch begins immediately after the "get batch" activity has completed on the CPU. It begins with the host-to-device memory copy, marked in light green, and continues with the kernel computations, marked in light blue.
  3. Once the CPU has launched the GPU memory and compute commands for batch N, it proceeds to the next batch in the training loop, resulting in a partial overlap of batch N+1 on the CPU with batch N on the GPU.
  4. The vast majority of the CPU thread is spent on the "get batch" activity. This constitutes the primary bottleneck in our baseline experiment.

The profiling trace points to a clear culprit: the dataloader. By default, PyTorch performs single-process data loading; a single CPU process is used to load the next data input batch, copy it to the GPU, and launch the compute kernels, all in a sequential manner. This typically results in severe under-utilization of the CPU resources by: 1) limiting data loading to just a single process, and 2) making the loading of the next batch contingent on the completion of the CPU processing (i.e., kernel loading) of the previous batch. Our irresponsible use of our CPU resources has resulted in our GPU being starved for input data.

The same conclusion could have been reached using the PyTorch Profiler trace shown below:

Baseline PyTorch Profiler Trace (by Author)

Here too, we can see long periods of GPU underutilization caused by the long “get batch” blocks on the CPU side.

Optimization 1: Multi-Process Data Loading

The first step is to modify the data input pipeline to use multi-process data loading. We set the number of workers to match the 8 vCPUs available on our Amazon EC2 g6e.2xlarge instance. In a real-world scenario, this value should be tuned for optimal throughput:

NUM_WORKERS = 8

train_loader = DataLoader(
    FakeDataset(),
    batch_size=BATCH_SIZE,
    num_workers=NUM_WORKERS
)

Following this change, our throughput jumps to 4.81 steps per second, a 62% improvement over our baseline result. The corresponding nsys profiler trace is shown below:

Multiproc Dataloading Nsight Systems Profiler Timeline (by Author)

Note that the purple “get batch” segment has become just a tiny sliver of each step in the NVTX bar. Instead, the yellow “copy batch” block now takes center stage. Thanks to our use of multi-process data loading, there is now always a new batch ready for processing. But can we do better?

Taking a closer look at the GPU section, we see that there is still a significant portion (~290 milliseconds) of idle time between the memory operation and the kernel compute. This idle time is perfectly aligned with an “munmap” operation in the OS runtime bar. The “munmap” block is a CPU-side memory cleanup operation performed just after the CUDA memory copy is complete. It occurs at the tail end of the long yellow “copy batch” operation. The compute kernels are launched onto the GPU only after the memory cleanup has completed. This is a clear pattern of a synchronous host-to-device memory copy: the CPU cannot proceed with kernel launching until the data copy has fully completed, and the GPU remains idle until the CPU launches the kernels.

The PyTorch Profiler trace shows the same GPU idle time, but it does not show the “munmap” hint. This is our first example of the advantage of the system-wide visibility of the nsys profiler.

Multiproc Dataloading PyTorch Profiler Trace (by Author)

With our finding of the data-copy performance bottleneck in hand, we proceed to our next optimization.

Optimization 2: Asynchronous Data Transfer

The solution to the bottleneck we have found is to program our training step to load data asynchronously. This enables the CPU to launch the compute kernels immediately after issuing the memory copy command, without waiting for the copy to complete. This way, the GPU can begin processing the kernels as soon as the CUDA memory copy is done. Enabling asynchronous data copy requires two changes: first, we must program the dataloader to use pinned memory (instead of pageable memory), and second, we must pass the non_blocking=True argument to the to() operations:

NUM_WORKERS = 8
ASYNC_DATATRANSFER = True


train_loader = DataLoader(
    FakeDataset(),
    batch_size=BATCH_SIZE,
    num_workers=NUM_WORKERS,
    pin_memory=ASYNC_DATATRANSFER
)

def copy_data(batch):
    data, targets = batch
    data_gpu = data.to(DEVICE, non_blocking=ASYNC_DATATRANSFER)
    targets_gpu = targets.to(DEVICE, non_blocking=ASYNC_DATATRANSFER)
    return data_gpu, targets_gpu

Using asynchronous data loading results in a throughput of 5.91 steps per second, an additional 23% improvement and a 99% improvement overall. The resulting profiling trace is shown below:

Async Dataloading Nsight Systems Profiler Timeline (by Author)

We now see all of the CPU operations bunched together at the beginning of the trace. We have removed all performance obstacles on the CPU side, allowing it to freely issue the data copies and kernel launches to the GPU. In the GPU section, we see continuous activity without any idle time. We do, however, see a clear separation between CUDA memory activities (in light green) and CUDA kernel activities (in light blue). PyTorch Profiler, in contrast, does not make this distinction clear. This is another advantage of the hardware-centric profiler and, in the case of our toy experiment, is what informs the next steps of our optimization.

Async Dataloading PyTorch Profiler Trace (by Author)

Optimization 3: Pipelining With CUDA Streams

Our final optimizations derive from the fact that modern GPUs, such as the NVIDIA L40S, use independent engines for copying memory (the DMA engines) and for executing compute kernels (the SMs). We can take advantage of this by parallelizing the distinct memory and kernel activities we observed in the nsys profiler trace. We will program this through the use of CUDA streams.

In a previous post, we expanded on the opportunities for optimizing AI/ML workloads using CUDA streams. Here, we apply a similar pipelining strategy: we define two distinct “copy” and “compute” CUDA streams and program the “copy” stream to copy batch N+1 at the same time that the “compute” stream is processing batch N:

# define two CUDA streams
compute_stream = torch.cuda.Stream()
copy_stream = torch.cuda.Stream()


# extract the first batch
next_batch = next(data_iter)
with torch.cuda.stream(copy_stream):
    next_batch = copy_data(next_batch)

for i in range(TOTAL_STEPS):

    if i == WARMUP_STEPS:
        torch.cuda.synchronize()
        start_time = time.perf_counter()
        profiler.start()
    elif i == WARMUP_STEPS + PROFILE_STEPS:
        torch.cuda.synchronize()
        profiler.stop()
        end_time = time.perf_counter()

    with nvtx.annotate(f"Batch {i}", color="blue"):
        # wait for the copy stream to complete the copy of batch N
        compute_stream.wait_stream(copy_stream)
        batch = next_batch

        # fetch and copy batch N+1 on the copy stream
        try:
            with nvtx.annotate("get batch", color="purple"):
                next_batch = next(data_iter)
            with torch.cuda.stream(copy_stream):
                with nvtx.annotate("copy batch", color="yellow"):
                    next_batch = copy_data(next_batch)
        except StopIteration:
            # reached the end of the dataset
            next_batch = None

        # execute the model on batch N on the compute stream
        with torch.cuda.stream(compute_stream):
            with nvtx.annotate("Compute", color="green"):
                compute_step(model, batch, optimizer)

total_time = end_time - start_time
throughput = PROFILE_STEPS / total_time
print(f"Throughput: {throughput:.2f} steps/sec")

This optimization results in a throughput of 6.44 steps per second, a 9% improvement over our previous experiment. Note that the impact of this optimization is capped by the duration of the longer of the two operation types. In our previous profile trace, the memory block took 15.5 milliseconds and the kernel block took 155 milliseconds. In the current profile trace, the entire GPU step takes 155 milliseconds, which means that the memory copy time is completely hidden by the kernel compute time and that our optimization has reached the maximum possible result.
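As a sanity check on the 9% figure, a back-of-the-envelope model of the pipelining, using the 15.5 ms copy and 155 ms compute durations read off the previous trace, shows why the gain is capped by the longer operation:

```python
# Durations read off the profiler trace in the text (milliseconds).
COPY_MS = 15.5      # host-to-device copy per batch
COMPUTE_MS = 155.0  # kernel compute per batch

# Without pipelining, each GPU step pays for the copy and then the compute.
serial_step = COPY_MS + COMPUTE_MS

# With two streams, the copy of batch N+1 runs under the compute of batch N,
# so the step time collapses to the longer of the two operations.
pipelined_step = max(COPY_MS, COMPUTE_MS)

print(f"serial step: {serial_step} ms")
print(f"pipelined step: {pipelined_step} ms")
print(f"upper bound on speedup: {serial_step / pipelined_step:.2f}x")
```

The model predicts a step time of 155 ms and an upper bound of roughly 1.10x, which lines up with the measured ~9% improvement.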

The use of CUDA streams and its impact on GPU utilization can be seen in the traces of both profilers:

Pipelined Nsight Systems Profiler Timeline (by Author)
Pipelined PyTorch Profiler Trace (by Author)

Optimization 4: Prefetching to CUDA

For our final step, we move the data copying from the main training loop into the data loading process: rather than explicitly calling the copy function inside the training loop, we assume that the batches returned from the data iterator are already placed on the GPU.

In the code block below, we wrap our dataloader with a CUDA-prefetching iterator class. Note that this is a simplified implementation intended for demonstration purposes. More work may be required for more complex scenarios (e.g., DDP training). Alternatively, you may consider a third-party implementation such as torchtnt.utils.data.data_prefetcher.CudaDataPrefetcher:

class DataPrefetcher:
    def __init__(self, loader):
        self.loader = iter(loader)
        self.stream = torch.cuda.Stream()
        self.next_batch = None
        self.preload()

    def preload(self):
        try:
            data, targets = next(self.loader)

            with torch.cuda.stream(self.stream):
                with nvtx.annotate("copy batch", color="yellow"):
                    next_data = data.to(DEVICE, non_blocking=True)
                    next_targets = targets.to(DEVICE, non_blocking=True)
            self.next_batch = (next_data, next_targets)
        except StopIteration:
            self.next_batch = (None, None)

    def __iter__(self):
        return self

    def __next__(self):
        torch.cuda.current_stream().wait_stream(self.stream)
        data, targets = self.next_batch
        self.preload()
        return data, targets


data_iter = DataPrefetcher(train_loader)

for i in range(TOTAL_STEPS):
    if i == WARMUP_STEPS:
        torch.cuda.synchronize()
        start_time = time.perf_counter()
        profiler.start()
    elif i == WARMUP_STEPS + PROFILE_STEPS:
        torch.cuda.synchronize()
        profiler.stop()
        end_time = time.perf_counter()

    with nvtx.annotate(f"Batch {i}", color="blue"):
        with nvtx.annotate("get batch", color="purple"):
            batch = next(data_iter)
        with nvtx.annotate("Compute", color="green"):
            loss = compute_step(model, batch, optimizer)

total_time = end_time - start_time
throughput = PROFILE_STEPS / total_time
print(f"Throughput: {throughput:.2f} steps/sec")

This optimization results in a throughput of 6.44 steps per second, the same as our previous experiment. This should not surprise us, since we have already seen that the throughput is bound by the 155-millisecond GPU compute, and this optimization does nothing to reduce the kernel compute time.

More generally, despite the removal of the copy call from the main loop, you may have a hard time finding a situation where this change has a meaningful impact on performance, since the copy was already being performed asynchronously. However, given the minimal changes to the training loop, you may find this solution cleaner and/or more applicable for use with high-level libraries that do not allow fine-grained control of the training loop.

Unsurprisingly, the profile traces for this experiment appear nearly identical to the previous ones. The main difference is the placement of the yellow “copy batch” block in the NVTX row of the CPU section.

Data Prefetching Nsight Systems Profiler Timeline (by Author)
Data Prefetching PyTorch Profiler Trace (by Author)

Results

The table below summarizes the results of our experiments:

Experiment Results (by Author)

The optimizations, driven by the Nsight Systems profiler, resulted in an overall 2.17x increase in runtime performance.
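The cumulative speedup can be reconstructed from the throughput figures quoted above. The baseline throughput is not restated in this section, so it is inferred here from the statement that 4.81 steps/sec was a 62% improvement over the baseline:

```python
# Throughputs quoted in the text (steps/sec); the baseline is inferred from
# "4.81 steps per second, a 62% improvement over our baseline result".
baseline = 4.81 / 1.62

experiments = [
    ("baseline", baseline),
    ("multi-process data loading", 4.81),
    ("asynchronous data transfer", 5.91),
    ("CUDA-stream pipelining", 6.44),
]

for name, steps_per_sec in experiments:
    print(f"{name:28s} {steps_per_sec:5.2f} steps/sec "
          f"({steps_per_sec / baseline:.2f}x)")
```

The final row works out to 2.17x, matching the overall figure reported above (and the 5.91 steps/sec row works out to 1.99x, consistent with the "99% improvement overall" quoted earlier).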

Summary

GPU starvation is a common performance bottleneck that can have a devastating impact on the efficiency and cost of AI/ML workloads. In this post, we demonstrated how to use the Nsight Systems profiler to study the causes of performance bottlenecks and take informed steps toward their resolution. Along the way, we emphasized the unique capabilities of the Nsight Systems profiler when compared to the built-in, framework-centric PyTorch Profiler, particularly its deep system-level visibility.

Our focus in this post has been on the host-to-device data copy that typically occurs at the beginning of the training step. However, data-transfer bottlenecks can appear at different phases of training. In a sequel to this post, we intend to repeat our nsys profiling analysis on data copies going in the opposite direction, from the device to the host. Stay tuned!

Hackers claim to hack Resecurity, firm says it was a honeypot



Update: Article updated to reflect that ShinyHunters says they were not involved in this activity. We have updated our story and title.

Threat actors associated with the “Scattered Lapsus$ Hunters” (SLH) claim to have breached the systems of cybersecurity firm Resecurity and stolen internal data, while Resecurity says the attackers only accessed a deliberately deployed honeypot containing fake information used to monitor their activity.

Today, the threat actors published screenshots on Telegram of the alleged breach, claiming they stole employee data, internal communications, threat intelligence reports, and client information.


“We would like to announce that we have gained full access to REsecurity systems,” the group wrote on Telegram, claiming to have stolen “all internal chats and logs”, “full employee data”, “threat intel related reports”, and a “full client list with details.”

Portion of the Telegram post by the threat actors
Source: BleepingComputer

As proof of their claims, the threat actors published screenshots they allege were stolen from Resecurity, including what appears to be a Mattermost collaboration instance showing communications between Resecurity employees and Pastebin personnel regarding malicious content hosted on the text-sharing platform.

The threat actors, who refer to themselves as “Scattered Lapsus$ Hunters” due to the alleged overlap between the ShinyHunters, Lapsus$, and Scattered Spider threat actors, said the attack was retaliation for what they claim are ongoing attempts by Resecurity to socially engineer the group and learn more about its operations.

The threat actors say Resecurity employees pretended to be buyers during the sale of an alleged Vietnam financial system database, seeking free samples and more information.

After publishing this article, the ShinyHunters spokesperson told BleepingComputer that they were not involved in this activity. While ShinyHunters has always claimed to be part of Scattered Lapsus$ Hunters, they state they were not involved in this attack.

We have updated our article with this information.

If you have any information regarding this incident or other undisclosed attacks, you can contact us confidentially via Signal at 646-961-3731 or at tips@bleepingcomputer.com.

Resecurity says it was a honeypot

Resecurity disputes the threat actors' claims, stating that the allegedly breached systems are not part of its official production infrastructure but were instead a honeypot designed to attract and monitor the threat actors.

After BleepingComputer contacted Resecurity regarding the claim, the company shared a report published on December 24, in which it says it first detected a threat actor probing its publicly exposed systems on November 21, 2025.

The company says its DFIR team identified reconnaissance indicators early and logged multiple IP addresses linked to the actor, including some originating from Egypt and Mullvad VPN services.

Resecurity said it responded by deploying a “honeypot” account within an isolated environment that allowed the threat actor to log in and interact with systems containing fake employee, customer, and payment data while being monitored by the researchers.

A honeypot is a deliberately exposed, monitored system or account designed to lure attackers, allowing them to be observed and analyzed and intelligence to be gathered on their activity without risking real data or infrastructure.

The company says it populated the honeypot with synthetic datasets designed to closely resemble real-world enterprise data. These included more than 28,000 synthetic consumer records and over 190,000 synthetic payment transaction records, both generated from Stripe's official API format.

According to Resecurity, the threat actor began attempting to automate data exfiltration in December, generating more than 188,000 requests between December 12 and December 24 while using large numbers of residential proxy IP addresses.

During this activity, the company says it collected telemetry on the attacker's tactics, techniques, and infrastructure.

Resecurity monitoring activity on honeypot
Source: Resecurity

Resecurity claims that the attacker briefly exposed confirmed IP addresses on multiple occasions due to proxy connection failures, and that the intel was reported to law enforcement.

After observing additional activity, Resecurity says it added further fake datasets to test the attacker's behavior, which led to additional OPSEC failures and helped narrow down the threat actor's infrastructure.

The firm says it later identified servers used to automate the attack through residential proxies and shared that intelligence with law enforcement as well.

“Once the actor was located using available network intelligence and timestamps, a foreign law enforcement organization, a partner of Resecurity, issued a subpoena request regarding the threat actor,” says Resecurity.

At the time of writing, the threat actors have not provided any further proof, only issuing a new Telegram post stating that more information will be coming soon.

“Good damage control Resecurity. More info coming soon!,” reads a post on Telegram.


Womanizer Coupons: Save 15% in December



Since 2014, Womanizer has been satisfying people with vulvas all over the world. Thanks to its revolutionary Pleasure Air Technology that mimics the feeling of oral sex, not only has Womanizer discovered a way to stimulate the 10,000+ nerve endings in the clitoris in a way that hadn't been done by a sex toy before (yes, they were the first), but the brand can even boast a 100% orgasm rate among users.

As a company that puts sexual pleasure front and center, Womanizer has continued to add to its very impressive lineup of orgasm-inducing toys. They've even branched out by creating products like the Womanizer Duo and Womanizer Duo 2, both of which stimulate the clitoris and G-spot simultaneously. (Blended orgasm, anyone?) As recently as March 2025, Womanizer released its latest toy, the Womanizer Upgrade, the first toy of its kind because it allows the user to choose between the Pleasure Air Technology or traditional vibrations. I was fortunate enough to review the Upgrade for WIRED, giving it a 7/10 thanks to its ability to stand by its word and deliver me one heck of an orgasm.

But because the Upgrade is just one of dozens of Womanizer products that have hit the market in the last 11 years, I'm the first to admit that it can be tricky to choose which one is best for you. That's where Womanizer coupons come into play, because no one should have to settle on just one of their fantastic sex toys.

Save 12% on Everything With Our Exclusive Womanizer Coupon

If you've been wanting to try Womanizer, but you were holding out for a sale or deal, then I'm happy to announce that we have a great one for you. At checkout, use the Womanizer promo code and you'll score 12% off everything sitewide, including sale products.

What Makes the Womanizer Premium 2 So Popular?

Overwhelmed and unsure where you should begin your Womanizer journey? The Womanizer Premium 2 is the perfect start to a life-long love affair with Womanizer. It's easy to use, has 12 intensity levels, and you can even set it to Womanizer Autopilot so you can focus 100% on being in the moment. It's also waterproof should you want to experiment with its sensations in the shower or bath.

Get 15% off Sitewide With a Womanizer Coupon Code

Looking to level up on the Womanizer deals? If you sign up on the website, you'll get a Womanizer coupon code emailed directly to you. Valid for seven days, this unique code gets you 15% off everything on the site and can even be combined with other Womanizer discounts.

This gives you a great opportunity to purchase the Womanizer Premium 2, so your original Premium has a friend. If you travel a lot for work or for pleasure and need something smaller, but just as powerful, then put that promo code toward the Womanizer Liberty 2 or Womanizer Starlet Snow. Both are ideal for the person who's always on the go, but also prioritizes sexual pleasure.

Shop Womanizer Sales and Get up to 50% off Sex Toys

Womanizer isn't just great at keeping people with vulvas knee-deep in orgasms, but at doing so with your budget in mind. Because sexual pleasure should be affordable and accessible for everyone, the Womanizer sale offers up to 50% off certain products at all times. It's a great selection of Womanizer sale items that showcases just how diverse the brand is. On the sale page, you won't just find Pleasure Air Technology sex toys, but vibrators and penis strokers too. It's a great way to get yourself a little something and feel good knowing that it was a total bargain.

How to Get a Free Toy With Purchase

Let's be honest: the best things in life are free. Because Womanizer knows that and realizes we all deserve a freebie from time to time, they want to make your day. With every Womanizer order over $199, you get a free Womanizer toy at checkout. Choose between the Womanizer OG, the Womanizer Classic 2, or the We-Vibe Bond. All of them make an excellent gift for yourself from Womanizer or a gift for someone you love.

Enjoy Free Shipping on Your Womanizer Order

No matter what coupon code you're using, sale items you're purchasing, or discounts you have, every Womanizer order over $30 gets you free shipping all year round. If you don't like your product for whatever reason, know that as a Womanizer customer you can shop for sex toys risk-free thanks to their 100 Day Pleasure Guarantee. Also, all products include a 5-year warranty, so you can be confident in their quality.

Save 15% With a Womanizer Student Discount

If you're still in school, Womanizer offers 15% off all products with its student discount. You just need to register your phone number to verify your student status. If you're not a student, but are a graduate, teacher, healthcare worker, first responder, low-income, military personnel, a parent, or a charity worker, you can also enjoy 15% off everything. Womanizer has teamed up with Student Beans and Beans iD to offer exclusive discounts for a range of different groups. Sexual pleasure is a human right, and Womanizer wants all of us to exercise that right with the help of discounts and coupon codes.

Creating Excel tables with putexcel, part 3: Writing custom reports for arbitrary variables



In my last post, I demonstrated how to use putexcel to recreate common Stata output in Microsoft Excel. Today I want to show you how to create custom reports for arbitrary variables. I'm going to create tables that combine cell counts with row percentages, and means with standard deviations. But you can modify the examples below to include column percentages, percentiles, standard errors, confidence intervals, or any statistic. I'm also going to pass the variable names into my programs using local macros. This will allow me to create the same report for arbitrary variables by simply assigning new variable names to the macros. You can extend this idea by creating a do-file for each report and passing the variable names into the do-files. This is another important step toward our goal of automating the creation of reports in Excel.

Today's blog post is long and contains many large code blocks. This is because each example contains the code from the previous example along with new lines of code. This lets you see the new code in the context of the overall program, but it also makes the post appear longer than it is. Much of the code in the code blocks is the same from example to example.

Example 1: Writing returned results to Excel

Let's begin by using tabulate to create a matrix of cell counts for sex and race. I could type


tabulate sex race, matcell(cellcounts)

but I would like the ability to tabulate any two categorical variables. So I first store sex in the local macro RowVar and race in the local macro ColVar. Now I can tabulate sex and race using their corresponding local macros.


. local RowVar = "sex"

. local ColVar = "race"

. tabulate `RowVar' `ColVar', matcell(cellcounts)

           |               Race
       Sex |     Black      Other      White |     Total
-----------+---------------------------------+----------
    Female |       101         12        563 |       676 
      Male |        75         10        506 |       591 
-----------+---------------------------------+----------
     Total |       176         22      1,069 |     1,267 

The cell counts are stored in the matrix cellcounts.


. matrix list cellcounts

cellcounts[2,3]
     c1   c2   c3
r1  101   12  563
r2   75   10  506

I can type return list to see a list of the scalars returned by tabulate. The total number of observations is stored in the scalar r(N), the number of rows is stored in r(r), and the number of columns is stored in r(c).


. return list

scalars:
                  r(N) =  1267
                  r(r) =  2
                  r(c) =  3

I can store these scalars in local macros so that I can use them later.


. local TotalCount = r(N)

. local RowCount = r(r)

. local ColCount = r(c)

Example 2: Looping over rows and columns

I can use the stored row and column counts to loop over each cell of the matrix cellcounts.


. forvalues row = 1/`RowCount' {
         forvalues col = 1/`ColCount' {
                 local cellcount = cellcounts[`row',`col']
                 display "cellcounts[`row',`col'] = `cellcount'"
         }
}
cellcounts[1,1] = 101
cellcounts[1,2] = 12
cellcounts[1,3] = 563
cellcounts[2,1] = 75
cellcounts[2,2] = 10
cellcounts[2,3] = 506

Looping over each cell in the matrix allows us to format the number in each cell and/or use the number to calculate another quantity such as a percentage.

Example 3: The char() function

The rows and columns of matrices are indexed with numbers. The rows of Excel tables are also indexed with numbers, but the columns are indexed with letters. I can translate the column number of a matrix to a column letter in Excel using the char() function. The argument of the char() function is an ASCII number, and the function returns the corresponding ASCII character. For example, char(65) returns the letter “A”, char(66) returns the letter “B”, and so forth.


. display char(65)
A

. display char(66)
B

. display char(67)
C
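The same ASCII arithmetic can be checked in any language. As an illustrative aside (not part of the Stata program), Python's chr() plays the role of Stata's char(). Note that this trick only covers the first 26 Excel columns, A through Z; columns AA, AB, and so on would need extra handling.

```python
def excel_column(col):
    """Map a 1-based column number to a single Excel column letter (A-Z only)."""
    if not 1 <= col <= 26:
        raise ValueError("only single-letter columns are handled")
    return chr(64 + col)  # 64 + 1 -> 65 -> "A"

# Build the Excel cell name for matrix row 2, column 3.
cell = excel_column(3) + str(2)
print(cell)  # -> C2
```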

I can use the function char(64 + `col') in my loop to translate the column number of the matrix to the column letter of the Excel table. Line 4 in Code block 1 below stores the cell name to the local macro Cell. I could have used "`row'" in line 4 rather than string(`row'). But I will eventually need the string() function, so I am using it here for consistency. Line 5 then uses putexcel to write the value of Cell to the corresponding cell in Excel.

Code block 1: Looping with char()


putexcel set putexcel3.xlsx, sheet(example3) replace
forvalues row = 1/`RowCount' {
    forvalues col = 1/`ColCount' {
	local Cell = char(64 + `col') + string(`row')
	putexcel `Cell' = "`Cell'", hcenter
    }
}

Example 4: Writing cell counts to Excel

Code block 2 demonstrates how to write the cell counts from the matrix to the Excel table. Line 4 of Code block 2 stores the cell count from the matrix cellcounts to the local macro CellContents, line 5 stores the destination cell in the Excel table to the local macro Cell, and line 6 uses putexcel to write CellContents to Cell in the Excel table.

Code block 2:


putexcel set putexcel3.xlsx, sheet(example4) modify
forvalues row = 1/`RowCount' {
    forvalues col = 1/`ColCount' {
	local CellContents = cellcounts[`row',`col']
	local Cell = char(64 + `col') + string(`row')
	putexcel `Cell' = "`CellContents'", hcenter
    }
}

Example 5: Writing row percentages to Excel

I may prefer to write the row percentages to the Excel table rather than the cell counts. I will need both the cell counts and the row totals to calculate the row percentages. The tabulate command in line 3 of Code block 3 below stores the row totals to the vector rowtotals. Line 11 stores the cell counts to the local macro cellcount. Line 12 calculates and formats the cell percentage and stores it to the local macro cellpercent. Line 13 adds the “%” symbol to cellpercent and stores the resulting string to the local macro CellContents.

Code block 3:


local RowVar = "sex"
local ColVar = "race"
tabulate `RowVar' if !missing(`ColVar'), matcell(rowtotals)
tabulate `RowVar' `ColVar', matcell(cellcounts)
local RowCount = r(r)
local ColCount = r(c)

putexcel set putexcel3.xlsx, sheet(example5) modify
forvalues row = 1/`RowCount' {
    forvalues col = 1/`ColCount' {
	local cellcount = cellcounts[`row',`col']
	local cellpercent = string(100*`cellcount'/rowtotals[`row',1],"%9.1f")
	local CellContents = "`cellpercent'%"
	local Cell = char(64 + `col') + string(`row')
	putexcel `Cell' = "`CellContents'", right
    }
}

Example 6: Writing cell counts and row percentages to Excel

I could write both the cell count and the row percentage to each cell. I can do this by modifying line 13 from Code block 3 above. Line 13 in Code block 4 below stores both cellcount and cellpercent to the local macro CellContents.

Code block 4:


local RowVar = "sex"
local ColVar = "race"
tabulate `RowVar' if !missing(`ColVar'), matcell(rowtotals)
tabulate `RowVar' `ColVar', matcell(cellcounts)
local RowCount = r(r)
local ColCount = r(c)

putexcel set putexcel3.xlsx, sheet(example6) modify
forvalues row = 1/`RowCount' {
    forvalues col = 1/`ColCount' {
	local cellcount = cellcounts[`row',`col']
	local cellpercent = string(100*`cellcount'/rowtotals[`row',1],"%9.1f")
	local CellContents = "`cellcount' (`cellpercent'%)"
	local Cell = char(64 + `col') + string(`row')
	putexcel `Cell' = "`CellContents'", right
    }
}

Example 7: Adding row labels to Excel tables

Next I want to add row labels to my Excel table. I could type "Female" and "Male" into the Excel table, but I would like to be able to change the row variable at the top of my program and label the rows automatically.

Value labels such as "0=Female" and "1=Male" are defined using label define, and the labels are attached to a variable using label values. If I type describe sex, I can see that the value label attached to sex is named SexLabel.


. describe sex

              storage   display    value
variable name   type    format     label      variable label
------------------------------------------------------------------------------
sex             byte    %9.0g      SexLabel   Sex

And I can view the definition of SexLabel by typing label list SexLabel.


. label list SexLabel
SexLabel:
           0 Female
           1 Male

I can access all of the information about the value labels for a variable using only the variable name. The name of the current row variable, sex, is stored in the local macro RowVar. I can store the value label for RowVar to the local macro RowValueLabel using the macro function below.


. local RowVar = "sex"

. local RowValueLabel : value label `RowVar'

. display "`RowValueLabel'"
SexLabel

I can use levelsof to store the numeric categories of RowVar to the local macro RowLevels.


. levelsof `RowVar', local(RowLevels)
0 1

. display "`RowLevels'"
0 1

I can refer to each of the numeric categories in RowLevels using the word() function. For example, the first "word" in the local macro RowLevels is "0". I can store this "word" in the local macro RowValueLabelNum using the following macro function.


. local RowValueLabelNum = word("`RowLevels'", 1)

. display "`RowValueLabelNum'"
0

I can then store the label associated with "0" to the local macro RowLabel using the following macro function.


. local RowLabel : label `RowValueLabel' `RowValueLabelNum'

. display "`RowLabel'"
Female

I can use the same approach to store the second category of sex.


. local RowValueLabelNum = word("`RowLevels'", 2)

. display "`RowValueLabelNum'"
1

. local RowLabel : label `RowValueLabel' `RowValueLabelNum'

. display "`RowLabel'"
Male

Code block 5 below incorporates these commands to extract the value labels for RowVar and uses putexcel to write the labels to my Excel table.

I begin by shifting the cells of my Excel table down one row and one column to the right. This makes room for the row labels, and for the column labels I will add later. I shift the table down and right in lines 16 and 23 below by adding 1 to the arguments of the char() and string() functions.
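As an aside, the char() and string() functions are what build the A1-style cell references throughout these programs: char(65) returns "A", so char(64 + `col') converts a column number to its letter, and string(`row') appends the row number. A minimal sketch (note that this simple scheme only covers columns A through Z):

```stata
// char(64 + col) converts a column number to a letter: 1 -> "A", 2 -> "B", ...
// string(row) converts the row number to text, yielding an A1-style reference.
local col = 2
local row = 3
local Cell = char(64 + `col') + string(`row')
display "`Cell'"    // displays B3
```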

Line 8 stores the value label of RowVar in the local macro RowValueLabel. Line 9 stores the numeric categories of RowVar in the local macro RowLevels.

Lines 14-17 extract the label associated with each numeric category and write it to my Excel table.

Code block 5:


local RowVar = "sex"
local ColVar = "race"
tabulate `RowVar' if !missing(`ColVar'), matcell(rowtotals)
tabulate `RowVar' `ColVar', matcell(cellcounts)
local RowCount = r(r)
local ColCount = r(c)

local RowValueLabel : value label `RowVar'
levelsof `RowVar', local(RowLevels)

putexcel set putexcel3.xlsx, sheet(example7) modify
forvalues row = 1/`RowCount' {

    local RowValueLabelNum = word("`RowLevels'", `row')
    local CellContents : label `RowValueLabel' `RowValueLabelNum'
    local Cell = char(64 + 1) + string(`row'+1)
    putexcel `Cell' = "`CellContents'", right

    forvalues col = 1/`ColCount' {
	local cellcount = cellcounts[`row',`col']
	local cellpercent = string(100*`cellcount'/rowtotals[`row',1],"%9.1f")
	local CellContents = "`cellcount' (`cellpercent'%)"
	local Cell = char(64 + `col' + 1) + string(`row' + 1)
	putexcel `Cell' = "`CellContents'", right
    }
}

Example 8: Adding row totals to an Excel table

Next I want to add row totals to my Excel table. I stored the row totals in the matrix rowtotals earlier so that I could calculate row percentages, so all I need to do is write the values from that matrix to my Excel table.

Line 3 of Code block 6 below stores the row totals to the matrix rowtotals. Lines 19-21 extract the total for each row from rowtotals and write it to the Excel table. Note that I write the row totals one column to the right of the table by adding 2 to the argument of the char() function in line 20.

Code block 6:


local RowVar = "sex"
local ColVar = "race"
tabulate `RowVar' if !missing(`ColVar'), matcell(rowtotals)
tabulate `RowVar' `ColVar', matcell(cellcounts)
local RowCount = r(r)
local ColCount = r(c)

levelsof `RowVar', local(RowLevels)
local RowValueLabel : value label `RowVar'

putexcel set putexcel3.xlsx, sheet(example8) modify
forvalues row = 1/`RowCount' {

    local RowValueLabelNum = word("`RowLevels'", `row')
    local CellContents : label `RowValueLabel' `RowValueLabelNum'
    local Cell = char(64 + 1) + string(`row'+1)
    putexcel `Cell' = "`CellContents'", right
	
    local CellContents = rowtotals[`row',1]
    local Cell = char(64 + `ColCount' + 2) + string(`row' + 1)
    putexcel `Cell' = "`CellContents'", hcenter

    forvalues col = 1/`ColCount' {
	local cellcount = cellcounts[`row',`col']
	local cellpercent = string(100*`cellcount'/rowtotals[`row',1],"%9.1f")
	local CellContents = "`cellcount' (`cellpercent'%)"
	local Cell = char(64 + `col' + 1) + string(`row' + 1)
	putexcel `Cell' = "`CellContents'", right
    }
}

Example 9: Adding column labels and totals to an Excel table

I can add column labels and totals using the same strategy. I begin by saving the column totals to the matrix coltotals in line 4 of Code block 7 below. Line 12 stores the value label for ColVar in the local macro ColValueLabel, and line 13 stores the numeric categories of ColVar in the local macro ColLevels.

Lines 34-43 write the value labels and column totals to the Excel spreadsheet. I only need to write this information to Excel one time, so I have specified that these lines run only when the condition if `row'==1 is met.

Code block 7:


local RowVar = "sex"
local ColVar = "race"
tabulate `RowVar' if !missing(`ColVar'), matcell(rowtotals)
tabulate `ColVar' if !missing(`RowVar'), matcell(coltotals)
tabulate `RowVar' `ColVar', matcell(cellcounts)
local RowCount = r(r)
local ColCount = r(c)

local RowValueLabel : value label `RowVar'
levelsof `RowVar', local(RowLevels)

local ColValueLabel : value label `ColVar'
levelsof `ColVar', local(ColLevels)

putexcel set putexcel3.xlsx, sheet(example9) modify
forvalues row = 1/`RowCount' {

    local RowValueLabelNum = word("`RowLevels'", `row')
    local CellContents : label `RowValueLabel' `RowValueLabelNum'
    local Cell = char(64 + 1) + string(`row'+1)
    putexcel `Cell' = "`CellContents'", right
	
    local CellContents = rowtotals[`row',1]
    local Cell = char(64 + `ColCount' + 2) + string(`row' + 1)
    putexcel `Cell' = "`CellContents'", hcenter

    forvalues col = 1/`ColCount' {
	local cellcount = cellcounts[`row',`col']
	local cellpercent = string(100*`cellcount'/rowtotals[`row',1],"%9.1f")
	local CellContents = "`cellcount' (`cellpercent'%)"
	local Cell = char(64 + `col' + 1) + string(`row' + 1)
	putexcel `Cell' = "`CellContents'", right
	
	if `row'==1 {
		local ColValueLabelNum = word("`ColLevels'", `col')
		local CellContents : label `ColValueLabel' `ColValueLabelNum'
		local Cell = char(64 + `col' + 1) + string(1)
		putexcel `Cell' = "`CellContents'", hcenter
			
		local CellContents = coltotals[`col',1]
		local Cell = char(64 + `col' + 1) + string(`RowCount' + 2)
		putexcel `Cell' = "`CellContents'", hcenter
	}
    }
}

Example 10: Formatting an Excel table

Now that I have all of the numbers and labels in my Excel table, I want to add some lines to make it easier to read.

Lines 49-50 in Code block 8 below write the total count to the bottom right corner of the table. Lines 52-53 label the column-total column, and lines 55-56 label the row-total row.

I want to add lines to my table by specifying cell ranges. I could do this more succinctly than I have in lines 58-73, but the code would be difficult to read. Lines 58-61 store the cells that define the four corners of the table in the local macros UpperLeft, UpperRight, BottomLeft, and BottomRight. Lines 63-73 use these four cells to define the cell ranges used to add the lines to the Excel table.

Code block 8:


local RowVar = "sex"
local ColVar = "race"
tabulate `RowVar' if !missing(`ColVar'), matcell(rowtotals)
tabulate `ColVar' if !missing(`RowVar'), matcell(coltotals)
tabulate `RowVar' `ColVar', matcell(cellcounts)
local RowCount = r(r)
local ColCount = r(c)
local TotalCount = r(N)

levelsof `RowVar', local(RowLevels)
local RowValueLabel : value label `RowVar'

levelsof `ColVar', local(ColLevels)
local ColValueLabel : value label `ColVar'

putexcel set putexcel3.xlsx, sheet(example10) modify
forvalues row = 1/`RowCount' {

    local RowValueLabelNum = word("`RowLevels'", `row')
    local CellContents : label `RowValueLabel' `RowValueLabelNum'
    local Cell = char(64 + 1) + string(`row'+1)
    putexcel `Cell' = "`CellContents'", right
	
    local CellContents = rowtotals[`row',1]
    local Cell = char(64 + `ColCount' + 2) + string(`row' + 1)
    putexcel `Cell' = "`CellContents'", hcenter

    forvalues col = 1/`ColCount' {
	local cellcount = cellcounts[`row',`col']
	local cellpercent = string(100*`cellcount'/rowtotals[`row',1],"%9.1f")
	local CellContents = "`cellcount' (`cellpercent'%)"
	local Cell = char(64 + `col' + 1) + string(`row' + 1)
	putexcel `Cell' = "`CellContents'", right
	
	if `row'==1 {
		local ColValueLabelNum = word("`ColLevels'", `col')
		local CellContents : label `ColValueLabel' `ColValueLabelNum'
		local Cell = char(64 + `col' + 1) + string(1)
		putexcel `Cell' = "`CellContents'", hcenter
			
		local CellContents = coltotals[`col',1]
		local Cell = char(64 + `col' + 1) + string(`RowCount' + 2)
		putexcel `Cell' = "`CellContents'", hcenter
	}
    }
}

local Cell = char(64 + `ColCount' + 2) + string(`RowCount' + 2)
putexcel `Cell' = "`TotalCount'", hcenter

local Cell = char(64 + `ColCount' + 2) + string(1)
putexcel `Cell' = "Total", hcenter

local Cell = char(64 + 1) + string(`RowCount' + 2)
putexcel `Cell' = "Total", right

local UpperLeft = char(64 + 1) + string(1)
local UpperRight = char(64 + `ColCount' + 2) + string(1)
local BottomLeft = char(64 + 1) + string(`RowCount'+2)
local BottomRight = char(64 + `ColCount' + 2) + string(`RowCount'+2)

local CellRange = "`UpperLeft':`UpperRight'"
putexcel `CellRange', border(bottom)

local CellRange = "`BottomLeft':`BottomRight'"
putexcel `CellRange', border(top)

local CellRange = "`UpperLeft':`BottomLeft'"
putexcel `CellRange', border(right)

local CellRange = "`UpperRight':`BottomRight'"
putexcel `CellRange', border(left)

Example 11: Creating Excel tables for arbitrary variables

At this point, you may be wondering whether it is worth investing the time necessary to write tables to Excel using this approach. I could have created my Excel table manually in a fraction of the time it took me to write this program. But if I need to create this table, and tables like it, many times in the future, this approach pays big dividends in time saved. For example, I can swap the rows and columns of my table simply by switching the variable assignments in lines 1 and 2 of Code block 9 below.

Code block 9:


native RowVar = "race"
native ColVar = "intercourse"
tabulate `RowVar' if !lacking(`ColVar'), matcell(rowtotals)
tabulate `ColVar' if !lacking(`RowVar'), matcell(coltotals)
tabulate `RowVar' `ColVar', matcell(cellcounts)
native RowCount = r(r)
native ColCount = r(c)
native TotalCount = r(N)

levelsof `RowVar', native(RowLevels)
native RowValueLabel : worth label `RowVar'

levelsof `ColVar', native(ColLevels)
native ColValueLabel : worth label `ColVar'

putexcel set putexcel3.xlsx, sheet(example11) modify
forvalues row = 1/`RowCount' {

    native RowValueLabelNum = phrase("`RowLevels'", `row')
    native CellContents : label `RowValueLabel' `RowValueLabelNum'
    native Cell = char(64 + 1) + string(`row'+1)
    putexcel `Cell' = "`CellContents'", proper
	
    native CellContents = rowtotals[`row',1]
    native Cell = char(64 + `ColCount' + 2) + string(`row' + 1)
    putexcel `Cell' = "`CellContents'", hcenter

    forvalues col = 1/`ColCount' {
	native cellcount = cellcounts[`row',`col']
	native cellpercent = string(100*`cellcount'/rowtotals[`row',1],"%9.1f")
	native CellContents = "`cellcount' (`cellpercent'%)"
	native Cell = char(64 + `col' + 1) + string(`row' + 1)
	putexcel `Cell' = "`CellContents'", proper
	
	if `row'==1 {
		native ColValueLabelNum = phrase("`ColLevels'", `col')
		native CellContents : label `ColValueLabel' `ColValueLabelNum'
		native Cell = char(64 + `col' + 1) + string(1)
		putexcel `Cell' = "`CellContents'", hcenter
			
		native CellContents = coltotals[`col',1]
		native Cell = char(64 + `col' + 1) + string(`RowCount' + 2)
		putexcel `Cell' = "`CellContents'", hcenter
	}
    }
}

native Cell = char(64 + `ColCount' + 2) + string(`RowCount' + 2)
putexcel `Cell' = "`TotalCount'", hcenter

native Cell = char(64 + `ColCount' + 2) + string(1)
putexcel `Cell' = "Complete", hcenter

native Cell = char(64 + 1) + string(`RowCount' + 2)
putexcel `Cell' = "Complete", proper

native UpperLeft = char(64 + 1)+ string(1)
native UpperRight = char(64 + `ColCount' + 2)+ string(1)
native BottomLeft = char(64 + 1)+ string(`RowCount'+2)
native BottomRight = char(64 + `ColCount' + 2)+ string(`RowCount'+2)

native CellRange =  "`UpperLeft':`UpperRight'"
putexcel `CellRange', border(backside)

native CellRange =  "`BottomLeft':`BottomRight'"
putexcel `CellRange', border(prime)

native CellRange =  "`UpperLeft':`BottomLeft'"
putexcel `CellRange', border(proper)

native CellRange =  "`UpperRight':`BottomRight'"
putexcel `CellRange', border(left)

Example 12: Creating tables for continuous variables over levels of a categorical variable

Examples 1-11 demonstrated how to create a table for two categorical variables. I can use a similar approach to create a table of summary statistics for a continuous variable over levels of a categorical variable.

Let's begin by creating a column header for the categorical variable race. The code in Code block 10 below looks similar to the code used in Examples 1-11. The only unfamiliar code appears in line 6, where I store the variable label in the local macro ColVarLabel. I then write the variable label to the merged cells above the column labels in lines 25 and 26.

Code block 10:


local ColVar = "race"
tabulate `ColVar', matcell(coltotals)
local ColCount = r(r)
local TotalCount = r(N)

local ColVarLabel : variable label `ColVar'

levelsof `ColVar', local(ColLevels)
local ColValueLabel : value label `ColVar'

putexcel set putexcel3.xlsx, sheet(example12) modify
forvalues col = 1/`ColCount' {
    local ColValueLabelNum = word("`ColLevels'", `col')
    local CellContents : label `ColValueLabel' `ColValueLabelNum'
    local Cell = char(64 + `col' + 1) + string(2)
    putexcel `Cell' = "`CellContents'", hcenter
	
    local cellcount = coltotals[`col',1]
    local cellpercent = string(100*`cellcount'/`TotalCount',"%9.1f")
    local CellContents = "`cellcount' (`cellpercent'%)"
    local Cell = char(64 + `col' + 1) + string(3)
    putexcel `Cell' = "`CellContents'", right
}
	
local CellRange = char(64 + 2) + string(1) + ":" + char(64 + `ColCount' + 1) + string(1)
putexcel `CellRange' = "`ColVarLabel'", merge hcenter bold border(bottom, medium)

local CellRange = char(64 + 1) + string(2) + ":" + char(64 + `ColCount' + 2) + string(2)
putexcel `CellRange', border(bottom, double)

local Cell = char(64 + `ColCount' + 2) + string(2)
putexcel `Cell' = "Total", hcenter

local Cell = char(64 + `ColCount' + 2) + string(3)
putexcel `Cell' = "`TotalCount'", hcenter

Example 12 (continued):

Next I can add a row that contains the mean and standard deviation of age for each level of race. I begin by storing age in the local macro ContVar in line 2 of Code block 11 below.

Line 12 calculates the mean and standard deviation when the condition if `ColVar'==`ColLevel' is true. You may be tempted to use the condition if `ColVar'==`col', but you should resist that temptation. The levels of categorical variables are often numbered with sequential integers beginning with one (e.g., "1, 2, 3, …"), but this is not always true. For example, indicator variables, such as sex, are numbered beginning at zero. sex has two categories, so if we used the condition if sex==1 we would see the results for males, but we would see no results for the condition if sex==2. And we would never even consider the condition if sex==0.


. summarize age if sex==1

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
         age |        590    48.25254    16.90086         20         74

. summarize age if sex==2

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
         age |          0

Line 13 formats the returned mean, r(mean), to display with one decimal place and stores it in the local macro RowMean. Line 14 formats the returned standard deviation, r(sd), the same way and stores it in the local macro RowSD. Line 15 combines RowMean and RowSD and stores the result in the local macro CellContents.

Lines 20-25 repeat these calculations for the column total, and lines 26-27 write the variable name age to the Excel table.

Code block 11:


local ColVar = "race"
local ContVar = "age"
tabulate `ColVar', matcell(coltotals)
local ColCount = r(r)

levelsof `ColVar', local(ColLevels)
local ColValueLabel : value label `ColVar'

putexcel set putexcel3.xlsx, sheet(example12) modify
forvalues col = 1/`ColCount' {
    local ColLevel = word("`ColLevels'", `col')
    quietly summarize `ContVar' if `ColVar'==`ColLevel'
    local RowMean = string(`r(mean)', "%9.1f")
    local RowSD = string(`r(sd)', "%9.1f")
    local CellContents = "`RowMean' (`RowSD')"
    local Cell = char(64 + `col' + 1) + string(5)
    putexcel `Cell' = "`CellContents'", right
}

quietly summarize `ContVar' if !missing(`ColVar')
local RowMean = string(`r(mean)', "%9.1f")
local RowSD = string(`r(sd)', "%9.1f")
local CellContents = "`RowMean' (`RowSD')"
local Cell = char(64 + `ColCount' + 2) + string(5)
putexcel `Cell' = "`CellContents'", right

local Cell = char(64 + 1) + string(5)
putexcel `Cell' = "`ContVar'", right bold

Conclusion

I hope that this blog post has inspired you rather than intimidated you. Creating automated reports that work for arbitrary variables is never a quick and easy task. But producing these kinds of tables manually can be time consuming and frustrating. If you have ever spent hours formatting tables for a manuscript, only to have a reviewer insist that you modify your analysis in a way that changes all of the tables, you know what I am talking about. Or perhaps you run weekly or monthly reports on data that is regularly updated. The time you invest in writing programs to automate your Excel tables can save far more time in the long run. And it will eliminate errors that can occur when you create tables manually.

Next time, I will show you how to write your own Stata commands to create Excel tables.





These scientific discoveries brought us joy in 2025

At its heart, science is about the joy of discovery: extending the boundaries of what we know and learning more about the world and our place in it. Research over the last year was no exception, delighting us with everything from the silly to the profound. So take a minute to savor 2025's smattering of scientific treats. After all, research shows that cultivating small moments of joy is pretty good for you.

Tiny elephant nestles in human cells

In a big (and small) feat, scientists 3-D printed a tiny elephant inside living human cells. DBenitostock/Getty Images

Measuring just 10 micrometers from trunk to tail, a tiny elephant was 3-D printed inside living cells. The minuscule feat could lead to new ways to control cells, researchers say. More delightful still is the playful absurdism of a very, very tiny behemoth.


Gender gap shrinks for cooking and cleaning

A man vacuums while a woman sits on a couch reading with two children.
Married U.S. women still spend more time on housework than married men. But the gender gap in time spent on traditionally feminine chores, such as cleaning and laundry, is shrinking. Maskot/Getty Images Plus

Men are spending more time on jobs that traditionally fall to women. Over the last two decades, the gap between women's and men's efforts shrank for tasks such as cooking and cleaning. The score isn't even yet: women are still putting in 2.5 hours for every man's one hour. Still, as one sociologist puts it, "there's a hopeful story here." Here's hoping the gap shrinks even more in 2026.


Vaccines could be delivered with floss

Purple-gloved hands hold a small white mouse while a second set of blue-gloved hands gently run dental floss through the rodent's teeth.
A pair of researchers gently floss a mouse's bottom incisors, delivering a vaccine that could help protect the animal against influenza. Paul Stonum

This past year found some researchers gently flossing mice. It sounds funny, but the murine dental care was in service of a worthy goal: the potential for vaccination with dental floss. Mice were vaccinated against the flu with nothing more than a gentle flossing with specialized floss. This ingenuity may lead to convenient and less painful ways to build up immunity.


Good news amid bad news for axolotls

An overhead shot shows two axolotls in blue-green looking water.
Captive-bred axolotls (two shown) were released into two habitats to test their survival and preferred environmental conditions. Reintroduction could help scientists save the amphibian from extinction. Horacio Mena

Wild axolotls are facing extinction. That's the bad news. But take heart: cutie-pie axolotls bred in captivity may be able to replenish their wild brethren's population, a conservation study suggests. These adored aquatic salamanders may yet have a shot at survival.


Posterior disguises help blowflies blend in

What looks like a little bright-eyed face with antennae is in fact the butt end of a blowfly larva.
This blowfly larva breathes through holes that closely resemble termite eyes, helping the impostor blend in within a termite nest. Vlad Dinca

A study of blowflies gives new meaning to the term "butthead." Larval blowflies squatting in a termite nest in Morocco surprised scientists with their rear-end mimicry. These larvae had false termite faces on their rears, complete with antennae, eyes and other small bits, all of which fooled the termites into accepting the interlopers.


More butt news, this time for people

A man sits on the edge of a hospital bed with a row of partially clad mannequins lying behind him, their buttocks visible.
Physician Takanori Takebe has shown it is possible for mammals to get oxygen through their anuses. But whether it is possible and practical for human patients who have trouble breathing is an open question. Courtesy of Cincinnati Children's Hospital Medical Center

Humans breathe air in through the nose and mouth, but there is another, less appealing, entrance. In a heroic act of volunteerism, 27 men gamely accepted an oxygen-rich liquid up the anus and held it in for an hour. Most of these intrepid volunteers tolerated the liquid. Future tests will reveal whether these intestinal deliveries can raise oxygen levels in the bloodstream.


The sound of two hands clapping, explained

When a person claps, an air pocket forms between the palms. A jet of air streams out of a gap left between the thumb and forefinger, kicking off vibrations in the surrounding air. Researchers observed a similar effect using cup-shaped silicone models designed to mimic palms slapping together.

YICONG FU, CORNELL UNIVERSITY

Clapping is one way we express joy. Now we know the physics that lets us make this happy noise. When two enthusiastic hands meet for a clap, the resulting sound can be explained by a phenomenon known as Helmholtz resonance. The same concept describes the sound made when a person blows across the top of a glass soda bottle. A Helmholtz resonator is at work as sound waves burst forth from clapping hands, scientists report.


OK 2025, you've taken your bow. We welcome 2026 to the stage, with its scientific joys just waiting to be discovered.


3 things Will Douglas Heaven is into right now


Finding signs of life in the uncanny valley

Watching Sora videos of Michael Jackson stealing a box of chicken nuggets or Sam Altman biting into the pink flesh of a flame-grilled Pikachu has given me flashbacks to an Ed Atkins exhibition at Tate Britain I saw a few months ago. Atkins is one of the most influential and unsettling British artists of his generation. He is best known for hyper-detailed CG animations of himself (pore-perfect skin, janky movement) that play with the digital representation of human emotions.

Still from ED ATKINS PIANOWORK 2 2023

COURTESY: THE ARTIST, CABINET GALLERY, LONDON, DÉPENDANCE, BRUSSELS, GLADSTONE GALLERY

In The Worm we see a CGI Atkins make a long-distance call to his mother during a covid lockdown. The audio is from a recording of an actual conversation. Are we watching Atkins cry, or his avatar? Our attention flickers between two realities. "When an actor breaks character during a scene, it's known as corpsing," Atkins has said. "I want everything I make to corpse." Next to Atkins's work, generative videos look like cardboard cutouts: lifelike but not alive.

A dark and dirty book about a talking dingo

What's it like to be a pet? Australian author Laura Jean McKay's debut novel, The Animals in That Country, will make you wish you had never asked. A flu-like pandemic leaves people with the ability to hear what animals are saying. If that sounds too Dr. Dolittle for your tastes, rest assured: these animals are weird and nasty. Much of the time they don't even make any sense.

cover of book

SCRIBE

With everyone now talking to their computers, McKay's book resets the anthropomorphic trap we have all fallen into. It is a brilliant evocation of what a nonhuman mind might contain, and a meditation on the hard limits of communication.