Thursday, March 5, 2026

Cisco flags additional SD-WAN flaws as actively exploited in attacks



Cisco has flagged two more Catalyst SD-WAN Manager security flaws as actively exploited in the wild, urging administrators to upgrade vulnerable devices.

Catalyst SD-WAN Manager (formerly vManage) is network management software that lets admins monitor and manage up to 6,000 Catalyst SD-WAN devices from a single centralized dashboard.

"In March 2026, the Cisco PSIRT became aware of active exploitation of the vulnerabilities that are described in CVE-2026-20128 and CVE-2026-20122 only," the company warned in an update to a February 25 advisory.

"The vulnerabilities that are described in the other CVEs in this advisory are not known to have been compromised. Cisco strongly recommends that customers upgrade to a fixed software release to remediate these vulnerabilities."

The high-severity arbitrary file overwrite vulnerability (CVE-2026-20122) can only be exploited by remote attackers with valid read-only credentials with API access, while the medium-severity information disclosure flaw (CVE-2026-20128) requires local attackers to have valid vManage credentials on the targeted systems.

Cisco added that these vulnerabilities affect Catalyst SD-WAN Manager software, regardless of device configuration.

SD-WAN zero-days exploited since 2023

Last week, the company also disclosed that a critical authentication bypass vulnerability (CVE-2026-20127) has been exploited in zero-day attacks since at least 2023, enabling highly sophisticated threat actors to compromise controllers and add malicious rogue peers to targeted networks.

The rogue peers allow the attackers to insert legitimate-looking malicious devices, enabling them to move deeper into compromised networks.

After joint advisories from U.S. and U.K. authorities warning of the exploitation activity, CISA issued Emergency Directive 26-03 requiring federal agencies to inventory Cisco SD-WAN systems, collect forensic artifacts, ensure external log storage, apply updates, and investigate potential compromises tied to attacks targeting CVE-2026-20127 and an older flaw tracked as CVE-2022-20775.

More recently, on Wednesday, Cisco released security updates to patch two maximum-severity vulnerabilities in its Secure Firewall Management Center (FMC) software.

These security flaws, an authentication bypass flaw (tracked as CVE-2026-20079) and a remote code execution (RCE) vulnerability (CVE-2026-20131), can be exploited remotely by unauthenticated attackers to gain root access to the underlying operating system and execute arbitrary Java code as root on unpatched devices, respectively.


A Titan collision may link Saturn's tilt, its moon Hyperion and its rings



Two of Saturn's satellites, its largest and one of its weirdest, may owe their current forms and orbits to a two-moon pileup about 400 million years ago.

A smashup between a doomed moon and the giant moon Titan could have birthed the spongy-looking Hyperion, suggests a study submitted February 9 to arXiv.org. The ensuing chaos in the Saturn system could then have led to the formation of its rings.

The notion builds on a 2022 proposal by another team, which suggested the existence of a former moon to resolve some long-standing mysteries about the Saturn system, including its relatively extreme tilt, its youthful rings and the orbital relationships of some of its moons.

The first clues came from Saturn's relationship with Neptune. For decades, planetary scientists assumed that Saturn and Neptune had what's called a spin-orbit resonance: both Saturn's spin axis and Neptune's orbit around the sun appeared to wobble at almost the same rate. But data from NASA's Cassini mission, which orbited Saturn from 2004 to 2017, showed that Saturn is slightly out of sync with Neptune. Still, the wobble rates are close enough to suggest the planets broke their resonance relationship relatively recently in cosmic terms, perhaps a few hundred million years ago.

"That tells us there was some disruption in the outer Saturn system," says planetary scientist Matija Ćuk of the SETI Institute in Mountain View, Calif.

Ćuk and his colleagues suggest this disruption came in two parts. First, a doomed moon collided with Titan, altering Titan's tug on Saturn's spin axis enough to break the resonance with Neptune and producing debris that would later coalesce to form Hyperion. That collision might also have left Titan on a more extreme orbit, one that continued to slowly widen over the next couple of hundred million years. Titan's orbital evolution eventually could have gravitationally triggered a slow-motion train wreck that led Saturn's inner moons to collide and grind one another down, ultimately spawning both the rings and a new crop of young inner moons.

Previously, MIT planetary scientist Jack Wisdom and colleagues suggested that the break in the Saturn-Neptune relationship coincided with the formation of Saturn's famed rings, which some planetary scientists think are a mere 150 million years old. Wisdom's team proposed that an extra moon, dubbed Chrysalis, could have tugged on Saturn's spin axis, breaking the resonance with Neptune, before veering perilously close to the planet and being shredded into the rings.

"Jack wanted to tie these two together," Ćuk says. "But I thought that the formation of Hyperion is a more direct clue." Based on earlier work, Ćuk calculated that Hyperion must have settled into its current orbital arrangement within the last 400 million years, a time frame comparable to that of the presumed dissolution of Saturn and Neptune's resonance.

In the new work, Ćuk and his colleagues suggest that Saturn once had an extra moon, which they dub proto-Hyperion, about four times as massive as Chrysalis. In computer simulations of a collision between proto-Hyperion and Titan, the team found that Titan survives, while some of the collision debris accretes into the present-day Hyperion, a porous, egg-shaped body that tumbles chaotically through space.

But without Chrysalis, the rings must have had a different origin, Ćuk says. His team suggests that Titan and Hyperion might explain this as well, if there were more missing moons. They posit that Saturn may have originally had several inner moons more massive than those present today. Titan's altered orbit after the collision could, over hundreds of millions of years, put it in resonance with one of those inner moons, tweaking that moon's orbit until it collided with another.

Wisdom doesn't think Ćuk's scenario quite works. For one thing, it would require the inner moons to all be younger than a few hundred million years old. But Mimas, one of those moons, has enough craters to suggest it's much older.

"Their arguments don't invalidate our scenario," he says, adding that the new proposal is "a very different, more complicated scenario."

Ćuk thinks Mimas could still be young; its craters may have formed relatively quickly in the chaotic inner Saturn system. Both planetary scientists agree that more detailed simulations of the Saturn system are needed to show which picture is most plausible.

"There could be a third variant of instability that combines mine and Jack's," Ćuk says, or something new.


PhysicEdit: Teaching Image Editing Models to Respect Physics



Instruction-based image editing models are impressive at following prompts. But when edits involve physical interactions, they often fail to respect real-world laws. In their paper "From Statics to Dynamics: Physics-Aware Image Editing with Latent Transition Priors," the authors introduce PhysicEdit, a framework that treats image editing as a physical state transition rather than a static transformation between two images. This shift improves realism in physics-heavy scenarios.

AI Image Generation Failures

You generate a room with a lamp and ask the model to turn it off. The lamp switches off, but the lighting in the room barely changes. Shadows remain inconsistent. The instruction is followed, but illumination physics is ignored.

Now insert a straw into a glass of water. The straw appears in the glass but stays perfectly straight instead of bending due to refraction. The edit looks correct at first glance, yet it violates optical physics. These are exactly the failures PhysicEdit aims to fix.

AI Image Generation Failures - Straw in Water


The Problem with Current Image Editing Models

Most instruction-based editing models follow a straightforward setup.

  • You provide a source image.
  • You provide an editing instruction.
  • The model generates a modified image.

This works well for semantic edits like:

  • Change the shirt color to blue
  • Replace the dog with a cat
  • Remove the chair

However, this setup treats editing as a static mapping between two images. It doesn't model the process that leads from the initial state to the final state.

This becomes a problem in physics-heavy scenarios such as:

  • Insert a straw into a glass of water
  • Let the ball fall onto the cushion
  • Turn off the lamp
  • Freeze the soda can

These edits require understanding how physical laws affect the scene over time. Without modeling that transition, the system often produces results that look plausible at first glance but break down under closer inspection.

From Static Mapping to Physical State Transitions

PhysicEdit proposes a different formulation.

Instead of directly predicting the final image from the source image and instruction, it treats the instruction as a physical trigger. The source image represents the initial physical state of the scene. The final image represents the outcome after the scene evolves under physical laws.

In other words, editing is treated as a state evolution problem rather than a direct transformation.

This distinction matters.

Traditional editing datasets provide only the starting image and the final image. The intermediate steps are missing. As a result, the model learns what the output should look like, but not how the scene should physically evolve to reach that state.

PhysicEdit addresses this limitation by learning from videos.

Introducing PhysicTran38K

To train a physics-aware editing model, the authors created a new dataset called PhysicTran38K. It contains roughly 38,000 video-instruction pairs focused specifically on physical transitions. The dataset covers five major domains:

  • Mechanical
  • Optical
  • Biological
  • Material
  • Thermal

Across these domains, it defines 16 sub-domains and 46 transition types. Examples include:

  • Light reflection
  • Refraction
  • Deformation
  • Freezing
  • Melting
  • Germination
  • Hardening
  • Collapse

From Static Mapping to Physical State Transitions

Each video captures a full transition from an initial state to a final state, including the intermediate steps. The construction process is structured and carefully filtered:

  • Videos are generated using prompts that explicitly define the start state, trigger event, transition, and final state.
  • Camera motion is filtered out so that pixel changes reflect physical evolution rather than viewpoint shifts.
  • Physical concepts are automatically verified to ensure consistency.
  • Only transitions that pass these checks are retained.

This results in high-quality supervision for learning realistic physical dynamics.

How PhysicEdit Works

PhysicEdit builds on top of Qwen-Image-Edit, a diffusion-based editing backbone. To incorporate physics, it introduces a dual-thinking mechanism with two components:

  1. Physically grounded reasoning
  2. Implicit visual thinking

Overview of the PhysicEdit framework

These two streams complement each other and handle different aspects of physical realism.

Dual-Thinking: Reasoning and Visual Transition Priors

Physically Grounded Reasoning

PhysicEdit uses a frozen Qwen2.5-VL-7B model to generate structured reasoning before image generation begins.

Given the source image and instruction, it produces:

  • The physical laws involved
  • Constraints that must be respected
  • A description of how the change should unfold

This reasoning trace becomes part of the conditioning context for the diffusion model. It ensures the edit respects causality and domain knowledge.

The reasoning model stays frozen during training, which helps preserve its general knowledge.

Implicit Visual Thinking

Text reasoning alone can't capture fine-grained visual effects such as:

  • Subtle deformation
  • Texture transitions during melting
  • Light scattering

To address this, PhysicEdit introduces learnable transition queries.

These queries are trained using intermediate frames from the PhysicTran38K videos. Two encoders supervise them:

  • DINOv2 features for structural information
  • VAE features for texture-level detail

During training, the model aligns the transition queries with visual features extracted from intermediate states. At inference time, no intermediate frames are available. Instead, the learned transition queries act as distilled transition priors, guiding the model toward physically plausible outputs.
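The exact alignment loss is not spelled out above; a minimal sketch of the idea, with random arrays standing in for real DINOv2/VAE frame features and plain MSE standing in for whatever objective the paper actually uses, looks something like this:

```python
import numpy as np

# Hypothetical shapes: 8 learnable transition queries, 64-dim latents.
rng = np.random.default_rng(0)
n_queries, dim = 8, 64
queries = rng.normal(size=(n_queries, dim))  # the learnable transition queries

# Stand-ins for features of intermediate video frames:
# DINOv2-like structural features and VAE-like texture features.
struct_feats = rng.normal(size=(n_queries, dim))
texture_feats = rng.normal(size=(n_queries, dim))

def alignment_loss(q, s, t, w_struct=1.0, w_tex=1.0):
    """Align queries to both supervision signals (illustrative MSE)."""
    return w_struct * np.mean((q - s) ** 2) + w_tex * np.mean((q - t) ** 2)

# One manual gradient-descent step on the queries (analytic MSE gradient).
lr = 0.1
grad = 2 * (queries - struct_feats) / queries.size \
     + 2 * (queries - texture_feats) / queries.size
before = alignment_loss(queries, struct_feats, texture_feats)
queries -= lr * grad
after = alignment_loss(queries, struct_feats, texture_feats)
print(after < before)  # the queries move toward the frame features
```

At inference there are no intermediate frames, so only the trained `queries` would be fed to the editing backbone as transition priors.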

Why Video Matters for Learning Physics

With image-only supervision, the model sees only the initial and final states. With video supervision, it sees how the scene evolves step by step. This extra information constrains the learning process. It teaches the model not just what the outcome should look like, but how it should develop over time. PhysicEdit compresses this dynamic information into latent representations so that editing stays efficient and single-image based during inference.

Results on PICABench and KRISBench

PhysicEdit was evaluated on two benchmarks:

PICABench Results


PICABench focuses on physical realism, including optics, mechanics, and state transitions. Compared to its backbone model, PhysicEdit improves overall physical realism by roughly 5.9%. The largest gains appear in categories requiring implicit dynamics, including:

  • Light source effects
  • Deformation
  • Causality
  • Refraction

KRISBench Results


On KRISBench, which evaluates knowledge-grounded editing, PhysicEdit improves overall performance by around 10.1%. Improvements are particularly noticeable in:

  • Temporal perception
  • Natural science reasoning

These results suggest that modeling editing as state transitions improves both visual fidelity and physics-related reasoning.

Why This Matters for AI Systems

As generative models become more integrated into creative tools, augmented reality systems, and multimodal agents, physical plausibility becomes increasingly important. Visually inconsistent lighting, unrealistic deformation, or broken causality can reduce reliability and trust.

PhysicEdit demonstrates that:

  • Physics can be learned effectively from video data
  • Transition priors can be distilled into compact latent representations
  • Text reasoning and visual supervision can work together

This represents a meaningful step toward more world-consistent generative models.


Conclusion

Most image editing models treat editing as a static transformation problem. PhysicEdit reframes it as a physical state transition problem. By combining video-based supervision, physically grounded reasoning, and learned transition priors, it produces edits that are not only semantically correct but physically plausible. The dataset, code, and checkpoints are open-sourced, making them accessible to researchers and engineers who want to build more realistic editing systems. As generative AI continues to evolve, incorporating physical consistency may move from being a research innovation to a standard requirement.

Note: The source of all the images and information in this blog is the research paper.

Hello, I'm Nitika, a tech-savvy content creator and marketer. Creativity and learning new things come naturally to me. I have expertise in creating result-driven content strategies and am well versed in SEO management, keyword operations, web content writing, communication, content strategy, editing, and writing.


Azure Migration Challenges (and how to solve them)



Moving workloads to Azure isn't plug-and-play. Here are some workarounds for challenges organizations encounter when planning and executing migrations.

  • Legacy OS & Software Compatibility
    • Outdated, out-of-support operating systems may not run in Azure or may perform poorly.
    • Tightly coupled apps tied to specific hardware or OS versions are hard to replicate.
  • Fix: Run compatibility assessments early. Upgrade or patch the OS before migrating, or refactor the workload to run on a supported OS.

 

  • Performance Sizing
    • On-prem VMs may rely on fast local SSDs or low-latency network links you won't get by default in Azure.
    • Undersizing means poor performance; oversizing means wasted spend.
  • Fix: Use Azure Migrate's performance-based recommendations to right-size your VMs.

 

  • Network & Identity Integration
    • Migrated servers still need to talk to on-prem resources and authenticate users.
    • Splitting app servers and auth servers across environments breaks things fast.
  • Fix: Design network topology and identity infrastructure before you move anything. Move workloads that have interdependencies together.

 

  • Governance & Cloud Sprawl
    • On-prem controls (naming conventions, equipment tags) don't automatically follow you to the cloud.
    • Spinning up resources with a click leads to sprawl.
  • Fix: Set up Azure Policy from day one. Enforce tagging, naming, and compliance rules as part of the migration project, not after.

 

  • Skills Gaps
    • On-prem server experts aren't automatically fluent in Azure operations.
  • Fix: Invest in cloud operations training before and during the migration.

 

 

  • Compatibility
    • Not every database engine or version maps cleanly to an Azure equivalent.
  • Fix: Run the Azure Data Migration Assistant early to verify feature and functionality support.

 

  • Post-Migration Performance
    • Performance depends on the hosting ecosystem; what worked on-prem may not translate directly.
  • Fix: Revisit indexing and configuration after migration. Use SQL Intelligent Insights and Performance Recommendations for tuning guidance.

 

  • Choosing the Right Service Tier
    • Azure offers elastic pools, managed instances, Hyperscale, and sharding; picking the wrong one can be costly.
  • Fix: Profile your workload with your DBA and use Azure Migrate's Database Assessment for sizing suggestions.

 

  • Security Configuration
    • User logins, roles, and encryption settings must migrate with the data.
  • Fix: Map every layer of your on-prem security configuration and implement corresponding controls post-migration.

 

  • Data Integrity
    • Data types, constraints, and triggers must come over intact with zero loss or corruption.
  • Fix: Use reliable migration tools, test multiple times, and validate row counts and key constraints. Plan cutover during low-usage windows and always have a rollback plan.
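The row-count validation mentioned in the fix above is easy to script. A minimal sketch, using sqlite3 in-memory databases as stand-ins for the on-prem source and the Azure target (real migrations would point these connections at the actual servers):

```python
import sqlite3

def row_counts(conn, tables):
    """Count rows per table on one side of the migration."""
    return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
            for t in tables}

# Stand-in databases; replace with connections to source and target servers.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
source.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])
target.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

tables = ["orders"]
src, tgt = row_counts(source, tables), row_counts(target, tables)
mismatches = {t for t in tables if src[t] != tgt[t]}
print("mismatched tables:", mismatches or "none")  # prints: mismatched tables: none
```

Checksums or per-key comparisons catch corruption that identical row counts would miss, so treat this as a first gate, not the whole validation.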

 

 

  • Legacy App Complexity
    • Custom and legacy apps carry years of accumulated config files, hard-coded paths, IP addresses, and environment-specific logging.
    • Each app can feel like its own mini migration project.
  • Fix: Use Azure Migrate's app dependency analysis to map what each app needs before you touch it.

 

  • Dependency Conflicts
    • Apps may depend on specific framework versions, libraries, or OS features that aren't available or supported in Azure.
  • Fix: Identify and resolve dependency gaps early. Consider containerizing or refactoring apps to isolate them from environment differences.

 

  • Scale of Effort
    • Dozens or hundreds of apps, each with unique traits, create a massive manual workload.
  • Fix: Automate everything you can. Use porting assistants and batch migration tooling to reduce repetitive tasks.

 

Start assessments early, automate aggressively, set up governance from day one, and train your team before the move, not after. The most likely cause of a migration failure is skipping the prep work.

 

YuanLab AI Releases Yuan 3.0 Ultra: A Flagship Multimodal MoE Foundation Model, Built for Stronger Intelligence and Unmatched Efficiency


How can a trillion-parameter Large Language Model achieve state-of-the-art enterprise performance while simultaneously cutting its total parameter count by 33.3% and boosting pre-training efficiency by 49%? Yuan Lab AI releases Yuan3.0 Ultra, an open-source Mixture-of-Experts (MoE) large language model featuring 1T total parameters and 68.8B activated parameters. The model architecture is designed to optimize performance on enterprise-specific tasks while maintaining competitive general-purpose capabilities. Unlike traditional dense models, Yuan3.0 Ultra uses sparsity to scale capacity without a linear increase in computational cost.

Layer-Adaptive Expert Pruning (LAEP)

The primary innovation in Yuan3.0 Ultra's training is the Layer-Adaptive Expert Pruning (LAEP) algorithm. While expert pruning is typically applied post-training, LAEP identifies and removes underutilized experts directly during the pre-training stage.

Analysis of expert load distribution revealed two distinct phases during pre-training:

  1. Initial Transition Phase: Characterized by high volatility in expert loads inherited from random initialization.
  2. Stable Phase: Expert loads converge, and the relative ranking of experts based on token assignment remains largely fixed.

Once the stable phase is reached, LAEP applies pruning based on two constraints:

  • Individual Load Constraint (α): Targets experts whose token load is significantly lower than the layer average.
  • Cumulative Load Constraint (β): Identifies the subset of experts contributing the least to total token processing.

By applying LAEP with β=0.1 and varying α, the model was pruned from an initial 1.5T parameters down to 1T parameters. This 33.3% reduction in total parameters preserved the model's multi-domain performance while significantly reducing memory requirements for deployment. In the 1T configuration, the number of experts per layer was reduced from 64 to at most 48 preserved experts.
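The two constraints above can be sketched per layer as follows. This is an illustration under stated assumptions, not the paper's exact criteria: here an expert is pruned if its load falls below α times the layer mean, or if it sits in the lightest tail whose cumulative load share is at most β.

```python
import numpy as np

def laep_prune(loads, alpha=0.5, beta=0.1):
    """Illustrative LAEP-style pruning for one MoE layer.

    `loads` holds per-expert token counts measured in the stable phase.
    Returns the indices of the experts that survive pruning.
    """
    loads = np.asarray(loads, dtype=float)

    # Individual Load Constraint (alpha): well below the layer average.
    low_individual = loads < alpha * loads.mean()

    # Cumulative Load Constraint (beta): lightest experts whose combined
    # share of all routed tokens is at most beta.
    order = np.argsort(loads)                       # lightest experts first
    shares = np.cumsum(loads[order]) / loads.sum()
    low_cumulative = np.zeros_like(low_individual)
    low_cumulative[order[shares <= beta]] = True

    return np.flatnonzero(~(low_individual | low_cumulative))

# Toy layer with 8 experts; two carry almost no tokens.
kept = laep_prune([500, 480, 450, 420, 400, 380, 10, 5])
print(kept)  # prints [0 1 2 3 4 5]: the two near-idle experts are dropped
```

In the real system this decision is made layer by layer during pre-training, which is what makes the pruning "layer-adaptive."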

https://github.com/Yuan-lab-LLM/Yuan3.0-Ultra/blob/main/Docs/Yuan3.0_Ultra%20Paper.pdf

Hardware Efficiency and Expert Rearrangement

MoE models often suffer from device-level load imbalance when experts are distributed across a computing cluster. To address this, Yuan3.0 Ultra implements an Expert Rearranging algorithm.

This algorithm ranks experts by token load and uses a greedy strategy to distribute them across GPUs so that the cumulative token variance is minimized.
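The greedy strategy described above is essentially heaviest-first bin balancing. A minimal sketch (the production algorithm may differ in details):

```python
import heapq

def rearrange_experts(loads, n_gpus):
    """Assign each expert (heaviest first) to the GPU with the smallest
    cumulative token load, keeping per-GPU totals as even as possible."""
    # Min-heap of (total load, gpu id, assigned experts); gpu ids break ties.
    heap = [(0, gpu, []) for gpu in range(n_gpus)]
    heapq.heapify(heap)
    for expert, load in sorted(enumerate(loads), key=lambda kv: -kv[1]):
        total, gpu, experts = heapq.heappop(heap)
        heapq.heappush(heap, (total + load, gpu, experts + [expert]))
    return sorted((gpu, total, experts) for total, gpu, experts in heap)

# 8 experts with skewed token loads spread over 4 GPUs.
for gpu, total, experts in rearrange_experts(
        [900, 700, 600, 400, 300, 200, 100, 50], 4):
    print(gpu, total, experts)
```

With the toy loads above, per-GPU totals land at 900, 800, 800, and 750 tokens, which is far flatter than a naive round-robin placement would give.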

Method                  TFLOPS per GPU
Base Model (1515B)      62.14
DeepSeek-V3 Aux Loss    80.82
Yuan3.0 Ultra (LAEP)    92.60

Total pre-training efficiency improved by 49%. This improvement is attributed to two factors:

  • Model Pruning: Contributed 32.4% to the efficiency gain.
  • Expert Rearrangement: Contributed 15.9% to the efficiency gain.

Mitigating Overthinking with Revised RIRM

In the reinforcement learning (RL) stage, the model employs a refined Reflection Inhibition Reward Mechanism (RIRM) to prevent excessively long reasoning chains for simple tasks.

The reward for reflection, $R_{ver}$, is calculated using a threshold-based penalty system:

  • $r_{min}=0$: The ideal number of reflection steps for direct responses.
  • $r_{max}=3$: The maximum tolerable reflection threshold.

For correct samples, the reward decreases as reflection steps approach $r_{max}$, while incorrect samples that 'overthink' (exceeding $r_{max}$) receive the maximum penalty. This mechanism resulted in a 16.33% gain in training accuracy and a 14.38% reduction in output token length.
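The exact reward formula is not reproduced here, but a threshold-based penalty of the shape described can be sketched as follows (the linear taper and penalty value are illustrative assumptions, not the paper's):

```python
def reflection_reward(steps, correct, r_min=0, r_max=3):
    """Illustrative RIRM-style reward.

    Correct answers lose reward as reflection steps approach r_max;
    incorrect answers that exceed r_max receive the full penalty.
    """
    if correct:
        # Full reward at r_min, tapering linearly to zero at r_max.
        return max(0.0, 1.0 - (steps - r_min) / (r_max - r_min))
    return -1.0 if steps > r_max else 0.0

print(reflection_reward(0, True))   # direct correct answer: full reward, 1.0
print(reflection_reward(2, True))   # some reflection: reduced reward
print(reflection_reward(5, False))  # incorrect and overthinking: -1.0
```

The key property is that a wrong answer reached by long reflection is punished harder than a wrong answer reached directly, which discourages padding simple tasks with reasoning steps.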

https://github.com/Yuan-lab-LLM/Yuan3.0-Ultra/blob/main/Docs/Yuan3.0_Ultra%20Paper.pdf

Enterprise Benchmark Performance

Yuan3.0 Ultra was evaluated against several industry models, including GPT-5.2 and Gemini 3.1 Pro, across specialized enterprise benchmarks.

Benchmark    Task Category         Yuan3.0 Ultra Score   Leading Competitor Score
Docmatix     Multimodal RAG        67.4%                 48.4% (GPT-5.2)
ChatRAG      Text Retrieval (Avg)  68.2%                 53.6% (Kimi K2.5)
MMTab        Table Reasoning       62.3%                 66.2% (Kimi K2.5)
SummEval     Text Summarization    62.8%                 49.9% (Claude Opus 4.6)
Spider 1.0   Text-to-SQL           83.9%                 82.7% (Kimi K2.5)
BFCL V3      Tool Invocation       67.8%                 78.8% (Gemini 3.1 Pro)

The results indicate that Yuan3.0 Ultra achieves state-of-the-art accuracy in multimodal retrieval (Docmatix) and long-context retrieval (ChatRAG) while maintaining strong performance in structured data processing and tool calling.




Apple March event live blog: Every new product as it happens



6 hours ago, By Jason Cross

MacBook Neo: Here's what you DON'T get.

Sure, the MacBook Neo seems like a steal at just $599. And maybe it is! But Apple did have to cut some corners to get to that price.

8 hours ago, By Roman Loyola

iOS 26.3.1, macOS 26.3.1 released

Apple has released version 26.3.1 of its operating systems. If you're planning on investing in a Studio Display, you'll want to install this update, according to the release notes. It also includes security patches and bug fixes.

Apple

8 hours ago, By Roman Loyola

Studio Display compatibility

Apple announced the new Studio Display and Studio Display XDR on Monday, and perhaps you noticed the compatibility list for the displays. We got a bit more clarification about this.

On the macOS Tahoe 26.3.1 requirement: The Studio Display and Studio Display XDR have deep integration with macOS, hence the version requirement.

On the lack of Intel compatibility: The displays require GPUs that can support the Display Stream Compression (DSC) and pixel rates required.

Macs that don't meet the requirements may "light" the displays and run with reduced capabilities.

Catch up on the announcement of the Studio Display and the Studio Display XDR.

Apple Studio Display 2026

Michael Simon/Foundry

10 hours ago, By Roman Loyola

Two-Neo limit

There seems to be a limit on how many MacBook Neos you can buy. Right now, customers are limited to two Neos. It's not clear if this limitation will eventually be lifted.

MacBook Neo customer limit

Foundry

Also, Macworld's Michael Simon, who's at the event in New York City, had a turkey and avocado sandwich for lunch. "It was good," Simon reports.

11 hours ago, By David Price

Audio quality: Surprisingly good

Back to the MacBook Neo, and our initial verdict on those unusual side-firing speakers (of which there are only two, compared to four on the MacBook Air and six on the MacBook Pro) is that they sound better than you might expect.

"The speakers sound surprisingly decent," reports Michael Simon. "It's hard to judge bass in the room I'm in, but I could feel the vibration in the chassis."

If you're wondering about methodology, Michael notes that the test was based on a Justin Bieber song.

11 hours ago, By David Price

Summarising the new MacBook Air and Pro

If you missed yesterday's announcements, here's Jason Cross summarising the new M5 MacBook Air and Pro models (and the new Studio Displays, for that matter) in easily digestible video form.

11 hours ago, By David Price

Okay, so what's wrong with it?

When the company is charging just $599, it would be ridiculous to expect the MacBook Neo to be perfect. Of course there are compromises in the design, feature list, and specs.

So let's summarise all the ways in which Apple has cut corners to get that price down.

  • Processor: The Neo comes with a phone processor, the A18 Pro, with 6 CPU cores and 5 GPU cores. The Air and Pro come with the M5 processor and up to 10 cores for both CPU and GPU.
  • RAM: Just 8GB, with no option to increase this. The MacBook Air and Pro both come with 16GB/24GB/32GB.
  • Storage: Either 256GB or 512GB. The Air goes from 512GB up to 4TB, and the Pro from 1TB to 4TB.
  • Display: At 13.0 inches, the Neo's screen is smaller than that of the smaller Air (13.6 inches) and smaller Pro (14.2 inches). It also doesn't get True Tone, and is sRGB rather than P3.
  • Camera: You just get a 1080p camera, compared to the 12MP model on the other MacBooks. And there's no Center Stage.
  • Audio: There are only two speakers. The Air gets four, and the Pro six. And there are only two mics, while the other MacBooks get three each. Finally, the Neo doesn't support high-impedance headphones.
  • Keyboard: Not backlit.
  • Trackpad: Multi-Touch, not Force Touch.
  • Ports: Two USB-C ports. One supports USB 3, while the other is limited to USB 2 speeds. There's no MagSafe, so you have to give up a USB port for charging.
  • Touch ID: Only if you pay for the extra storage.
  • Battery: Estimated battery life (up to 16 hours of video streaming) is lower than on the Air (18 hours) and Pro (24 hours). That's because the Neo only gets a 36.5Wh battery, compared to the 53.8Wh and 72.4Wh batteries in the smaller Air and Pro.
  • Charging: Only comes with a 20W charger. The Air gets 40W and the Pro 70W. The Neo isn't capable of fast charging.

12 hours ago, By David Price

Apple's gonna sell a ton of these

Michael Simon has been spending time getting hands-on with the MacBook Neo, and the biggest thing that's struck him is how premium the machine feels. There's nothing about it, he says, that feels like it should cost anything less than $1,000.

12 hours ago, By David Price

There is no spoon

When the Neo's name leaked yesterday, it sounded a bit like a joke. Would Keanu Reeves be in the audience today, we wondered. Would Apple's target market of young customers even remember The Matrix?

Well, we're not laughing now. Apple presumably wants the name to stand simply for newness, but the Matrix reference holds up surprisingly well, because the new budget MacBook makes the most jaded of tech cynics rethink what's real and what's imaginary. As Michael Simon explains, it showed him everything he knows is a lie.

12 hours ago, By David Price

External monitor support

The Neo's display is somewhat on the small side, but you can always connect an external monitor using one of the USB-C ports. (It has one USB-C 3 and one USB-C 2.)

Apple is keen to promote this concept, and at the event the company set up the Neo with the new Studio Display…

MacBook Neo with Studio Display

Michael Simon / Foundry

…and the new Studio Display XDR! That'll be $599 for the Neo and $3,299 for the display, please.

MacBook Neo with Apple Studio Display XDR

Michael Simon / Foundry

13 hours ago, By David Price

The Neo's mystery "slots"

Much debate in the Macworld office about the mysterious slots on either side of the MacBook Neo's chassis, one of which sits next to the headphone jack. At first glance we thought we might be looking at an SDXC card slot, but on closer inspection you can see the mesh that gives the game away. They're speakers.

MacBook Neo side speaker grill

Michael Simon / Foundry

It’s an uncommon design; the MacBook Air and Professional don’t have something comparable. The Professional’s audio system face upwards both facet of the keyboard, whereas the Air has speaker grilles within the hinge.

13 hours in the past, By David Value

A MacBook for $599? Whoa

Priced at $599/£599, the MacBook Neo is the lowest-priced laptop computer Apple gives, by far. It’s an incredible $500/£500 decrease than the MacBook Air, following that machine’s (debatable) worth rise yesterday.

Roman Loyola has all of the key information and figures from Apple’s gorgeous MacBook Neo launch.

13 hours in the past, By David Value

Are you able to handle with much less battery life?

It’s to be anticipated that the MacBook Neo, beginning at simply $599, may have quite a few limitations in comparison with the MacBooks Air and Professional. However right here’s one that may damage, provided that it’s being offered as a useful companion for college kids on the go: it has considerably much less battery life.

Apple estimates the Neo’s battery life as as much as 11 hours of wi-fi internet searching, or 16 hours of video streaming. These numbers evaluate to fifteen and 18 hours respectively for the 13-inch MacBook Air, and 16 and 24 for the smaller MacBook Professional.

13 hours in the past, By David Value

Simply 8GB of RAM

The spec sheet lists 8GB of RAM within the MacBook Neo, however doesn’t provide any extra element than that. You may’t configure issues like RAM and the processor when buying; the Neo solely has two configs, as talked about beneath.

For comparability, the M5 MacBook Air introduced yesterday comes with a baseline 16GB of RAM, and may be configured as much as 24GB or 32GB.

13 hours in the past, By David Value

Pricing in depth

We’ve talked about that the MacBook Neo begins at $599. However we will now give a bit extra element than that.

The worth you pay is partly depending on how a lot storage you need. However in an attention-grabbing twist, the dearer configuration additionally beneficial properties entry to Contact ID.

  • MacBook Neo (256GB, Magic Keyboard): $599
  • MacBook Neo (512GB, Magic Keyboard with Contact ID): $699

For those who’re questioning why Apple would provide solely two configurations (or SKUs), keep in mind that this kind of simplicity could make a tool cheaper to fabricate at scale.

Pre-orders are open now, by the way in which.

13 hours in the past, By David Value

Enjoyable with fonts

Returning to that odd font. Right here’s how Apple renders the identify of the MacBook Neo on its web site. It’s a chunkier, softer, thicker typography than we’re used to seeing from Apple. What do you suppose?

(A font skilled informed us that it appears to be like to be a rounded model of Apple’s normal San Francisco typeface, however stretched, and that is supported by underlying code on the web site.)

MacBook Neo font

Apple

14 hours ago, By David Price

Port situation update

This may be important. The MacBook Neo has no ports on the right side. None. Zero.

14 hours ago, By David Price

Where are our bright colors?

There was a theory, ahead of the announcement, that the new MacBook would get color finishes to match the yellow, green, and blue in the invitation artwork (see above). Evidently, we didn't get any of them.

The Citrus is yellow, but doesn't look anything like the artwork when viewed in real life (although it looks more vibrant in Apple's product art, below). There's no green option, and indigo is, well, indigo–roughly halfway between blue and purple.

MacBook Neo

Apple

14 hours ago, By David Price

A closer look

Let's look more closely at a MacBook Neo in the new Citrus color finish. It's a lot subtler than that name would suggest.

MacBook Neo 5

Michael Simon / Foundry

MacBook Neo 6

Michael Simon / Foundry

MacBook Neo 7
This is the silver finish.

Michael Simon / Foundry

14 hours ago, By David Price

First impressions in the hand

The Neo feels like a smaller, slightly thicker MacBook Air. Its Magic Keyboard and trackpad are great. Super light, and it feels like a $999 MacBook.

A good first impression.

14 hours ago, By David Price

MacBook Neo in pictures

First reactions? It looks good: classic and pretty fun at the same time. The colors aren't super bright, but they reflect the light very nicely, and the matching keys are subtle and feel high-end. Citrus is a bold choice, even if it isn't super vibrant. Blush feels like another Apple pink that isn't very pink.

MacBook Neo 1

Michael Simon / Foundry

MacBook Neo 2

Michael Simon / Foundry

MacBook Neo 3

Michael Simon / Foundry

MacBook Neo 4

Michael Simon / Foundry

14 hours ago, By David Price

It's really cheap!

Apple is playing up the Continuity features: Handoff and so on. It's clearly still keen for us to buy other Apple products to go with the Neo.

And the branding is written in a really weird font in the marketing. We'll get a shot of that as soon as we can.

Ah, here's the price. It starts at $599! That's a good $100 lower than the bottom end of what we expected, and doesn't sound like a bad deal at all.

14 hours ago, By David Price

MacBook Neo confirmed!

It wasn't misinformation after all. The leak was accurate; Apple is launching a new laptop product called the MacBook Neo. It's completely new from the ground up. Some details:

  • 13-inch display
  • No notch
  • White keyboard
  • Model with Touch ID (not standard)
  • 2 USB-C ports plus headphone jack
  • 1080p camera
  • Side-firing speakers with Spatial Audio
  • 4 colors: silver, indigo, blush, citrus
  • A18 Pro
  • Keys match the color
  • 16-hour battery

14 hours ago, By David Price

The presentation begins!

Trudy from the Apple product team is leading the presentation. And here's John Ternus, Apple's hardware boss.

We're not done yet, he says. Nearly half of Mac buyers are still new to the platform. Life is just better when you're using a Mac (some Windows fans might disagree with that!). And he's ready to announce…

Apple March 2026 special experience 11

Michael Simon / Foundry

14 hours ago, By David Price

Musical choices

They're playing Aperture by Harry Styles to start us off. Read into that what you will.

Apple March 2026 special experience 10

Michael Simon / Foundry

14 hours ago, By David Price

It's almost time!

We've moved through to the main presentation area, which has a video screen. No Keanu Reeves as yet; the only celebrity we've spotted so far is Marques Brownlee.

More photos:

Apple March 2026 special experience 8

Michael Simon / Foundry

Apple March 2026 special experience 9

Michael Simon / Foundry

15 hours ago, By David Price

Time for some breakfast?

Michael Simon is gathered with other journalists and influencers in the lobby of the Terminal Warehouse building in Chelsea, NY, waiting to enter another space. There's a spread of light breakfast items, water, iced coffee, and juice drinks.

Apple March 2026 special experience 4

Michael Simon / Foundry

Apple March 2026 special experience 5

Michael Simon / Foundry

Apple March 2026 special experience 6

Michael Simon / Foundry

Apple March 2026 special experience 7

Michael Simon / Foundry

15 hours ago, By David Price

Apple's special experience in pictures

With less than half an hour to go, Macworld's Michael Simon is at Apple's special experience in New York. Want to see what it looks like?

Apple March 2026 special experience 1

Michael Simon / Foundry

Apple March 2026 special experience 2

Michael Simon / Foundry

Apple March 2026 special experience 3

Michael Simon / Foundry

15 hours ago, By David Price

What the hell is a super core?

Yesterday's M5 Pro and M5 Max announcements came with some new buzzwords, and Jason Cross is all kinds of triggered by Apple's latest branding stunt.

15 hours ago, By David Price

Today's timings

When can we expect to hear about today's new products? It could be very soon indeed.

For the past two days Apple has dropped press releases at 9am ET / 6am PT (which translates to 2pm GMT). That's just over an hour from now.

The slight complicating factor is that today, unlike on Monday and Tuesday, there's an actual in-person event… sorry, "experience." That takes place at exactly the same time we expect the press releases: 9am ET / 6am PT / 2pm GMT. Apple will probably announce the news to the on-site journalists and the world at the same time, before the lucky people (including Macworld representatives) move on to hands-on sessions with the new products. Expect plenty of news, photos, and expert analysis from our reporters in the field.

The other time to set an alarm for is 9:15am ET / 6:15am PT / 2:15pm GMT. That's when pre-orders for all of this week's new Apple products will kick off.

16 hours ago, By David Price

Welcome to day 3!

Welcome and good morning to our U.S. and Canadian readers! It's day three of Apple's week of product announcements and we're hoping for something pretty special today. The smart money is on the MacBook Neo, and there's an outside chance we'll see an A19 refresh of the baseline iPad.

16 hours ago, By David Price

A price cut, or a price rise?

Apple's decision to increase the MacBook Air's starting storage, which we learned about yesterday, is going to be divisive. Not because having more storage is a bad thing–clearly it's not–but because it means the starting price has gone up too.

It could be argued that this was technically a price cut, because the 512GB SSD option now costs $100 less than before. But in real terms it felt like a price rise, because the 256GB option isn't available any more; instead of starting at $999, the MacBook Air starts at $1,099.

Over on TechRadar, Lance Ulanoff argues that the change isn't merely positive but "great news," reasoning that 512GB is now the bare minimum he needs. Some will agree, but I suspect the Air has many admirers who don't need much storage at all, or depend on the cloud.

For those who missed it, check out Roman Loyola's detailed explanation of the new pricing.

17 hours ago, By David Price

M4 iPad Air performance benchmarks

The iPad Air that Apple announced on Monday comes with an M4 processor. But don't make the mistake of thinking it will perform the same as last year's M4 iPad Pro. As Apple acknowledges on its comparison page, the Pro model gets an extra GPU core and up to two more CPU cores.

iPad Air and iPad Pro M4 specs

Apple

How this will affect performance has become clearer with the emergence of Geekbench scores for the new iPad Air. Spotted by MacRumors, the numbers show the Air M4 performing almost identically to the iPad Pro M4 in the single-core test (3,714 vs 3,691) but noticeably worse in the multi-core segment (12,296 vs 13,663). This shouldn't be noticeable in real-world use, but it gives an idea of the margins between products that appear at first glance to have the same tech specs.

19 hours ago, By David Price

Expectations for today

As a quick recap, we've already heard about six new Apple products this week:

  • iPhone 17e
  • M4 iPad Air
  • M5 MacBook Air
  • M5 Pro/Max MacBook Pro
  • Studio Display
  • Studio Display XDR

Today is likely to be the last day of announcements for the time being, culminating in the "special experiences" (read: hands-on sessions with all the new gear) in New York, London, and Shanghai at 9am ET. So what has Apple got in store for us?

The obvious product to finish on is the budget MacBook, which Apple appeared to leak by accident with a regulatory filing yesterday. Based on that gaffe, it looks like this machine will be called the MacBook Neo.

It's also possible we'll get the anticipated update to the baseline iPad, possibly featuring an A19 processor. But the logical time to reveal that product was on Monday, alongside the new iPad Air. Perhaps Apple has decided to hold back the A19 iPad until later in the year.

19 hours ago, By David Price

Good morning!

Welcome to our European readers (plus any insomniacs in the U.S.) as we head into day three of Apple's big week of product announcements. It's just coming up to 9am here in London, and we'll be with you throughout the day and across the timezones, bringing you all the news as it happens.

1 day ago, By Michael Simon

PSA: Preorders begin tomorrow morning

If you're itching to buy one of the products Apple launched this week—the iPhone 17e, M4 iPad Air, M5 MacBook Air, M5 Pro/Max MacBook Pro, or Studio Display—preorders begin at 9:15 am ET on Wednesday. It's a strange time, but I assume it's related to the "Experience" event that kicks off at 9 am. We also expect at least one more product to be available for preorder: the new low-cost MacBook.

1 day ago, By Michael Simon

One more sleepless night!

There's one day left until Apple's "Experience" event in New York, London, and Shanghai. We expect to get our first look at Apple's low-cost MacBook (presumably named Neo). We'll be on site in New York to document the whole event, so bookmark us and come back tomorrow at 9am!

1 day ago, By Michael Simon

Studio Displays cut off Intel Macs

Apple's new Studio Displays draw a line in the sand between Intel and Apple silicon once and for all. So if you want one, you'll need at least an M1 Mac. And if you want to use ProMotion on the XDR display, you'll need at least a Mac with an M4 or M2 Pro/Max chip. It won't be long now until the Intel era is completely in the past.

1 day ago, By Michael Simon

Check out this retro reference in the M5 MacBook Pro ad

Apple has released an ad for its new M5 Pro and Max MacBook Pros, and if you look at the left side of the screen at the 1:25 mark, you'll see an original Bondi Blue iMac. The rest of the ad is pretty good too.

1 day ago, By Michael Simon

OS 26.3.1 update coming soon

As expected, Apple will release an iOS/iPadOS/macOS 26.3.1 update to support all of the new products launched this week. Apple's Studio Display tech specs page lists macOS Tahoe 26.3.1 and iPadOS 26.3.1 as a requirement for running the new displays. The update will probably arrive early next week, ahead of the new products arriving on March 11.

1 day ago, By Michael Simon

More details about the MacBook Neo

MacRumors has dug into the leaked regulatory filing to get some extra details about the new low-cost MacBook Neo. According to the filing, it will have two USB-C ports, MagSafe for charging, and Wi-Fi 7, though the site claims it will not have the N1 chip found in the iPhone, iPad Air, MacBook Air, and MacBook Pro. That's odd.

1 day ago, By Michael Simon

Still waiting for the iPad

Apple has already unveiled six new products this week:

  • iPhone 17e
  • M4 iPad Air
  • M5 MacBook Air
  • M5 Pro/Max MacBook Pro
  • Studio Display
  • Studio Display XDR

We fully expect tomorrow to bring the cheap MacBook, but what about the iPad? Is Apple holding it back for tomorrow for some reason related to the MacBook? 🤔

iPad A16 kids drawing

Where are you, iPad?

Foundry

1 day ago, By Michael Simon

The 'cheap' MacBook comes into focus

Apple shuffled its entire MacBook pricing structure for two reasons: to help offset the rising cost of RAM and to make room for what we now think will be called the MacBook Neo. Here's what the new strategy means for the cheap MacBook coming tomorrow.

2 days ago, By Michael Simon

New MacBook pricing is complicated

Apple has increased prices across the board for the new M5 MacBooks, but it also lowered them? Wait, what? Yeah, it's confusing–we explain it all here.

2 days ago, By Michael Simon

The M5 MacBook stuck with Wi-Fi 6E

While Apple has brought Wi-Fi 7 to the iPhone 17, iPhone 17 Pro, iPhone 17 Pro Max, iPhone Air, M5 iPad Pro, M4 iPad Air, M5 MacBook Air, and M5 Pro/Max MacBook Pro, the M5 MacBook Pro still has Wi-Fi 6E thanks to its use of an older Broadcom wireless chip. It also misses out on Bluetooth 6 and Thread support. Bummer.

2 days ago, By Michael Simon

Apple's new pro monitor doesn't have a horrible price

Alongside a simple update to the Studio Display that brings a better camera and Thunderbolt 5, Apple has unveiled a new Studio Display XDR for professionals that's significantly cheaper than before—and comes with a stand!

2 days ago, By Michael Simon

MacBook 'Neo' incoming?

As spotted by MacRumors, Apple has filed a regulatory document with the EU for a laptop with the name MacBook Neo. Could Apple be introducing a brand new laptop line tomorrow, inspired by Keanu Reeves' 'Matrix' character?

2 days ago, By Michael Simon

The MacBook Pro was worth the wait

Apple's new high-end chips are a huge leap over the M4 models, offering greater customization, higher starting storage, and a powerful new architecture built for pro users. Read all about them before you can preorder them tomorrow.

2 days ago, By Michael Simon

A fully loaded MacBook Pro is actually a bit cheaper

If you're looking to get the most MacBook possible—a 16-inch M5 Max MacBook Pro with a 40-core GPU, 128GB of RAM, 8TB of storage, and a nano-texture display—it'll run you $7,349. That's actually a little less than the same M4, which cost $7,549 previously. A bargain!

2 days ago, By Michael Simon

Apple introduces new Fusion Architecture for M5 chips

The M5 Pro and M5 Max MacBook Pro chips bring a new Fusion Architecture that Apple describes as a design that "combines two dies into a single system on a chip (SoC), which includes a powerful CPU, scalable GPU, Media Engine, unified memory controller, Neural Engine, and Thunderbolt 5 capabilities." In addition, they contain the "world's fastest CPU core" and "significantly improve performance by up to 30 percent for pro workloads."

2 days ago, By Michael Simon

M5 MacBook Pros now start at 1TB

Apple eliminated the 512GB versions of the M5 MacBook Pro today, raising the starting storage to 1TB and prices by $100-$200 across the board. So you're paying a bit more but getting a lot more storage. For example, the 14-inch M5 Pro MacBook Pro now starts at $2,199 ($200 higher than before) but you get 1TB of storage instead of 512GB (which cost $2,399 before).

2 days ago, By Michael Simon

The M5 MacBook Air costs $100 more

Apple's new MacBook Air has an M5 chip, an N1 wireless chip with Wi-Fi 7, and double the starting storage (512GB) with faster SSD technology, but it now starts at $1,099. That's $100 more than the M4 with 256GB of storage but $100 less than the M4 with 512GB of storage. So… yay?

2 days ago, By Michael Simon

$3,299 Studio Display XDR replaces $4,999 Pro Display

Apple unveiled two new 27-inch displays today: a new Studio Display for $1,599 that brings a 12MP Center Stage camera and Thunderbolt 5 ports, and a brand new Studio Display XDR with an advanced mini-LED backlight, 2,000 nits of peak HDR brightness, and 120Hz ProMotion for $3,299. It replaces the 32-inch Pro Display XDR.

Apple Studio Display and Studio Display XDR

Apple

2 days ago, By Michael Simon

New Macs are here!

Apple has unveiled the M5 MacBook Air and M5 Pro/M5 Max MacBook Pro, along with a new Studio Display and Studio Display XDR. Full coverage to follow, so stay tuned!

2 days ago, By Michael Simon

25 minutes to go!

Apple launched the iPad Air and iPhone 17e at 9am ET yesterday, and we're expecting the new things to launch at the same time today, so get ready for something new in just half an hour. M5 MacBook Air? M5 Pro and M5 Max chips? iPad with Apple Intelligence? Stay tuned!

2 days ago, By David Price

Color me disappointed

The iPhone 17e has a number of limitations, including slower MagSafe charging speeds and a binned A19 processor. But the biggest disappointment for some analysts is the weakest pink in the history of tech. Take a look at Soft Pink:

Recent pink iPhones: iPhone 15 Plus, iPhone 16 Plus, iPhone 17e

Apple

2 days ago, By Michael Simon

More announcements on the way!

Hey everyone, it's Tuesday, and we're expecting Apple to announce its second round of new products at 9am ET, about two hours from now. New MacBooks will likely make an appearance today, along with the entry-level iPad. Get ready!

2 days ago, By Michael Simon

The new iPad Air isn't really new

The new iPad Air website looked familiar, so I went to the Internet Archive to look at the M3 website, and sure enough, very little has changed. If Apple doesn't care enough to update its website, why should we care about buying one?

2 days ago, By Michael Simon

Apple adjusts trade-in values

If you're planning to upgrade your iPhone, iPad, or Mac this week with one of the new models, you're going to get a little less for your trade-in through Apple. Most are only a few dollars lower, but you'll get about $150 less if you're trading in a 16-inch MacBook Pro. You can check the value for your specific model on Apple's Trade In page.

2 days ago, By Michael Simon

Watch us break down the new products

Before we head off for the night, check out our roundup video of the M4 iPad Air and iPhone 17e. And we'll see you all in the morning for round 2!

2 days ago, By Michael Simon

iPhone 17e video shows off new features

Apple has posted a minute-long iPhone 17e video that takes a different approach than its usual advertising. It features a voiceover running through the new features in a highly edited, scattershot fashion. Watch it below:

2 days ago, By Michael Simon

OS 26.4 beta intrigue

This might be just a quirk unrelated to this week's announcements, but Apple released the third OS 26.4 beta updates to developers today for iOS, iPadOS, watchOS, tvOS, and visionOS—everything but macOS. Could that signal that new Macs are on the way on Tuesday, and Apple is still working on it? We'll have to wait until tomorrow to see.

2 days ago, By Michael Simon

New iPhone case and strap colors!

Along with the iPhone 17e, Apple has announced a new set of colors for its accessories. The iPhone 17, 17 Pro, and 17 Pro Max silicone cases now come in Bright Guava (pink) and Vanilla (off-white), while the iPhone 17 silicone case gets an Electric Lavender (purple) color as well. The iPhone 17e silicone cases also come in Bright Guava and Vanilla, along with a new Soft Pink option to match the new phone. The same color options extend to Apple Watch bands and Crossbody straps.

iphone 17e case in soft pink; iphone 17 case in vanilla; iphone 17 pro case in bright guava

Apple

2 days ago, By Michael Simon

Deal: M3 iPad Air for $150 off

If you're looking for a new iPad Air, you can save $150 on the 5G version of the 11-inch M3 model right now. That's basically getting the cellular connectivity for free, and you're not missing out on much compared to the new model. We don't expect this deal to last very long (non-5G variants are already sold out), so go grab it if you want one.

2 days ago, By Jason Cross

iPhone 17e: It's not exciting but it's still a good deal

The iPhone 17e may not be a huge upgrade over last year's 16e, but for most users it still delivers a good experience and value at a price that's 25% less than the iPhone 17.

3 days ago, By Michael Simon

Lots more planned for this week

Apple announced the new iPad Air and iPhone 17e this morning, but that's just the start of a very busy week. Still to come:

Plus, we could get a new version of iOS 26.3 to support the new products. We'll have lots more coverage through today and the rest of this week, so keep this page open in your browser!

3 days ago, By Michael Simon

iPad Air: Heigh-Ho, ho-hum

There's not much to say about the new iPad Air, but it's still a great tablet at a great price.

3 days ago, By Michael Simon

iPhone 17e: More MagSafe disappointment

David Price digs into the iPhone 17e announcement to separate the good (MagSafe, storage pricing) from the bad (MagSafe speeds, binned A19 chip).

iPhone 17e MagSafe

Apple

3 days ago, By Michael Simon

Where's the iPad?

Apple updated the iPad Air today but didn't launch a new base-level iPad, which is also expected to get a chip refresh this week. It seems like the two models go hand-in-hand, so maybe Apple has something bigger for the entry-level model?

3 days ago, By Michael Simon

Even Apple can't find anything new about the iPad Air

Apple's tagline for the iPad Air is "Now supercharged by M4," which is the same as before, when it was "Now supercharged by M3."

3 days ago, By Michael Simon

Odd preorder time

The new iPad Air and iPhone 17e will be available for preorder beginning Wednesday, March 4 at 9:15am ET. That differs from the usual timing, which is traditionally a Friday at 8am ET, but it's probably to coincide with Apple's "Experience" event, which begins at 9am ET on March 4.

3 days ago, By Michael Simon

iPhone 17e tidbits

A few things I've noticed about the iPhone 17e:

  • It has a 4-core GPU compared to the iPhone 17's 5 cores
  • It's got a Ceramic Shield 2 front with 3x better scratch resistance
  • Same 26-hour battery life
  • It has slower 15W MagSafe charging
  • It doesn't have the N1 chip

3 days ago, By Michael Simon

No new colors for the iPad Air

Apple's iPad Air is available in the same four colors as the previous two models: blue, purple, starlight, and space gray. Sorry, iPad Air fans.

iPad Air in blue

That’s an image of the M3 iPad Air, however you’d by no means know.

Apple

3 days ago, By Michael Simon

iPhone 17e is here too!

Apple's new entry-level iPhone has the C1X modem, A19 processor, MagSafe, and double the starting storage (256GB). It also now comes in a new soft pink color in addition to black and white. But it still has a notch. It starts at the same price, $599.

iPhone 17e in black, white, and pink

Apple

3 days ago, By Michael Simon

The M4 iPad Air has arrived!

Apple has announced the new iPad Air with an M4 chip, an N1 wireless chip and C1X modem for Wi-Fi 7 and 5G, and 12GB of RAM for the same $599 and $799 starting prices.

3 days ago, By Michael Simon

30 minutes away!

The first round of new products should arrive at the top of the hour on Apple.com and the Apple Newsroom. iPhone 17e? iPad Air? MacBooks!? We'll know soon.

3 days ago, By Michael Simon

First products expected at 9 am ET

Based on previous announcements, we expect Apple to unveil the first of its new products on Apple.com at 9 am ET (6 am PT/2 pm GMT). So stay tuned!

3 days ago, By Michael Simon

A MacBook under $599?

Bloomberg's Mark Gurman reports that Apple believes its new MacBook is an "incredible value" and that it will "drive a serious number of switchers from Windows machines and Chromebooks." This could just be Apple's own hype machine at full tilt, but it could also signal that the new MacBook will be cheaper than we think. Could $499 be possible?

3 days ago, By Michael Simon

Apple Stores ready for big crowds

According to Bloomberg's Mark Gurman, Apple expects a "major rush" of customers at its stores following this week's announcements. I assume that's because everyone will want to check out the new low-cost MacBook, because I doubt people will be rushing to get a look at the M4 chip in the iPad Air.

3 days ago, By Michael Simon

Apple March 'experience' event: What to expect

While Apple is promising a week of announcements, that really just means Monday, Tuesday, and Wednesday. But those three days will be packed with lots of new products. We're expecting a number of updates to existing devices, including:

3 days ago, By Michael Simon

Welcome to the Apple experience live blog!

Starting this morning, Apple will unveil a variety of new products over the course of the next three days. We expect products to arrive at 9 am ET, so stick around for the news, come back for analysis of what Apple releases, and check out our social media sites (Facebook, Threads, and Bluesky) to join the conversation.

GLP-1 diabetes medications lower risk of all kinds of substance use disorders, study finds



People who took GLP-1 drugs had a lower risk of all kinds of drug and alcohol addiction

A large epidemiological study of more than 600,000 veterans with diabetes suggests GLP-1 weight-loss medications may reduce drug- and alcohol-related overdoses and deaths

a hand holds a cigarette over an ashtray next to a beer on the table

SimpleImages/Getty Images

From online forums to the clinic, people have reported that diabetes and weight-loss drugs such as Ozempic and Wegovy can dramatically quell their compulsive behaviors, including cravings for alcohol and nicotine. The swell of anecdotes has spurred a wave of preliminary trials and one-off studies that have mostly investigated specific substance use disorders individually. But researchers haven’t grasped how broad the effects might be.

Now a large epidemiological study published today in the BMJ suggests that glucagonlike peptide 1 (GLP-1) medications, as these drugs are called, reduce the risk of all kinds of substance use disorders, including those involving alcohol, nicotine, cannabis, opioids and cocaine. Not only did GLP-1 drugs appear to prevent people from developing these addictions, but they also lowered rates of life-threatening events, including drug-related overdoses and deaths.

Seeing reductions across every disorder, “I was like, ‘Is this real?’ because there is nothing like it,” says clinical epidemiologist Ziyad Al-Aly, lead author of the study and chief of research and development at the U.S. Department of Veterans Affairs St. Louis Health Care System. “This is an obesity and diabetes drug; this is not an addiction drug. So the big surprise was: it was consistently working across all substances.”




The analysis followed more than 600,000 people with type 2 diabetes in the U.S. VA health care system for three years. Participants who took GLP-1 medications for diabetes were compared with those on another diabetes treatment that has not been linked to reducing addiction. In veterans with no history of a substance use disorder, GLP-1 drugs were associated with a 14 percent reduced risk across all substance use conditions, with the largest drop, 25 percent, seen in opioid use disorders.

The widespread preventive effects “didn’t surprise me, but it’s really good to see,” says Patricia “Sue” Grigson, a neuroscientist and addiction researcher at the Pennsylvania State University College of Medicine, who was not involved in the study. She also notes that the protective benefits took effect in the first year and persisted through the third year of observation.

The study also looked at people who had an existing substance use disorder. Those results were striking: starting a GLP-1 medication was linked to a 31 percent reduction in substance-use-disorder-related emergency department visits, a 26 percent decline in related hospital admissions, a 39 percent decrease in overdoses and a 25 percent reduction in suicidal ideation or attempts. Drug-related deaths were cut by 50 percent.

The link between GLP-1s and the reduction in drug-related deaths “is really powerful,” says Alex DiFeliceantonio, a neuroscientist who studies appetite at Virginia Tech and wasn’t involved in the new research. This finding is particularly appealing for treatments, she says.

Exactly how GLP-1 medications might diminish drug cravings and curb addiction is still unclear. Al-Aly suggests it may have to do with overlapping reward pathways in the brain.

“People taking GLP-1 drugs often describe the quieting of ‘food noise,’ the constant mental chatter about food and eating,” he says. “I think something similar may be happening with addiction: a quieting of what I think of as ‘drug noise,’ the relentless craving that pulls people back to a substance.”

The new weight-loss medications mimic the gut hormone GLP-1, which ramps up insulin production and satiety. The hormone’s receptors are also found in the brain’s mesolimbic system, circuits that control reward, motivation, impulse control and stress, Al-Aly says. These circuits are active in animal studies of addiction. If GLP-1 drugs work similarly on this brain circuit in humans, they “could actually dampen or put the brakes on cravings” of all kinds, he says. Further investigation of GLP-1 could reveal a “common biologic pathway at the root of all addictions,” one that might eventually be druggable, he adds.

DiFeliceantonio notes that another important part of the mechanism likely lies in the gut: her team’s recent research suggests that GLP-1 medications’ ability to slow digestion may also play a role in people cutting back on alcohol consumption.

The new study’s population was largely older white male veterans. A subset of women in the dataset showed similar trends in reductions, however. The research also didn’t examine varying dosages or fully compare all the types of GLP-1 drugs.

When it comes to GLP-1 drugs, “we need to really start figuring out what are the most effective ones and what’s the most effective dose” for possible addiction treatments, DiFeliceantonio says. Earlier studies offer some clues: research has shown different GLP-1 drugs have efficacy against alcohol use disorder, smoking and opioid craving, even at low doses. Grigson is now leading a multisite clinical trial testing Ozempic as an opioid use disorder treatment.

“Some [GLP-1] medications are going to work better for some people than others,” Grigson says. “We have a lot to learn still about the appropriate regimen.”


Multiple Agents Auditing Your Callaway and Sant’Anna Diff-in-Diff (Part 3)



Last week I had Claude implement a “multi-analyst design” across five Callaway and Sant’Anna packages (2 in Python, 1 in R, 2 in Stata) on a fixed dataset, with a common baseline, ~20 covariates to choose from for satisfying conditional parallel trends, and the same target parameter. You can find that here:

But in that post, I didn’t finish reporting it. So I filmed myself going through the “beautiful deck” it produced. And in this substack, I’ll explain in words what it found, but the gist is that there was a surprisingly large amount of variation between packages, but also within packages, in the results across 15 total runs. Here’s the video:

And I just wanted to again thank everyone for supporting the substack. It has been very nice to write about Claude Code these last two and a half months since my first post on December 13th, 2025. I’ve been learning a lot over these 28 posts (!). I really enjoy trying to figure out how to use Claude Code for “practical empirical research”: everyday, run-of-the-mill, sort of applied stuff. And it’s been a bit like trying to ride a wild stallion. So that’s been fun too, as I needed the excitement.

If you are a paying subscriber, thank you. I appreciate your support. And if you aren’t a paying subscriber, remember that the Claude Code posts are all free for around four days, but then they go behind the paywall. The more typical non-AI posts are usually randomized behind the paywall, but since Claude Code is a new thing and I want to help people see its value for practical empirical work, I post these for free, they sit open for four days, and then they all go behind the paywall. Hopefully that’ll be enough to help you see what’s happening. But perhaps this is the day you feel like becoming a paying subscriber! At only $5/mo, it’s a deal!

Breaking down what I found

So if you go back to that earlier entry (part 2 specifically), you’ll get the breakdown of what this experiment is about. And if you want to watch the video, that’ll help too. But let’s now dig in. You can also review the “beautiful deck” if you want.

There are two “target parameters” of note in this exercise. The first is the event study, which aggregates up to all relative time periods, and if cohorts differ in size, and the weights are proportional to cohort size, then it’s a weighted average using cohort size as weighting in the appropriate places. I am also balancing the event studies, which means the same cohorts appear in all l=-4 to l=+4 relative time periods. Therefore there is no compositional change in who is and isn’t in each of these event study plots.

So that’s one of the parameters. And the other is that I’ll then take a simple weighted average over l=0 to l=+4 (thus each weight there is 0.2 of an already weighted average that used cohort size as weights for the event studies). And therefore in these, we’ll see both.
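Under hypothetical cohort sizes and ATT(g,l) values (every number below is made up purely for illustration), the two aggregations can be sketched like this:

```python
import numpy as np

# Hypothetical ATT(g, l) estimates for three cohorts (rows) at
# relative times l = 0..4 (columns); cohort sizes are made up too.
att_gl = np.array([
    [0.10, 0.20, 0.30, 0.40, 0.50],   # cohort g = 2010
    [0.20, 0.25, 0.30, 0.35, 0.40],   # cohort g = 2011
    [0.05, 0.10, 0.15, 0.20, 0.25],   # cohort g = 2012
])
cohort_sizes = np.array([100, 200, 100])

# Target 1: event-study coefficients, a cohort-size-weighted average
# of ATT(g, l) at each relative time l.
weights = cohort_sizes / cohort_sizes.sum()
event_study = weights @ att_gl          # one coefficient per l

# Target 2: a simple average over l = 0..4, so each event-study
# coefficient gets weight 1/5 = 0.2.
overall_att = event_study.mean()

print(event_study.round(4))  # [0.1375 0.2    0.2625 0.325  0.3875]
print(round(overall_att, 4)) # 0.2625
```

With balanced cohorts across all l, the same weights apply at every relative period, which is exactly what keeps composition fixed in the plots.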

Forest plot of average point estimates

The first, though, is a forest plot of the 15 estimates across all five language-packages. And this is pretty interesting, because notice that while all of them are positive effects, 2 of them have huge confidence intervals, and one of them (python’s diff-diff run 1) is large and statistically different, not just from zero, but from 11 of the others. The rest are around 0.4 on average.

Event studies

Here are the individual point estimates from the event studies all laid on top of one another:

So you can see some odd things here. First, it appears that two of our runs didn’t obey orders to use the common baseline. Notice that there are two graphs rising from around l=-3. That’s the python package diff-diff. I’ll show you that in a second. The others are similar-ish, but they bear looking at more closely. I’ll discuss them in order. But before I do, let’s just remind ourselves which runs chose which covariates with this graphic:

Python’s differences package

First I looked at the three runs from differences, written in Python. Now here’s an odd thing though: why are these 95% confidence intervals huge? Look how they range practically from -4 to +4. Why is that? So that’s something I want to better understand, but for now just notice that they do, and that the differences within this are the number of covariates chosen: 2, 4 and 6. You can see above which ones those were.

Python’s diff-diff

Next is Isaac Gerberg’s ambitious diff-diff package, which he has been building all this year. What did we find there? You can review the covariates chosen above to interpret these results.

Okay, well this is a strange one. Why? Because Claude really refused to do what I said here, which was to use g-1 as the common baseline. In two of them, it used g-5 as the baseline. And in one of them, it did use g-1, but didn’t estimate the pre-treatment coefficients.

This is a good place to just pause and say something about using g-5 as your baseline. That’s perfectly fine to do. But note that when you do it, while your target parameter stays the same (in each of these we’re estimating aggregated ATT(g,l) parameters), the parallel trends assumption is changing. And that’s because in Callaway and Sant’Anna, treatment effects are estimated using whichever baseline you pick and are always “long differences” too. The pre-treatment coefficients can be either long or short differences, but the treatment effects can only be long differences. Which means the parallel trends assumption is always long differences too. In fact, when in doubt, just remember that in CS, whatever the description of the characteristics of your ATT(g,t) target parameter is (e.g., covariates, weights), it carries over to your parallel trends assumption too, only it’s then a simple diff-in-diff 2×2 estimated as long differences in the potential outcome, Y(0), itself.

Which is to say that while the target parameters are the same here, the parallel trends assumption is shifting around. And that’s fine when you have balanced pre-trends, but here we don’t, which is itself interesting, and as a consequence the python differences package is finding much larger coefficients in post and pre than others do. It doesn’t seem like an error so much as Claude not understanding the documentation and therefore not being able to figure out how to implement what I asked. This is something I’m working on next.
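To make the baseline point concrete, here is a toy 2×2 long-difference calculation with made-up group means. It only illustrates why moving the baseline from g-1 to g-5 changes the estimate when pre-trends are not parallel, even though the target parameter is nominally the same:

```python
# Toy long-difference 2x2: the ATT(g,t) comparison is the change in
# outcomes from a chosen baseline period to t, treated vs. comparison.
# All numbers are hypothetical; periods stand in for g-5, g-1, and post.
y_treated = {2005: 1.0, 2009: 1.4, 2012: 3.0}
y_control = {2005: 1.0, 2009: 1.2, 2012: 1.8}

def att_long_diff(base):
    """Difference-in-differences using `base` as the baseline period."""
    return ((y_treated[2012] - y_treated[base])
            - (y_control[2012] - y_control[base]))

print(round(att_long_diff(2009), 2))  # baseline g-1: 1.0
print(round(att_long_diff(2005), 2))  # baseline g-5: 1.2
# The two answers differ because the groups drifted apart between 2005
# and 2009, so the parallel-trends assumption spanning the longer
# difference is a genuinely different assumption.
```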

Original R did

Next I looked at the event study plots for R’s did. And this one again chose different sets of covariates, but here’s what’s worrying: the one that chose 3 (out of 21 covariates, mind you) shows signs of trends, while the one with only one covariate looks flatter. So which is it? Which of those three do you think satisfies conditional parallel trends?

Think about it for a second. How many papers have you ever seen that show variation in estimates across covariate selection and package selection? How about none? What do you typically see instead? You more often see robustness to which diff-in-diff estimator, the famous “all the diff-in-diffs plotted on top of one another” graphic. But you don’t ever see someone exploring covariates or packages. Moving on.

Stata: csdid2 and csdid

And finally, I look at the two Stata packages: the commonly used csdid (available from ssc) and csdid2 (now archived). I included csdid2 because Claude chose it, and because I think people may want to use it since it’s fast, though in our case it was not as fast as R. There’s not a lot to report except to say that depending on which and how many covariates we condition on, we may see signs of rising trends, some more worrisome than others.

And then I looked at csdid. Claude said they chose identical covariates. But it’s not clear to me then why the pre-trends are different. I didn’t dig into this because I’m rerunning the whole thing now with 20 agents per package (and one more additional package) to get 120 estimates, and then I’ll dig into that. But here’s that event study.

Variation Within and Across Packages

Okay, so now this is the weird result. The more covariates included, the larger the point estimates altogether. You can see that here, but notice that most of this is coming from the variation across packages, which you can see if you look closely at the colors. That big outlier on the far right is actually the python diff-diff run that used all 8 covariates (hence why it’s up there), but also used g-5 as its baseline. And remember, because of slight rising trends in the pre-period, by the time it gets to the end of the periods, it’s got a head start and is rising. That’s why it’s averaging out to 1.75.

But even throwing out that outlier, you can still see the correlation: the more covariates included, the larger the ATT estimate we find, even though all of these agents use the same dataset, method, baseline, estimator and doubly robust option, given only the instruction to “choose covariates that satisfy parallel trends”. So simply by picking more or fewer covariates, you get larger or smaller effects.

But then the other dimension where there’s a lot of variation is between packages. There is variation within packages too, but around 77% of it comes from the choice of which software package to use. See here:
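A between/within split like the one behind that 77% figure is a standard variance decomposition. A sketch with made-up estimates standing in for the real runs (the numbers and the 3-runs-per-package layout are illustrative only):

```python
import numpy as np

# Hypothetical point estimates: 3 runs from each of 5 packages.
estimates = {
    "differences": [0.38, 0.42, 0.45],
    "diff-diff":   [0.90, 1.10, 1.75],
    "did (R)":     [0.17, 0.30, 0.35],
    "csdid":       [0.40, 0.44, 0.48],
    "csdid2":      [0.41, 0.43, 0.47],
}
all_vals = np.concatenate([np.array(v) for v in estimates.values()])
grand_mean = all_vals.mean()

# Between-package sum of squares: spread of package means around the
# grand mean, weighted by runs per package. Total: spread of all runs.
ss_between = sum(len(v) * (np.mean(v) - grand_mean) ** 2
                 for v in estimates.values())
ss_total = ((all_vals - grand_mean) ** 2).sum()

between_share = ss_between / ss_total
print(f"share of variation between packages: {between_share:.0%}")
```

The complement, 1 - between_share, is the within-package share driven by covariate choice on a given package.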

Supporting Reasons

Now I want you to think back to the last time you saw a talk using diff-in-diff where the researcher listed covariates. Did they spend more time explaining the estimator, or did they spend more time explaining the covariates and the rationale for each one of them? And did they ever mention which package they used? I bet good money they talked more about the CS estimator and the assumptions than they did about covariates or the package, and probably didn’t mention the package, because why would they? Aren’t all the packages giving the same thing?

So I don’t know yet the answer to that last thing, because in no single case did any two agents use the same covariates across packages. So that’s still something I’ll look into, but it’s not here because this isn’t about software robustness; it’s about discretion on a single dimension: covariate selection. And I bet you have rarely heard someone spend as much time explaining what and why those chosen covariates as they did talking about the actual econometrics, right? The agents were instructed to explain their rationale, and here it was:

These are basically the reasons they gave for covariate selection, and they’re listed above. Now I want you to imagine for a minute that you’re in a talk, and someone does explain their choice. Would you object to their explanation? Probably not. You’re more likely to object to the estimator (“why are you using TWFE?”) than you are to the covariates, and yet covariates are driving variation in estimates in this experiment. So just let that sink in.

How big of a difference? One of the R agents using did chose only state fixed effects (remember this is municipality data, so there’s variation within a state fixed effect for estimating that first stage). For it, the point estimate was 0.17.

But now look at the agent given python’s diff-diff, which chose 8 covariates. Its estimate is almost 11 times larger. And both documented their reasons, and arguably all of them were defensible. I don’t know why Claude said this is a six-fold increase when 1.83/0.17=10.765, but whatever.

How big are the “non-standard errors”?

Now to the best part. Given we have 15 estimates, aggregated into that simple ATT, weighted over all post-treatment ATT(g,l) from 0 to 4, what can we do to measure the uncertainty in this parameter? Let’s pause for a second and review what the standard error means in the first place.

Under repeated sampling, you get a sequence of hypothetical samples from which you’d run CS on each one, and then when you look at the distribution of those estimates, you have a random variable that varies according to that model’s application to its own i.i.d. drawn sample. And the standard deviation of that sampling distribution is what our standard errors are meant to capture. In the normal distribution, 95% of the probability is within roughly 2 standard deviations of the mean. And thus the p-value is based on the sampling distribution of the t-statistic: what percent of t-statistics are greater than 1.96?
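That repeated-sampling story is easy to simulate. A minimal sketch, using a sample mean in place of the CS estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw many i.i.d. samples, estimate the mean of each, and look at the
# spread of those estimates. For a sample mean, the analytic standard
# error is sigma / sqrt(n), and the simulated spread should match it.
sigma, n = 2.0, 100
estimates = [rng.normal(0.0, sigma, size=n).mean() for _ in range(5000)]

simulated_se = np.std(estimates, ddof=1)
analytic_se = sigma / np.sqrt(n)   # 0.2

print(round(simulated_se, 2), analytic_se)
```

The point is that the standard error is a statement about this hypothetical redrawing of the sample, nothing else.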

Okay, but then what about if we have a fixed sample, a fixed dataset, but only allow for discretion in covariate selection and package? And we allow n to be 15, with 3 runs per package? Well, that’s not what our standard errors are meant to capture, though that too will have a distribution. And if it has a distribution, it has a variance, a mean and a standard deviation. But I also have the standard error for each run, as well as the point estimate.

So what we have here is just that, summarized. And check this out. The purple box on the right is the standard deviation across all my point estimates (including the outlier from python diff-diff). And it’s a 0.442 standard deviation. But if I take the average of all 15 standard errors, that’s 0.185. And thus we get a 2.4 times larger standard deviation in our 15 estimates than the average standard error for those 15 estimates.
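The comparison is just the ratio of those two summary numbers; a small helper (the function name is mine) makes the general computation explicit:

```python
import numpy as np

def non_standard_error_ratio(estimates, standard_errors):
    """Spread of point estimates across analyst runs, relative to the
    average reported (repeated-sampling) standard error."""
    return np.std(estimates, ddof=1) / np.mean(standard_errors)

# The post's two summary numbers: the SD of the 15 point estimates is
# 0.442 and the average of their 15 standard errors is 0.185.
ratio = 0.442 / 0.185
print(round(ratio, 1))  # 2.4
```

With the full run-level data, `non_standard_error_ratio` would compute both pieces directly from the 15 (estimate, standard error) pairs.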

So now let that sink in. The standard errors we report are based on repeated sampling. They’re designed to measure the statistical uncertainty associated with the sample and its correspondence to the population parameter. They’re not meant to measure uncertainty over which team worked on the project. And yet there it is, because each team is picking a different covariate combination even with the same dataset and even with the same estimator and even with the same experimental design.

So what’s next?

So next on the menu are a few things. They are:

  1. I’ll add in the doubleml package. It’s a different architecture, but it’s a reasonable one to use. But otherwise it’s all five of those I reviewed here plus a sixth.

  2. I’ll increase the runs from 3 to 20 agents per package, giving me 120 estimates when it’s finished.

  3. I’ll continue to use CS in all of these, with the exception that doubleml will be different in its own implementation, but that aside: same donor pool of covariates, same not-yet-treated comparison group, same common baseline (hopefully fixed this time for python’s diff-diff), etc.

  4. But now I’m going to allow for any covariates, ranging from none to all, and any combination it wants. The only rule is “choose what satisfies conditional parallel trends”.

  5. Each run will compute the standardized difference in means on covariates using averaged baseline values. It’s not ideal, but whatever; I don’t want this to go on for a million years.

  6. And each will now have the option to use IPW, regression adjustment or doubly robust, and if Stata, then whichever DR option it wants.

So I’m just going to be carefully documenting all of this, and maybe at the end of it I’ll have learned something, and if so, hopefully all of us will learn something. But I’m becoming more and more of the opinion that we should be documenting package and covariate selection much more carefully than we are. Packages all use the same assumptions and are supposed to be identifying the same parameters. Their differences should be things like speed and efficiency in the CPU. They shouldn’t be actual differences in the numbers calculated. Standard errors can differ due to bootstrapping, which uses random seeds, but the point estimates should be the same given the same covariates and the same modeling of those covariates in the first stage.

But the covariate selection and how those covariates are modeled: those are the other two things I’m trying to pin down. So let’s see what we find. I’ll have text from each agent explaining their reasons, and I may send all 120 of those reasons to openai to have gpt-4o-mini classify them. That may be overkill, but I love doing that and so may do it again.

That’s it! Stay tuned!

Getting Started with iOS App Observability



When you release your app to the store and people around the world start using it, you’ll soon find out that there are some odd bugs people are experiencing, and it’s not easy to have a full picture of what might be going wrong.
It can be easy to fix a crash from crash logs. But what if a feature isn’t working and you’re out of ideas about what could be happening, and the only thing you can think of is “I wish I could debug the app on this user’s phone”, which of course is rarely achievable.

In this tutorial, you’ll learn the basics of app observability and understand how your app behaves in the hands of your customers.

Along the way, you’ll:

  • Create an account on Grafana Cloud.
  • Integrate the OpenTelemetry framework into your iOS app.
  • Send telemetry information from your app and visualize it on Grafana.

You can learn more about app observability in the App Observability Module.

Getting Started

You’re going to start off by creating a free account on Grafana.com.

You can use a Google, GitHub, Microsoft or Amazon account to link it directly with Grafana.

Once you do so, Grafana will ask you to create a Stack, which will be your own portal.

It can take a couple of minutes when you open it for the first time.

Once it’s open, go to the “Home” page from the side menu. This will take you to a Getting Started Guide page.


Grafana Home Menu

From the list of options, choose OpenTelemetry.


Select OpenTelemetry in Grafana Homepage

This will take you to the OpenTelemetry Guide page for OpenTelemetry integration.

Under Choose your setup, select Quickstart, then select Java as an example.

OpenTelemetry Setup page

Enter the name of the app that you’ll connect to Grafana. Type themet, then tap Create token.

Generate Token - Entering Application Name

This will bring you to the setup you need to connect to Grafana. Under the “Add automatic instrumentation” section, you’ll need two things:

  • The authentication token from the OTEL_EXPORTER_OTLP_HEADERS parameter (hidden in the screenshot).
  • The URL of your Grafana stack from the OTEL_EXPORTER_OTLP_ENDPOINT parameter.

"Authorization=Basic [YOUR_TOKEN]"

Copy the stack URL, and generated Token

Store these values, as you’ll be using them throughout the module. Then press the Continue button to finish. You should see a confirmation message saying “You’re all set!”.

Integrating the OpenTelemetry Framework

To get started, open the sample project in the starter folder. It’s a mobile app that lets you search the Metropolitan Museum of Art with any keyword and lists the results on screen.

The Homepage of The Met app

Start by adding OpenTelemetry’s SPM package from https://github.com/open-telemetry/opentelemetry-swift.

Xcode will ask you to choose which targets to integrate into your project. Choose the following:

  • OpenTelemetryApi
  • OpenTelemetryProtocolExporter
  • OpenTelemetryProtocolExporterHTTP
  • OpenTelemetrySdk

Setting up your OpenTelemetry Tracing

In the project navigator, create a new folder under the project file and name it Observability, then create a new Swift file named OTelSpans.swift in that folder. Replace the code in the new file with the following:


import Foundation
import OpenTelemetryApi // 1
import OpenTelemetrySdk
import OpenTelemetryProtocolExporterCommon
import OpenTelemetryProtocolExporterHttp
import ResourceExtension

public class OTelSpans {
  private static var shared = OTelSpans() // 2

  var grafanaHttpExporter: OtlpHttpTraceExporter! // 3
  // 4
  let grafanaToken = "" // Set your Grafana token here
  let grafanaEndpoint = "" // Set your URL here

  private init() { // 5
  }
}
  1. First you import the targets you added to the project through SPM.
  2. You create a singleton instance of the class you created, OTelSpans, and you’ll use the shared instance across the project.
  3. The OtlpHttpTraceExporter is the integration object that creates the connection between the OpenTelemetry framework in your app and Grafana.
  4. Set these properties to the token and URL you copied from Grafana in the previous section. The integration won’t work without them.
  5. Finally, an empty private init method. You’ll be filling it in now.

Add this to the private initializer:


let endpointURL = URL(string: "\(grafanaEndpoint)/v1/traces")! // 1
let grafanaHeaders = OtlpConfiguration(headers: [("Authorization", "Basic \(grafanaToken)")],
                                       exportAsJson: true) // 2

grafanaHttpExporter = OtlpHttpTraceExporter(endpoint: endpointURL,
                                            config: grafanaHeaders) // 3

let spanProcessor = SimpleSpanProcessor(spanExporter: grafanaHttpExporter) // 4

OpenTelemetry.registerTracerProvider(tracerProvider:
                                     TracerProviderBuilder()
  .with(resource: DefaultResources().get())
  .add(spanProcessor: spanProcessor)
  .build()
) // 5
  1. Create a URL instance with the Grafana server (named endpointURL here so it doesn’t shadow the grafanaEndpoint property).
  2. Create the configuration Grafana needs to authenticate the connection against the Grafana stack you created. It’s a tuple of strings, and it specifies that the data is to be sent as JSON.
  3. Create an instance of OtlpHttpTraceExporter using the URL and the configuration.
  4. Create a processor that will be responsible for sending the spans through your exporter. You’ll be using the simple processor for now.
  5. Finally, you register the trace provider through its builder. The builder can take some resources to include with every span, which are a set of key-value pairs included in all the spans. Then pass the builder the processor you just created.

Add this technique to OTelSpans proper after the initializer:


internal func tracer(  // 1
  scopeName: String
) -> any Tracer {
  let instrumentationScopeVersion = "semver:0.1.0"  // 2
  
  let tracer = OpenTelemetry.instance.tracerProvider.get(  // 3
    instrumentationName: scopeName,
    instrumentationVersion: instrumentationScopeVersion)
  return tracer
}
  1. The new method takes a string as a parameter and returns an object conforming to the Tracer protocol.
  2. To get the tracer instance, you need to specify an instrumentation name, which is the parameter passed to the method, and a version, which you’re setting as a constant. “semver” is short for Semantic Version.
  3. Finally, you get the tracer object from OpenTelemetry and return it.

Now, to add a method that creates a span, add the following after the last method at the end of the class:


public func createSpan(  // 1
  scopeName: String,
  name: String
) -> (any Span) {
  var spanBuilder = tracer(scopeName: scopeName)  // 2
    .spanBuilder(spanName: name)
  
  let span = spanBuilder.startSpan()  // 3
  
  return span
}
  1. The method takes two string parameters: the first is the scope name you’ll use to get a tracer, and the second is the name of the span.
  2. Using the tracer(scopeName:) method you created earlier, you get the tracer, and through it a span builder, while defining a name for any span created from this builder.
  3. From the builder, get a span that’s already started. Starting the span is equivalent to setting the start date to Date.now.

Since OTelSpans keeps the shared property and the initializer private, you can make your feature easy to use by creating a class method that forwards the call to its own internal instance. Add this method:


public class func createSpan(
  scopeName: String,
  name: String
) -> (any Span) {
  shared.createSpan(
    scopeName: scopeName,
    name: name)
}
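With the forwarding class method in place, callers never need to touch the shared instance directly. Here’s a minimal usage sketch; the scope and span names are hypothetical, and you end the span yourself once the traced work completes:

```swift
// Create a span around some work, then end it when the work completes.
let span = OTelSpans.createSpan(
  scopeName: "com.example.networking",  // hypothetical instrumentation scope
  name: "fetchUserProfile")             // hypothetical span name

// ... perform the work you want to trace ...

span.end()  // sets the end time and hands the span to the span processor
```

Ending the span is what triggers the processor to forward it to your exporter, so make sure every span you start is eventually ended.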

Embed Amazon Quick Suite chat agents in enterprise applications



Organizations can face two significant challenges with conversational AI. First, users need answers where they work, in their CRM, help console, or analytics portal, not in separate tools. Second, implementing a secure embedded chat in their applications can require weeks of development to build authentication, token validation, domain protection, and global distribution infrastructure.

Amazon Quick Suite embedded chat helps solve the first challenge by bringing conversational AI directly into your applications, so users can query structured data, search documents, and trigger actions without switching tools.

In this post, we show you how to solve the second challenge with a one-click deployment solution that embeds the chat agents in enterprise portals using the Quick Suite Embedding SDK.

Solution overview

The solution deploys a secure web portal for the embedded chat using Amazon CloudFront for global content delivery, Amazon Cognito for OAuth 2.0 authentication, Amazon API Gateway for REST API endpoints, AWS Lambda for serverless API processing, and OpenID Connect (OIDC) federation for identity integration with Quick Suite.

The solution implements defense-in-depth security with multiple layers of protection: DDoS protection on CloudFront, a private Amazon Simple Storage Service (Amazon S3) bucket with origin access control helping prevent direct access to frontend assets, AWS WAF rate limiting protection on API Gateway, and JSON Web Token (JWT) signature validation using Amazon Cognito public keys before generating time-limited, user-specific embed URLs with least-privilege AWS Identity and Access Management (IAM) permissions.

The following diagram illustrates the solution architecture.

The workflow consists of the following steps:

  1. Users access the web portal URL, which routes to CloudFront.
  2. CloudFront uses origin access control to fetch HTML, CSS, and JavaScript files from a private S3 bucket.
  3. The web application checks for a valid authentication token and redirects unauthenticated users to the Amazon Cognito hosted UI for OAuth 2.0 login.
  4. Users enter credentials on the Amazon Cognito login page, which validates them and redirects back to the CloudFront URL with a single-use authorization code.
  5. The application extracts the authorization code and makes an HTTPS API call to API Gateway, which passes through AWS WAF rate limiting.
  6. API Gateway invokes a Lambda function with the authorization code.
  7. The Lambda function makes a server-to-server HTTPS call to the Amazon Cognito OAuth token endpoint, exchanging the authorization code for JWT tokens (ID token, access token, refresh token).
  8. The function validates the ID token’s cryptographic signature using the Amazon Cognito public keys JSON Web Key Set (JWKS) with thread-safe caching.

The following is a decoded JWT example:

{
  "at_hash": "abcdefifB5vH2D0HEvLghi",
  "sub": "12345678-abcd-1234-efgh-123456789012",
  "email_verified": true,
  "iss": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE123",
  "cognito:username": "12345678-abcd-1234-efgh-123456789012",
  "origin_jti": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
  "aud": "1a2b3c4d5e6f7g8h9i0j1k2l3m",
  "event_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "token_use": "id",
  "auth_time": 1704063600,
  "exp": 1704067200,
  "iat": 1704063600,
  "jti": "abcdef12-3456-7890-abcd-ef1234567890",
  "email": "user123@example.com"
}

  9. The Lambda function calls the AWS Security Token Service (AWS STS) AssumeRoleWithWebIdentity API with the verified ID token to assume the IAM web identity role and receive temporary AWS credentials.
  10. The function uses the temporary credentials to call the Quick Suite ListUsers API to verify that the user exists, then calls the GenerateEmbedUrlForRegisteredUser API to help generate a secure embed URL with domain restrictions.
  11. The function returns the embed URL in a JSON response with cross-origin resource sharing (CORS) headers through API Gateway to CloudFront. The following is an embed URL example:
    {"ChatEmbedUrl": "https://us-east-1.quicksight.aws.amazon.com/embedding/abcdefe827dd4ef8b4e1fb921db046c4/quick/chat?code=Abcdef....&identityprovider=quicksight&isauthcode=true", "user": "user123@example.com"}

  12. The CloudFront application uses the Quick Suite Embedding SDK to create an embedding context and render the chat interface in an HTML iframe with secure cross-origin communication.

You can deploy the solution with the following high-level steps:

  1. Deploy the serverless infrastructure using the AWS Cloud Development Kit (AWS CDK).
  2. Provision users in Amazon Cognito and Quick Suite.
  3. Share the Quick Suite assets (chat agent and associated connections, knowledge base).
  4. Access the web portal to use Quick Suite chat agents.

Prerequisites

The following prerequisites are required to deploy the solution demonstrated in this post:

Deploy serverless infrastructure using AWS CDK

Complete the following steps to deploy the serverless infrastructure using the AWS CDK:

  1. Clone the GitHub repository:
git clone git@github.com:aws-samples/sample-quicksuite-chat-embedding.git 
cd sample-quicksuite-chat-embedding

  2. Deploy the infrastructure:

You will be prompted to enter your AWS Region code, AWS CloudFormation stack ID and portal title, and your AWS CLI profile.

Provision users in Amazon Cognito and Quick Suite

Complete the following steps to provision users in Amazon Cognito and Quick Suite:

  1. Create an Amazon Cognito user in an Amazon Cognito user pool:
python scripts/create_cognito_user.py --profile  

  2. Create a federated user in Quick Suite:
python scripts/create_quicksuite_user.py --profile  

Share the Quick Suite chat agent

Complete the following steps to share your Quick Suite chat agent:

  1. Sign in to the Quick Suite console using credentials with the Quick Suite Creator Pro role.
  2. Choose Chat agents in the navigation pane.
  3. Select the agents you want to share (for example, AnyCompany Ecom order assistant) and choose Share.

  4. Search for the user name (for example, user123@example.com) you created earlier.
  5. Choose Share.

After sharing this agent, you also need to share each linked resource of the agent individually to ensure full functionality.

Access the web portal to use the Quick Suite chat agents

Complete the following steps to access the web portal and start using the chat agents:

  1. Look for the temporary password in the Amazon Cognito verification email.
  2. Access the CloudFront URL from your web browser with the user ID and temporary password.
  3. You will be prompted to change your password at your first login.

After a successful login, you can see My Assistant in the chat interface.

  4. Choose the Region to connect to the custom Quick Suite chat agents.

  5. To see the chat agents shared with you, choose Shared with me under Filter.

  6. Choose the agent you want and start chatting.

The following screenshots show chat interactions of a customer service representative tracking an example online order and processing its return as requested by a verified customer over the phone.

Clean up

To clean up your resources, delete the AWS resources you deployed:

Conclusion

This solution addresses the core challenges of embedding conversational AI at scale: securing authentication for thousands of concurrent users across global locations, maintaining enterprise-grade security with comprehensive audit trails, and simplifying deployment with automated infrastructure provisioning. You can customize the portal branding, adjust security policies, and integrate with existing identity providers. You can scale to thousands of concurrent users automatically while maintaining pay-as-you-go pricing.

To try this solution, clone the GitHub repository and deploy the complete infrastructure with one click to embed Quick Suite chat agents.


About the authors

Satyanarayana Adimula is a Senior Builder in AWS Generative AI Innovation & Delivery. Leveraging over 20 years of data and analytics expertise, he focuses on building agentic AI systems that enable large enterprises to automate complex workflows, accelerate decision-making, and achieve measurable business outcomes.