
ADT confirms data breach after ShinyHunters leak threat

Home security giant ADT has confirmed a data breach after the ShinyHunters extortion group threatened to leak stolen data unless a ransom is paid.

In a statement shared today, the company said it detected unauthorized access to customer and prospective-customer data on April 20, after which it terminated the intrusion and launched an investigation.

The investigation determined that personal information was stolen during the breach.


“The investigation confirmed that the information involved was limited to names, phone numbers, and addresses,” ADT told BleepingComputer.

“In a small percentage of cases, dates of birth and the last four digits of Social Security numbers or Tax IDs were included. Critically, no payment information (including bank accounts or credit cards) was accessed, and customer security systems were not affected or compromised in any way.”

ADT says the intrusion was limited and that it has contacted all affected individuals.

ShinyHunters leak site listing

The statement follows ADT’s listing on the ShinyHunters data leak site, where the attackers claimed to have stolen 10 million records containing customers’ personal information.

“Over 10M records containing PII and other internal corporate data have been compromised. Pay or Leak,” reads the data leak site.

“This is a final warning to reach out by 27 Apr 2026 before we leak, along with several annoying (digital) things that’ll come your way.”

ADT listing on the ShinyHunters data leak site

ADT did not confirm the volume of stolen data claimed by the attackers.

ShinyHunters told BleepingComputer that they allegedly breached ADT through a voice phishing (vishing) attack that compromised an employee’s Okta single sign-on (SSO) account. Using this account, the threat actors claimed, they accessed and stole data from the company’s Salesforce instance.

Since last year, the extortion group has been conducting widespread vishing campaigns that target employees’ and BPO agents’ Microsoft Entra, Okta, and Google SSO accounts.

After gaining access to a corporate SSO account, the threat actors steal data from connected SaaS applications such as Salesforce, Microsoft 365, Google Workspace, SAP, Slack, Adobe, Atlassian, Zendesk, Dropbox, and many others.

The stolen data is then used to extort the company into paying a ransom; otherwise, the data may be leaked.

ADT previously disclosed data breaches in August and October 2024 that exposed customer and employee information.



Watch the Artemis II astronauts have fun with bubbles

While space exploration is serious and sometimes dangerous scientific work, that doesn’t mean there isn’t room for fun. Something as mundane as a little ball of water can be supremely entertaining.

In a video shared by NASA, Artemis II astronauts Reid Wiseman, Christina Koch, and Jeremy Hansen are seen watching a ball of water floating around in zero gravity. The water itself is moving around and shaping the light around it in some surprisingly complex ways.

Without any force pulling the water downward, surface tension molds the liquid into a floating sphere. Light then bends inside the bubble, distorting and inverting images. According to retired NASA astronaut Karen Nyberg, water like this offers a simple physics lesson and a reminder that what we see depends on how we look at it.

Wiseman is also no stranger to playing with water in space. During a mission in 2014, he and other crew members aboard the International Space Station (ISS) explored water’s surface tension in microgravity. They even went so far as to put a waterproof camera inside a bubble to get a water’s-eye view of zero-g.

Space Station Astronauts Grow a Water Bubble in Space

On April 10, the Artemis II crew (Commander Wiseman, pilot Victor Glover, and mission specialists Koch and Hansen) splashed down after their historic 10-day mission. Along the way, they surpassed Apollo 13’s record for the farthest crewed spaceflight and captured breathtaking images of the far side of the moon. They also ate a lot of hot sauce and troubleshot relatable bathroom troubles. Their scientific work will also help prepare future astronauts to live and work on the moon, as NASA builds a future Moon Base and looks toward further expeditions to Mars.

 


Laura is Popular Science’s news editor, overseeing coverage of a wide variety of subjects. Laura is particularly fascinated by all things aquatic, paleontology, nanotechnology, and exploring how science influences daily life.


ParaRNN: Large-Scale Nonlinear RNNs, Trainable in Parallel

Recurrent neural networks (RNNs) are naturally suited to efficient inference, requiring far less memory and compute than attention-based architectures, but the sequential nature of their computation has historically made it impractical to scale RNNs to billions of parameters. A new advance from Apple researchers makes RNN training dramatically more efficient, enabling large-scale training for the first time and widening the set of architecture choices available to practitioners designing LLMs, particularly for resource-constrained deployment.

In ParaRNN: Unlocking Parallel Training of Nonlinear RNNs for Large Language Models, a new paper accepted to ICLR 2026 as an Oral, Apple researchers share a new framework for parallelized RNN training that achieves a 665× speedup over the standard sequential approach (see Figure 1). This efficiency gain enables the training of the first 7-billion-parameter classical RNNs that achieve language modeling performance competitive with transformers (see Figure 2).

To accelerate research in efficient sequence modeling and to enable researchers and practitioners to explore new nonlinear RNN models at scale, the ParaRNN codebase has been released as an open-source framework for automatic training-parallelization of nonlinear RNNs.

Speedup from Parallel RNN Training

Figure 1: Runtime comparison for parallel and sequential application of the adapted ParaGRU and ParaLSTM cells as a function of input sequence length. ParaRNN unlocks training-time parallelizability, allowing dramatic speedups over vanilla sequential application.

Performance of Large-Scale Classical RNNs

Figure 2: Perplexity (lower is better) at various model sizes for Mamba2, ParaLSTM, ParaGRU, and a transformer. With large-scale training enabled by parallelization, the adapted GRU and LSTM models show perplexity competitive with the transformer and Mamba2.

The computational cost of the attention mechanism in a transformer grows quadratically with sequence length, whereas the computation required for a single forward pass through an RNN is the same regardless of how much context came before. This allows constant-time token generation during inference, making RNNs particularly attractive for efficient deployment.

But there’s a catch: this efficiency advantage only applies at inference time. Unlike transformers, RNN training cannot be parallelized along the sequence length.

The very property that makes RNNs efficient at inference, their sequential, recurrent structure, becomes a fundamental bottleneck during training. Unlike the attention mechanism, which can process all tokens in a sequence simultaneously, an RNN application must be unrolled step by step, as illustrated in Figure 3.

Fundamental trade-off between RNNs and attention

Figure 3: The fundamental trade-off between RNNs and attention. RNNs maintain a compact hidden state h_l.

Modern recurrent architectures have leveraged a clever workaround to enable sequence parallelization: simplifying the recurrence relation to be purely linear in the hidden state. Selective state space models (SSMs) like Mamba use a recurrence of the form:

SSMs: h_l = A_l h_{l-1} + B_l x_l,

whereas classical RNNs include nonlinearities:

RNNs: h_l = σ_l(h_{l-1}, x_l).

Linearity enables parallelization because linear operations are associative, meaning the order in which you combine them does not affect the final result, just as (2+3)+4 = 2+(3+4).
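As a toy illustration of why associativity matters, the linear recurrence h_l = a_l h_{l-1} + b_l x_l can be viewed as the composition of per-step affine maps t → a·t + c. Because composing affine maps is associative, the prefix compositions can be grouped arbitrarily, which is exactly what a parallel (tree-structured) scan exploits. The sketch below (plain Python, written sequentially for clarity; names are illustrative) checks that the scan form reproduces the step-by-step recurrence:

```python
def combine(f, g):
    """Compose affine maps f=(a1,c1), g=(a2,c2): apply f then g.

    g(f(t)) = a2*(a1*t + c1) + c2 = (a2*a1)*t + (a2*c1 + c2).
    This operation is associative, so it can run as a tree reduction.
    """
    a1, c1 = f
    a2, c2 = g
    return (a2 * a1, a2 * c1 + c2)

def sequential(a, b, x, h0=0.0):
    """Ground truth: unroll h[l] = a[l]*h[l-1] + b[l]*x[l] step by step."""
    h, out = h0, []
    for al, bl, xl in zip(a, b, x):
        h = al * h + bl * xl
        out.append(h)
    return out

def scan(a, b, x, h0=0.0):
    """Same recurrence via prefix compositions of affine maps."""
    maps = [(al, bl * xl) for al, bl, xl in zip(a, b, x)]
    out, acc = [], (1.0, 0.0)  # identity affine map
    for m in maps:
        acc = combine(acc, m)  # in parallel: a tree of combine() calls
        A, C = acc
        out.append(A * h0 + C)
    return out
```

Because `combine` is associative, the accumulation loop can be replaced by a logarithmic-depth tree reduction across workers, which is the parallel-scan trick Figure 4 illustrates.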

A Compromise for Parallelizability: Linearity

Figure 4: How to turn a sequential associative operation (like a cumulative sum) into a parallel one: the computation fans out across many parallel workers that each handle a partial result, and the partial results are progressively merged together.

Linearity, however, comes with its own set of limitations: the variety of hidden-state evolutions that can be modeled by a linear system is reduced, which has a direct impact on the overall expressivity of the RNN model. The question then becomes: must we sacrifice expressivity for speed, or can we have both?

The key insight is that we can have the best of both worlds by adapting Newton’s method, a classic numerical technique for solving nonlinear equations. Rather than tackling the complex system directly, Newton’s method works by iteratively building and solving an approximation (in the form of a more tractable linear system) that is used to progressively refine the solution to the target nonlinear one.

In Figure 5 we illustrate how this applies to our scenario. Instead of thinking of the RNN as a series of sequential steps, we reframe the entire sequence as a single system of equations, where the hidden states across all steps are unknowns to solve for simultaneously. Newton’s method solves this system iteratively, approximating the nonlinearities with a linearization given by their local derivatives (i.e., their Jacobians). This is where the magic happens: the linearized RNN system has exactly the same form as a linear SSM, with the Jacobians playing the role of the state matrices A_l.

While this approach introduces some overhead compared with computing the RNN application sequentially, if the Newton iterations converge quickly enough (which we observed empirically for well-designed RNN models), we can effectively recover the full nonlinear RNN behavior in a fraction of the time, thanks to parallelization.
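A minimal numerical sketch of this idea (scalar hidden state, σ = tanh; function names are illustrative, not the paper’s implementation) solves all hidden states at once. Each Newton step linearizes the residual F_l = h_l − σ(w·h_{l−1} + u·x_l); the update then satisfies δ_l = J_l·δ_{l−1} − F_l, which is exactly a linear, SSM-form recurrence (written sequentially here, but parallelizable by scan):

```python
import numpy as np

def rnn_sequential(w, u, x, h0=0.0):
    """Ground truth: unroll h[l] = tanh(w*h[l-1] + u*x[l]) step by step."""
    h = np.empty_like(x)
    prev = h0
    for l in range(len(x)):
        prev = np.tanh(w * prev + u * x[l])
        h[l] = prev
    return h

def rnn_newton(w, u, x, h0=0.0, iters=3):
    """Solve the whole trajectory at once with Newton's method.

    Residual: F[l] = h[l] - tanh(w*h[l-1] + u*x[l]) = 0 for all l.
    Each Newton step linearizes F; the update obeys the *linear*
    recurrence delta[l] = jac[l]*delta[l-1] - F[l], i.e. an SSM whose
    state matrices are the Jacobians (solvable by parallel scan).
    """
    L = len(x)
    h = np.zeros(L)  # initial guess
    for _ in range(iters):
        prev = np.concatenate(([h0], h[:-1]))
        sigma = np.tanh(w * prev + u * x)
        jac = (1.0 - sigma**2) * w  # d sigma / d h[l-1], the SSM "A_l"
        F = h - sigma
        delta = np.zeros(L)
        d_prev = 0.0  # h0 is fixed, so there is no update at l = -1
        for l in range(L):
            d_prev = jac[l] * d_prev - F[l]
            delta[l] = d_prev
        h = h + delta
    return h
```

The inner loop over `l` is the linearized SSM solve; in ParaRNN that is the piece replaced by a parallel reduction, while the outer Newton loop runs for only a handful of iterations.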

Newton’s Method Applied

Figure 5: Newton’s method applied to RNN parallelization. We first collate the recurrence relations for all recurrent steps into a single system involving every hidden state simultaneously. The system is then iteratively linearized and solved, recovering a better and better approximation to the target hidden-state evolution. Crucially, the linearizations themselves represent the evolution of an SSM, which can be solved efficiently via parallel scan.

In our experiments, we apply this technique to adaptations of two classical RNNs, the GRU and LSTM cells, and consistently achieve convergence with just three iterations. In other words, with three carefully designed, parallel SSM applications, we can recover the same hidden-state evolution as the sequential application of a nonlinear RNN, dramatically reducing wall-clock training time.

While the ParaRNN framework can in principle be applied to any RNN, some careful engineering is still required to make it practical for large-scale training. The parallel reduction algorithm at the heart of the method needs to efficiently assemble, store, and multiply together the Jacobian matrices arising from the linearization. For generic RNNs, these Jacobians are dense, which makes their storage grow quadratically, and their multiplication cubically, with hidden-state size: a cost intractable for large-scale models.

We address this by following the design principles of modern SSMs like Mamba, and introduce the ParaGRU and ParaLSTM cells: adaptations of the classical GRU and LSTM cells that yield structured Jacobians. In particular, we simplify the matrices in the cells’ definitions to have nonzero elements only on the main diagonal. This ensures that their Jacobians are diagonal (for ParaGRU) or block-diagonal (for ParaLSTM), as outlined in Figure 6.

Figure 6: Vanilla GRU and LSTM cells include mixing among hidden-state elements, which makes their Jacobians dense in general. In our work, we simplify the cell definitions to prevent mixing. The resulting Jacobians are heavily structured: a diagonal matrix for the GRU cell, and a 2×2 block-diagonal matrix for the LSTM cell.
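To see what this structure buys, consider a toy elementwise cell (a hypothetical example, not the actual ParaGRU definition): because each hidden component depends only on the matching component of the previous state, the Jacobian with respect to the previous state is diagonal by construction, so it can be stored in O(d) and composed with an elementwise product instead of an O(d³) matrix multiply. A finite-difference check confirms the diagonal form:

```python
import numpy as np

def elementwise_cell(h_prev, x, a, b):
    # No mixing across hidden dimensions: component i of the output
    # depends only on component i of h_prev.
    return np.tanh(a * h_prev + b * x)

def diag_jacobian(h_prev, x, a, b):
    # Diagonal of d(cell)/d(h_prev), stored as a length-d vector.
    s = np.tanh(a * h_prev + b * x)
    return (1.0 - s**2) * a

# Composing the Jacobians of successive steps is then an elementwise
# product of these vectors, rather than a dense matrix product.
```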

To get the best speedups, we then implement custom CUDA kernels that perform efficient parallel reduction of Jacobians with these structures. In the design of our kernels, we closely follow the GPU memory hierarchy to keep the data as local as possible. As a result, our fully fused implementation handles Newton iterations, system assembly, and parallel reduction in a single kernel, achieving remarkable speedups over Mamba. In Figure 7 we provide timing results for the three implementations we offer for applying our ParaRNN cells in parallel: in pure PyTorch, in PyTorch with CUDA-accelerated reduction, and fully fused in CUDA.

Forward Pass through ParaGRU and ParaLSTM Cells

Figure 7: Timing for a forward pass through a ParaGRU (left) and a ParaLSTM (right) cell, for various implementations of the ParaRNN parallelization method: in pure PyTorch, using custom CUDA kernels for only the parallel reduction operation, and with a fully fused kernel for the whole Newton routine. The fully fused implementations are competitive with Mamba’s across all sequence lengths.

To validate the effectiveness of the ParaRNN framework in enabling practical training of nonlinear RNNs at scale, we trained models ranging from 400M to 7B parameters on language modeling tasks. Our goal is to test how classical RNNs perform as LLMs once large-scale training becomes feasible via parallelization.

The results show that even classical RNNs make for competitive LLMs when trained at the 7B scale. In terms of both sheer perplexity and downstream task performance (see Table 1), ParaGRU and ParaLSTM achieve scores comparable to transformers and state-of-the-art SSMs.

Model        #params  ↓PPL  ↑Arc-C       ↑HSwag       ↑OBQA        ↑WinoG       ↑PiQA  ↑MMLU
                            (25)   (3)   (10)   (0)   (10)   (0)   (5)    (0)   (0)    (0)
Mamba2       6.96B    8.62  40.02  39.59 69.78  69.68 42.20  42.20 65.19  63.77 76.66  26.61
ParaLSTM     6.76B    9.16  37.46  36.52 62.47  62.85 42.20  38.80 57.70  59.12 75.19  25.31
ParaGRU      6.76B    9.19  39.68  36.77 65.85  65.75 42.20  40.40 61.40  59.83 76.66  25.29
Transformer  6.89B    9.55  34.30  33.36 62.98  62.20 40.00  37.20 61.48  60.85 74.97  23.12

Table 1: Model accuracy on downstream tasks from the lm-eval-harness evaluation suite (number of shots in brackets). Overall, the RNN models reach performance comparable to Mamba2 and the transformer across the board.

The real advantage of recurrent models shines at inference time, as shown in Figure 8. The constant-time token generation of RNNs means sustained high throughput regardless of context length, making them an appealing choice for applications where fast generation is paramount.

Figure 8: In contrast to the transformer, whose generation time increases with sequence length, recurrent models like Mamba and RNNs require a constant time, independent of sequence length, to generate each new token, resulting in higher throughput.

Moreover, the ability to include nonlinearities in the recurrence-step definition dramatically boosts performance on synthetic tasks requiring state-tracking and retrieval capabilities. These tasks probe a model’s ability to maintain and update meaningful information in its internal state, and to store and recover relevant information when needed. The improved performance shown in Table 2 indicates that nonlinear RNNs provide expressivity superior to linear ones, and are worth considering when designing more powerful models.

Model        MQAR   k-hop  Parity
Transformer  100%   78%    53%
Mamba2       100%   98%    51%
ParaGRU      100%   100%   100%
ParaLSTM     100%   100%   100%

Table 2: Including nonlinearities in the recurrence definition yields superior performance on state-tracking and recall tasks over purely linear models like Mamba.

Ultimately, our experiments show that scalability was the missing piece in the RNN puzzle all along. At billion-parameter scale, classical RNNs perform as well as modern language models, and boast superior expressivity and faster throughput to boot. These results open the door to reconsidering nonlinear recurrence in modern sequence modeling: the nonlinearity-versus-training-efficiency trade-off is not fundamental; it was just a consequence of computational limitations, which we can now overcome.

To accelerate progress and enable further exploration of nonlinear RNN models at scale, the ParaRNN codebase has been released as an open-source framework. Researchers and practitioners can focus on designing the RNN cell; the framework takes care of everything else.

To define a custom cell in ParaRNN, just implement its recurrence step:

class MyRNNCell(BaseRNNCell):
    def recurrence_step(self, h, x, system_parameters):
        h_new = ...
        return h_new

The framework automatically handles the application of Newton’s method, Jacobian assembly, parallel reduction routines, and optimizations for structured Jacobians.

  • Pure PyTorch: for prototyping. Uses automatic differentiation and works for any RNN, but is not optimized for scale.
  • CUDA-accelerated: for generic cells with (block-)diagonal Jacobians. Uses custom kernels for the parallel reduction.
  • Fully fused: a single-kernel implementation fusing the whole Newton routine. Requires a CUDA implementation of the recurrence step.

The modular design makes it easy to experiment with custom cells, Jacobian structures, and solver configurations: it suffices to inherit from the available base classes and implement the specific reduction operations.
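For illustration, a concrete cell in the style of the snippet above might look as follows. The `BaseRNNCell` stand-in here is a minimal sketch with a sequential reference loop, not the real framework class (whose Newton and parallel-reduction machinery sits behind the same `recurrence_step` interface), and `DiagonalTanhCell` is a hypothetical example, not one of the paper’s cells:

```python
import numpy as np

class BaseRNNCell:
    """Minimal stand-in: the real framework would supply Newton's method
    and the parallel reduction behind this interface."""
    def run_sequential(self, h0, xs, system_parameters):
        # Reference implementation: apply the cell step by step.
        h, states = h0, []
        for x in xs:
            h = self.recurrence_step(h, x, system_parameters)
            states.append(h)
        return states

class DiagonalTanhCell(BaseRNNCell):
    # Elementwise (diagonal) update, so the Jacobian w.r.t. h stays
    # diagonal and the parallel reduction remains cheap.
    def recurrence_step(self, h, x, system_parameters):
        a, b = system_parameters
        return np.tanh(a * h + b * x)
```

Calling `DiagonalTanhCell().run_sequential(h0, xs, (a, b))` returns the full hidden-state trajectory; in the actual framework the same cell definition would instead be driven by the parallel Newton solver.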

RNN application is no longer inherently sequential. For the first time, classical RNNs can be trained at the scale of billions of parameters, with training times and performance matching modern architectures, unlocking the inference-efficiency and expressivity advantages that made recurrence appealing in the first place.

ParaRNN opens the door to exploring nonlinear recurrence at scale: experimenting with novel architectures and pushing the boundaries of what is possible with recurrent models is now easier than ever. Nonlinear RNNs are back, and ready to scale.

Meta’s compute grab continues with agreement to deploy tens of millions of AWS Graviton cores


Meta is continuing its compute grab as the agentic AI race accelerates into a sprint.

Today, the company announced a partnership with Amazon Web Services (AWS) that will bring “tens of millions” of AWS Graviton5 cores (one chip contains 192 cores) into its compute portfolio, with the option to expand as its AI capabilities grow. This will make the Llama builder one of the largest Graviton customers in the world.

The move builds on Meta’s expansive partnerships with nearly every chip and compute provider in the business. It is working with Nvidia, Arm, and AMD, as well as building its own internal training and inference accelerator chip.

“It feels very difficult to keep track of what Meta is doing, with all of these chip deals and announcements around in-house development,” said Matt Kimball, VP and principal analyst at Moor Insights & Strategy. This makes for “exciting times that tell us just how incredibly valuable silicon is right now.”

Controlling the system, not just scale

Graphics processing units (GPUs) are essential for large language model (LLM) training, but agentic AI requires a whole new workload capability. CPUs like Graviton5 are rising to this challenge, supporting intensive workloads like real-time reasoning, multi-step tasks, frontier model training, code generation, and deep research.

AWS says Graviton5 has the ability to handle “billions of interactions” and to coordinate complex, multi-stage agentic tasks. It is built on the AWS Nitro System to support high performance, availability, and security.

“This is really about control of the AI system, not just scale,” said Kimball. As AI evolves toward persistent, agentic workloads, the role of the CPU becomes “quite meaningful”; it serves as the control plane, handling orchestration, managing memory, scheduling, and other intensive tasks across accelerators.

“This is especially true in agentic environments, where the workloads will be less linear and more stateful,” he pointed out. So ensuring a supply of these resources simply makes sense.

Reflecting Meta’s diversified approach to hardware

The agreement builds on Meta’s long-standing partnership with AWS, but it also reflects what the company calls its “diversified approach” to infrastructure. “No single chip architecture can efficiently serve every workload,” the company emphasized.

Proving the point, Meta recently announced four new generations of its MTIA training and inference accelerator chip and signed a huge deal with AMD to tap into 6 GW worth of CPUs and AI accelerators. It also entered into a multi-year partnership with Nvidia to access millions of Blackwell and Rubin GPUs and to integrate Nvidia Spectrum-X Ethernet switches into its platform, and it was one of Arm’s first major CPU customers.

In the wake of all this, Nabeel Sherif, a principal advisory director at Info-Tech Research Group, posed the burning question: “What are they going to do with all this capacity?”

Primarily it will support Meta’s internal experimentation and innovation, he said, but it also lays the groundwork, and provides the capacity, for Meta to offer its own agentic AI services to the market, for instance its Llama AI model as an API.

“What these [services] will look like and what platforms and tools they’ll use, as well as what guardrails they’ll provide to users, is still unclear, but it’s going to be fascinating to see it develop,” said Sherif.

The expanded capacity will enable a wide range of use cases and experimentation across various architectures and platforms, he said. Meta will have many options, and access to supply, in an environment currently characterized not only by a wide variety of new CPU approaches but also by significant supply chain constraints. The AWS deal should be viewed as a complement to Meta’s partnerships and investments in other platforms such as Arm, Nvidia, and AMD.

Kimball agreed that the move is “most definitely additive,” not a replacement or substitution. Meta isn’t moving off GPUs or accelerators; it’s building around them. “This is about assembling a heterogeneous system, not picking a single winner,” he said. “In fact, I think for most, heterogeneity is critical to long-term success.”

Nvidia still dominates training and much of inference, while AMD is becoming “more and more relevant at scale,” Kimball noted. Arm, meanwhile, whether through CPUs, custom silicon, or other efforts, gives Meta architectural control, and Graviton5 fits into that mix as a “cost- and efficiency-optimized general-purpose compute layer.”

A question of strategy

The more interesting question is one of strategy: does this signal that Meta is becoming a compute provider? Kimball doesn’t think so, noting that the company likely isn’t looking to compete directly with hyperscalers as a general-purpose cloud. “This is more about vertical integration of their own AI stack,” he said.

The move gives Meta the ability to support internal workloads more efficiently, as well as the infrastructure foundation to expose more of that capability externally, whether through APIs, partnerships, or other means, he said.

And there’s a cost dynamic here, too, Kimball noted. As inference becomes persistent, especially with agentic systems, the economics shift away from peak floating-point operations per second (FLOPS, a measure of compute performance) and toward sustained efficiency and total cost of ownership (TCO).

CPUs like Graviton5 are well positioned for the parts of that workload that don’t require accelerators but still need to run continuously. “At Meta’s scale, even small efficiency gains per workload compound quickly,” Kimball pointed out.

For developers and enterprise IT, the signal is fairly clear, he noted: the AI stack is getting more heterogeneous, not less. Enterprises are going to see tighter coupling between CPUs, GPUs, and specialized accelerators, with workloads increasingly split across them based on behavior (prefill versus decode, stateless versus stateful, burst versus persistent).

“The implication is that infrastructure decisions must become more workload-aware,” said Kimball. “It’s less about ‘which cloud?’ and more about ‘where does this specific part of the application run most efficiently?’”

This article originally appeared on NetworkWorld.

7 Practical OpenClaw Use Cases You Should Know




 

Introduction

 
OpenClaw is quickly becoming one of the most talked-about open-source agent systems right now. But beyond the hype, the real question is simple: what are people actually using it for?

At its core, OpenClaw helps turn AI from something you chat with into something that can actually do work for you. It connects messaging apps, tools, memory, automation, and agents into one system, so instead of jumping between platforms all day, you can trigger tasks from places you already use, like Telegram, WhatsApp, or Discord.

In this article, we look at seven practical ways people are using OpenClaw to automate tasks, stay organized, and improve productivity with real agent workflows.

 

1. Finance and Trading Bots

 
One of the most exciting OpenClaw use cases is finance and trading bots powered by the latest large language models (LLMs).

People are using it to monitor market news, track price moves, follow social sentiment, and send useful updates straight to their phone. Instead of checking multiple dashboards and feeds all day, OpenClaw can help pull everything into one ongoing workflow.

With newer LLMs, these bots can do more than just send alerts. They can summarize signals, compare sources, and highlight why something matters, making market research faster and more useful.

Showcase link: Polymarket Autopilot.

 

2. Remote Coding and Dev Workflows

 
Another big use case is remote development.

People are using OpenClaw to send instructions to coding agents, run tasks on their machine, edit files, troubleshoot issues, and manage workflows even when they’re away from their laptop. That means your phone or chat app can become a control layer for development work.

This is a big shift in how people think about productivity. Instead of needing to sit down and do every little step yourself, you can hand off certain tasks, check progress remotely, and keep work moving.

Project link: AionUi

 

3. Daily Briefings and Automations

 
This is one of the easiest and most practical ways people are using OpenClaw today.

Instead of waiting until you ask for something, OpenClaw can be set up to send useful updates on a schedule. That could be a morning briefing, a reminder, a task summary, a news roundup, or even system alerts.

It’s a simple idea, but a powerful one. A lot of productivity gets lost in checking things manually. When the right information shows up automatically, it removes friction and helps people stay focused.

Showcase link: Custom Morning Brief
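As a generic illustration (plain Python, not OpenClaw’s actual API; names are invented for this sketch), a scheduled briefing boils down to a function that assembles the day’s items into one message, which an agent then pushes to your chat app on a timer:

```python
import datetime

def morning_brief(name, events, reminders):
    """Assemble a plain-text morning briefing from today's events
    (time, description) pairs and a list of reminder strings."""
    today = datetime.date.today().isoformat()
    lines = [f"Good morning, {name}! Briefing for {today}:"]
    lines += [f"  - {time} {event}" for time, event in events]
    if reminders:
        lines.append("Reminders: " + "; ".join(reminders))
    return "\n".join(lines)

# An agent would call this on a schedule and deliver the result to
# Telegram, WhatsApp, or Discord.
```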

 

4. Personal Memory and Second-Brain Systems

 
A lot of people are also using OpenClaw as a personal memory layer.

They use it to capture notes, ideas, reminders, and context over time, then search or retrieve that information later. Instead of letting thoughts disappear into scattered apps and documents, OpenClaw can help keep them in one system that’s easier to access.

This is where OpenClaw starts to feel less like a chatbot and more like a second brain. It helps people keep track of ongoing context, not just one-off conversations.

Showcase link: Second Brain

 

5. Research and Information Pipelines

 

OpenClaw is also being used to build research workflows.

People are using it to gather information, summarize sources, organize findings, and turn raw information into something more useful. That might mean monitoring a topic, reviewing papers, validating ideas, or collecting insights from different places.

This kind of workflow saves a lot of time, because the research process usually involves too many tabs, tools, and repeated steps. OpenClaw helps pull all of that into one flow.

Project link: AutoResearchClaw

 

6. Multi-Agent Systems

 

One of the reasons OpenClaw stands out is that it isn’t limited to a single agent.

People are experimenting with setups where one agent plans, another executes, another reviews, and another reports back. That makes it possible to break larger tasks into smaller roles and create more structured automation.

This is where things start becoming more powerful. Instead of relying on one general assistant to do everything, users can create specialized workflows where each agent has a role.

Project link: agentscope-ai/HiClaw

 

7. Automating Business Operations

 

OpenClaw can also be getting used for on a regular basis enterprise operations.

That features issues like organizing leads, drafting outreach, dealing with buyer relationship administration (CRM)-style duties, summarizing conferences, monitoring motion gadgets, and serving to small groups automate routine work. A number of this isn’t flashy, however it’s precisely the sort of work automation is helpful for.

For many individuals, the enchantment is easy: fewer repetitive duties, much less context switching, and extra time spent on precise decision-making.

Venture hyperlink: DenchClaw
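One of the unflashy tasks mentioned above, pulling action items out of meeting notes, is easy to sketch. This is a hypothetical illustration (the function name and the `TODO`/`Action` line convention are assumptions for the example, not part of any real product):

```python
import re

# Matches a leading "TODO" or "Action" marker plus any trailing
# colon, dash, or whitespace, case-insensitively.
MARKER = re.compile(r"(?i)^(todo|action)\b[\s:\-]*")

def extract_action_items(notes: str) -> list[str]:
    """Pull lines that look like action items out of meeting notes."""
    items = []
    for line in notes.splitlines():
        line = line.strip()
        if MARKER.match(line):
            items.append(MARKER.sub("", line))
    return items

notes = """Discussed Q3 pipeline.
TODO: send follow-up email to new leads
Action - update CRM stages
Next meeting Friday."""
print(extract_action_items(notes))
# ['send follow-up email to new leads', 'update CRM stages']
```

In practice an agent would feed the extracted items into a tracker or CRM instead of printing them.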

 

Final Thoughts

 
OpenClaw is still early, but the way people are already using it is a good sign of where agent systems are heading. From trading bots and research workflows to memory systems and business automation, the real value comes from connecting AI to useful actions.

What makes it stand out isn't just that it can answer questions, but that it can monitor, organize, automate, and report back through tools people already use every day. The examples linked in this article are just that: examples. They show what is possible, not the full limit of what OpenClaw can do.

That's part of the appeal. Instead of relying on one fixed tool or a single extension, people are using OpenClaw to create custom workflows that fit the way they actually work. You can even use OpenClaw to help build a solution for almost any workflow you have in mind. From there, the real work is testing, refining, and optimizing it so it works well for your needs.

That shift is what makes OpenClaw feel less like a demo and more like something genuinely useful. People are not just installing tools. They're building their own systems around the way they work best.
 
 

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in technology management and a bachelor's degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.


Altra Promo Codes: Get 20% Off Plus Free Shipping



Altra Running’s unique wide toe box and zero-drop heel-toe design don’t facilitate marathon PRs or punchy 5Ks. But there’s a reason why Leonardo DiCaprio chose to wear all-weather Lone Peaks when playing Bob Ferguson in One Battle After Another. They’re comfortable, rugged, and the zero-drop design keeps you from getting tendonitis. Altras are the perfect shoe for surviving in an off-grid dystopian war, even if they’re not at the top of the list when it comes to the best running shoes.

Get Up to 50% Off at the Altra Running Sale

The Lone Peaks are iconic and instantly recognizable. But don’t worry, if you don’t want the exact model that Bob Ferguson has, the hiking version of the Lone Peaks is also on sale for $36 off. In fact, there are tons of different styles of men’s shoes right now, including up to 50% off select styles.

Save 20% on Select Altra Running Shoes

Maybe you’re investigating Altra running shoes because a more widely known brand, like Brooks or Nike, has a toe box that’s just too small. In that case, Altra doesn’t exclusively make trail running shoes. The Fwd Via is Altra’s version of the Brooks Glycerin, the ultra-cushioned road running shoe. It’s also on sale, along with numerous running shoes at 20% off.

Unlock 15% Off: SMS Welcome Offer

One of the easiest ways to save at Altra Running is to sign up for text messages from them. When you sign up, you’ll receive an Altra SMS welcome offer that’s good for 15% off your next purchase. Once you receive it and you’re ready to buy, you’ll just need to enter the Altra promo code at checkout and click “Apply A Promo Code.” Just know that the Altra promo code is valid for only 30 days after you receive it and is valid for one-time use only. (Plus, it’s not valid at any third-party retail store or website.)

Get a 10% Altra Promo Code When You Sign Up

I’ve run in a version of the Altra Lone Peaks for a decade. Where I live is the unofficial trail running capital of the US, and everyone sports a pair of these shoes while hiking or at the bar. It’s totally worth it to sign up for the loyalty program and get the Altra Running promo code. (I also do this with my kids, who totally wear Salomons.) When you sign up for the Altra loyalty program, you’ll get 10% off as a thank-you.

10% Off Altra Olympus 275

The Altra Olympus 275 for both men and women just might be the hottest trail running shoe of the season. The Altra Olympus 275 is best for technical trail and all-day trail running, with increased durability, protection, and lightweight construction to face the toughest trails. Plus, it has a custom-designed MATRYX one-piece woven upper that’s both lightweight and protective. A grippy Vibram Megagrip outsole helps with traction on wet and dry surfaces to ensure that no matter what type of terrain or weather conditions you encounter on the trail, you’ll be set. Plus, when you sign up to be an Altra member, you’ll get 10% off your Altra Olympus 275 pair.

Score Free Standard Shipping on All Altra Orders

You can get free standard delivery in 5-7 business days (however, there’s no weekend delivery) when you order through Altra! If you need your shoes in a hurry, there’s 2-day shipping (with no weekend delivery) for $16.95, and next-day delivery (Saturday and Sunday orders ship on Mondays) for $22.

Shop the New Altra Running Shoes

Altra Running has just dropped their new line in time for winter, The Frozen Coast Collection, with colorways inspired by the gorgeous, icy tundra of Iceland. Some running shoes we’ve been eyeing in this collection include the Lone Peak 9+, with a Vibram Megagrip outsole for fierce grip and midsole foam for a comfortable connection to the ground below. And there’s also the 9+ GTX, also equipped with a Vibram Megagrip outsole that gives tons of traction in wet or dry conditions, an Altra EGO midsole for extra cushion, and a shortened toe cap for extended flexibility, whether you’re hiking or trail running.

Stata Conference Chicago 2011 Call for Presentations



The 2011 Stata Conference will be held on July 14 and 15 at the University of Chicago’s Gleacher Center. I’ve enjoyed meeting many enthusiastic Stata users at previous Stata Conferences, and I’m looking forward to seeing both familiar and new faces this year in Chicago.

The organizing committee recently posted a call for presentations on Statalist. That posting is included below.

To submit an abstract for a presentation, or to register for the conference, visit the conference webpage.

From: Phil Schumm <[email protected]>
To: Statalist <[email protected]>
Subject: st: Stata Conference 2011 in Chicago
Date: Wed, 15 Dec 2010 08:21:32 -0500

On behalf of the organizing committee, I would like to invite everyone to participate in the Stata Conference 2011, to be held July 14-15th in Chicago. The meeting will be held in the University of Chicago’s Gleacher Center, right on the Chicago River in the heart of downtown. Chicago is a great place to visit in the summer, and the location of the conference will make it easy to take advantage of all the city has to offer.

Below is the call for presentations. This year’s organizing committee consists of Lisa Barrow, Scott Long, Richard Williams, and myself. Please contact one of us if you would like to discuss an idea for a presentation or have questions about the program format. For those of you who haven’t attended a Stata users group meeting before, giving a presentation is a great opportunity to share what you are doing in Stata with others, and to get feedback from other knowledgeable users and from StataCorp developers. And, as an added bonus, if your abstract is accepted for presentation, the conference registration fee will be waived (presenting author only).

We look forward to seeing everyone in Chicago!

-- Phil

    Announcement and call for presentations

The Stata Conference 2011 will be held at the University of Chicago Graduate School of Business’s Gleacher Center. The Gleacher Center is located downtown on the bank of the Chicago River, just steps from Michigan Avenue and within walking distance of most downtown attractions.

Stata users’ meetings are enjoyable and rewarding for Stata users at all levels and from all disciplines. This year’s program will consist of a mixture of user presentations, longer talks by invited presenters, and talks by StataCorp developers. In addition, the program will include the ever-popular “Wishes and Grumbles” session in which users have an opportunity to share their comments and suggestions directly with developers from StataCorp.

All users are encouraged to submit abstracts for possible presentations. Presentations on any Stata-related topic will be considered, including (but not limited to) the following:

  • new user-written commands, including commands for modeling and estimation, graphical analysis, data management or reporting
  • use or evaluation of existing Stata commands
  • methods for teaching statistics with Stata or Stata use itself
  • case studies of Stata use in novel areas or applications
  • surveys or critiques of Stata facilities in specific fields
  • comparisons of Stata to other software, or use of Stata together with other software

User presentations should be either 15 or 25 minutes long, each followed by 5 minutes for questions. Longer talks will be at the discretion of the scientific committee.

Please submit an abstract of no more than 200 words (ASCII text, no math symbols) by using the web submission form at http://repec.org/chi11/chi11.php. All abstracts must be received by March 14, 2011. Please be sure to include a short, informative title, and to indicate whether you wish to be considered for a short (15-minute) or long (25-minute) presentation. In addition, if your presentation has multiple authors, please identify the presenter. The conference registration fee will be waived for the presenter.

If you would like to discuss an idea for a presentation or have questions about the program format, please contact a member of the scientific organizing committee. This year’s committee consists of

Lisa Barrow (Federal Reserve Bank of Chicago) <[email protected]>
Scott Long (Indiana University) <[email protected]>
Phil Schumm (University of Chicago) <[email protected]>
Rich Williams (Notre Dame) <[email protected]>

Presenters will be asked to provide electronic materials related to their talk (a copy of the presentation and any programs/datasets, where applicable) to the organizers so that the materials can be posted on the StataCorp website and in the Stata Users Group RePEc archive.



Recap: Scaling the Future: Reddit AMA on Network Automation at Scale




TL;DR: Cisco’s Strategic Direction

Cisco is shifting toward a unified operational model that emphasizes standardization, human-in-the-loop AI, and intent-based automation.

  • AI as a “Reasoning Layer”: Cisco isn’t moving toward fully autonomous AI. Instead, AI acts as a reasoning tool that analyzes telemetry, detects issues, and proposes validated fixes. Human approval is always required before any configuration changes are pushed to the network.
  • Unified Operations (AgenticOps): The goal is to bridge the gap between cloud-managed (Meraki) and on-premises (Catalyst) environments. By using a single “pane of glass” and unified APIs, Cisco aims to simplify complex, multi-system deployments into single, automated workflows.

Automation Best Practices:

  • Start Small: Focus on high-frequency, low-variability tasks (like compliance audits or site turn-ups) to prove ROI.
  • Standardize: Before automating, ensure you have a single “Source of Truth” (e.g., NetBox, Git-based workflows). Inconsistent policies scale faster than technology can fix them.
  • Safety First: Use “circuit breakers” (auto-halt on failure) and pre-deployment modeling to prevent configuration drift and limit the “blast radius” of errors.
  • Organizational Alignment: The biggest hurdle to automation isn’t the technology; it’s getting NetOps, Security, and AppDev teams to agree on standardized processes and data models.
  • Resources for Developers: Cisco encourages using the DevNet Sandbox for experimentation and the Agentic Workflows Exchange for pre-verified, drag-and-drop automation templates.
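The “circuit breaker” idea from the best practices above can be sketched as a simple failure counter that halts a rollout before a bad change spreads. This is an illustrative toy, not a Cisco API; the class and function names are invented for the example:

```python
class CircuitBreaker:
    """Halt a rollout after max_failures consecutive errors
    to limit the blast radius of a bad change."""
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False  # open = rollout halted

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0  # reset on any success
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True

def rollout(devices, apply_change, breaker):
    """Apply a change device by device, stopping if the breaker trips."""
    done = []
    for dev in devices:
        if breaker.open:
            break
        ok = apply_change(dev)
        breaker.record(ok)
        if ok:
            done.append(dev)
    return done

# Simulated deployment where every device fails: the breaker trips
# after two failures and the remaining devices are never touched.
breaker = CircuitBreaker(max_failures=2)
updated = rollout(["sw1", "sw2", "sw3", "sw4"], lambda d: False, breaker)
print(updated, breaker.open)  # [] True
```

Real systems add per-site thresholds and automatic rollback, but the core pattern is the same: stop early, keep the damage bounded.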

Key Takeaway: Cisco’s focus is on simplifying the management of heterogeneous networks through unified APIs and deterministic AI, ensuring that engineers remain in control while reducing manual, repetitive labor.

Check out the Reddit AMA!

Authors


Ana Nennig

Product Marketing Manager

Cisco Networking

Join the Next DevNet Event




A CIO’s guide to scaling speed



Much has been written about the high failure rates for AI initiatives. In an increasingly agile world, CIOs and their organizations naturally want to embrace the mindset captured in the book title “Fail Fast, Learn Faster”: in other words, move quickly, experiment, and learn along the way.

But too many organizations rush into AI without the fundamentals in place.

Before launching any AI initiative, CIOs need to act like experienced mountain climbers: establish a solid base camp with their business counterparts, align on the critical business problems and opportunities to be fixed, and make their organizations prepared for the climb ahead.

The reason is straightforward: Achieving value from AI (like any major initiative) requires discipline, not just speed. That discipline shows up as having a clear strategy tied to explicit business outcomes, with success criteria, governance and compliance defined from the start. From here, prioritization is critical. There will always be more AI use cases than resources, so CIOs must focus on the initiatives most likely to deliver measurable business impact, especially as software pricing increasingly ties to a share of cost savings and labor substitution.


Just as important, CIOs need to avoid the endless pilot trap by ensuring chosen AI initiatives have credible paths to scale. Otherwise, pilots pile up without connecting to real work.

Once this groundwork is in place, organizations can move into pilots with calculated risk, using them not only to test technology, but also to rethink business capabilities and processes and, occasionally, as futurist Linda Yates suggests, “unleash the unicorn within.”

What actually separates pilots from production?

Let’s dig into the anatomy of project success and then the causes of high project failure rates.

In our research at Dresner Advisory Services, I found three qualities that differentiate initiatives that have moved from pilots to production.

  1. Success with business intelligence (BI). This means an organization’s data is industrialized (i.e., consistent, governed, and usable at scale) so it’s AI-ready.

  2. Success with data science and machine learning. This means optimization models already exist for more complex agentic AI and, even more important, that the organization already groks AI, so less organizational learning is needed to sell AI’s value or cost to the organization.

  3. A data leader exists. A senior data leader with strong business relationships is in place, which means co-creating an AI future is easier and the right AI initiatives for the business receive prioritization.


These weren’t nice-to-haves. They determined whether initiatives scaled.

Given this background, I wanted to hear from a major consultant that helps companies day in and day out with their AI implementations: what are they seeing as they work with clients? Vamsi Duvvuri is Ernst & Young’s AI and data leader. Duvvuri argued that “AI initiatives fail when speed outpaces structure,” pointing to findings from the firm’s latest EY Technology Pulse Poll, which surveyed 500 U.S. business leaders working in the tech industry:

  • 85% of respondents prioritize speed-to-market over extensive vetting of AI.

  • 52% of respondents reported that department-level AI initiatives are conducted without formal oversight.

  • 78% say adoption is outpacing their ability to manage risk.

This is scary, and reminds me of what CIOs were trying to avoid a few years ago: shadow IT that wasn’t vetted, integrated, or protected. The difference now is that AI embeds these risks directly into workflows and spreads them faster.

Even worse, the problem extends beyond project prioritization and selection, according to Duvvuri. He said that in practice, initiatives often slow down due to weak governance, unclear ownership, poor data, and numerous disconnected pilots. “The result isn’t failed ambition, it’s stalled value,” he said. “For example, a company launches multiple AI pilots to help analysts work faster, but analysts still reconcile data, manage complexity and noise, and stitch together decisions between these multiple pilot initiatives. Value shows up briefly, then eventually plateaus.”


This neatly circles back to the three qualities identified at the start of this section.

Why more pilots didn’t create more value

Our Dresner data shows that 15% of organizations are in production with agentic AI and 34% are in production with some form of generative AI-based solutions. Our expectation is that the aggregate 34% are organizations that have the three success criteria above: BI maturity, AI and machine learning skills, and a strong data leader.

Meanwhile, 34% of organizations are experimenting with agentic AI; 53% said they are experimenting with generative AI. That these numbers aren’t closer is surprising, but it implies IT organizations can roll out a tactical generative AI solution without fixing underlying data and governance and without deliberating business priorities.

Given this, a question remains: how do organizations create space for pilots that deliver strategic, measurable, production value?

Clearly, responsible AI must be designed into operations. Professor Pedro Amorim advised that CIOs run a venture-style portfolio: funding many small, time-boxed bets, learning quickly, and doubling down on the winners with a clear path to industrialization.

He added that at the same time, organizations need “basic guardrails in place early (data classification, privacy/IP rules, human-in-the-loop for sensitive decisions, evaluation benchmarks, and explicit no-go criteria), and must make sure that there’s budget at the front of the funnel, so you’re not forced into one or two big bets.”

So, good experimentation includes strong data integrity, embedded cybersecurity, and ongoing monitoring for issues like bias and model drift.

Trust is what makes AI sustainable. Transparency, governance, training, and clear human oversight are essential so employees understand how AI works and where human judgment still matters.

“Good experimentation means deciding where complexity should live. It’s the CIO’s role to ensure agents absorb variability and orchestration, while humans retain judgment and critical decision-making,” Duvvuri said.

In practice, that requires fewer, more disciplined experiments, anchored to real workflows, not isolated tasks. This matters because organizations do need to move quickly. But speed without control amplifies breakdowns. For this reason, Duvvuri emphasized that “the issue is control, not momentum.”

Instead of piloting AI to “assist” customer service reps, he said, a CIO should sponsor an experiment where agents handle triage, resolution, and routing cases end-to-end, then escalate to humans only for exceptions, policy judgment, and customer empathy.

Successful pilots prove not just accuracy, but operability. “Good experimentation requires an AI-native approach to software delivery,” he said.

Account for risk from Day 1

Our research at Dresner shows that the key risks that CIOs and data leaders are worried about include the following:

  • Data security/privacy concerns.

  • Quality/accuracy of responses.

  • Potential for unintended consequences.

  • Legal and regulatory compliance.

So how do good organizations anticipate, assess, and mitigate AI risks from the start?

The organizations that thrive have a CIO who brings people together across the organization to co-create needed guardrails. It’s critical to remember that minimizing risk isn’t about slowing innovation. It’s about alignment and shared purpose.

For this reason, Duvvuri said that “risk must be designed in Day 1. Because AI accelerates action, unmanaged usage creates exposure,” he said, pointing to EY data showing that 45% of technology leaders report a confirmed or suspected sensitive data leak tied to unauthorized generative AI use, and 39% report IP leakage.

That’s not a tooling problem; it’s a design failure.

CIOs need to standardize approved platforms, embed controls directly into workflows, and clearly define where agents act autonomously versus where humans must intervene, he said. Done right, governance becomes a scale enabler, not a brake on innovation.

Duvvuri recommended that CIOs establish approved AI tools, real-time monitoring for data and IP risk, and clear authority to halt noncompliant deployments.

“Teams will move faster because safe behavior is built into the system, not enforced after the fact. As intelligence becomes cheaper and more available, enterprises don’t get simpler by default. The winners deliberately shift complexity from humans to machines, while keeping judgment, trust, and accountability firmly with people,” he said.

Agile with discipline: Build the foundation first

CIOs should apply agile principles to AI, but not without discipline. Organizations need a clear strategy tied to explicit business outcomes, with success criteria, governance, and compliance defined from the outset. Data maturity and well-defined guardrails are essential. This foundation enables smarter experimentation while accounting for risk from the start. More mature organizations have a head start because they’ve already addressed many of these challenges. For CIOs in less mature environments, the priority is clear: invest in the processes and data capabilities needed to generate early wins, then refine, scale, and industrialize data and business processes.
CIOs ought to apply agile ideas to AI — however not with out self-discipline. Organizations want a transparent technique tied to specific enterprise outcomes, with success standards, governance, and compliance outlined from the outset. Knowledge maturity and well-defined guardrails are important. This basis permits smarter experimentation whereas accounting for threat from the beginning. Extra mature organizations have a head begin as a result of they’ve already addressed many of those challenges. For CIOs in much less mature environments, the precedence is obvious: spend money on the processes and knowledge capabilities wanted to generate early wins — then refine, scale, and industrialize knowledge and enterprise processes.