‘Super agers’ with great memory have more young brain cells



Adults whose brains still have strong neuron production seem to have better memory and cognitive function than do those in whom the ability wanes, finds a study published today in Nature. The authors examined brain samples from deceased donors ranging from young adults to ‘super agers’ — people older than 80 with exceptional memory.

They found that young and old adults with healthy cognition generated neurons, a process called neurogenesis, at high levels for their age. The team estimated that the new neurons made up only a small fraction — 0.01% — of those in the hippocampus, a brain region that is essential for memory. By contrast, in people experiencing cognitive decline, including individuals with Alzheimer’s disease, neurogenesis seems to falter: the researchers observed fewer developing, or immature, neurons in these brain samples.

Surprisingly, one group of ‘super agers’ had an even higher number of immature neurons than did other groups, and significantly more than did those with Alzheimer’s. However, the group sizes were small, so the findings were not all statistically significant.




Maura Boldrini Dupont, a neuroscientist and psychiatrist at Columbia University in New York City, says that the small size of the groups — each had ten or fewer individuals — is a reason to take the results with a grain of salt.

Understanding the tools that the brain uses to generate neurons and maintain cognitive function in old age could help researchers to develop drugs that induce neurogenesis in people with cognitive decline, says co-author Orly Lazarov, a neuroscientist at the University of Illinois Chicago.

Controversy over neurogenesis

The findings support the idea that people’s brains continue to generate neurons even in adulthood. But that idea hasn’t always been accepted.

In the early 1900s, neuroscientist Santiago Ramón y Cajal suggested that the human brain couldn’t form neurons after birth. Eventually, researchers found that neurogenesis did occur in childhood, but still thought that was the endpoint.

“That’s what they used to teach when I went to medical school,” Dupont says.

In the past few decades, however, this dogma has been challenged by new evidence supporting neurogenesis in the adult hippocampus, fuelling an ongoing debate in neurobiology.

Although researchers know that neurogenesis occurs in some adult animals, including mice and primates, they haven’t been able to agree on whether it happens in the brains of human adults. That’s mainly because there are more tools for studying neurogenesis in animals than in humans. In mice, for instance, researchers can inject chemicals that trace the birth and development of neurons. This can’t be done in living people, and research in human brain samples has been limited, Lazarov says.

One tool researchers have used to study neurogenesis in humans, however, is protein markers. Antibodies can be used to detect certain proteins expressed by neural stem cells — which can turn into neurons — and immature neurons in donated brain samples. But Lazarov points to critics’ argument “that these proteins aren’t specific enough and can be expressed in other cell types, not just in neurogenesis”.

So scientists have turned to single-cell RNA sequencing to find more specific genetic markers of neural stem cells and immature neurons in the human hippocampus.

Into the future

Lazarov and her colleagues went a step further in their latest study. They not only used RNA sequencing to identify the genetic signatures of these cell types, but also uncovered their epigenetic signatures. Epigenetic markers are DNA modifications that control gene expression. The team used an assay that pinpoints parts of a cell’s DNA that are primed for expression to determine these signatures. Dupont says that the assay is a strong point of the study.

Lazarov says that the next step will be to understand the function of the neurons generated in the adult brain. “What we need is functional validation of these cells, to tell what they’re doing in the human brain,” she says, adding that this could require new imaging methods that are sensitive enough to detect this activity.

This article is reproduced with permission and was first published on January 25, 2026.


Programming an estimation command in Stata: Allowing for sample restrictions and factor variables



I modify the ordinary least-squares (OLS) command discussed in Programming an estimation command in Stata: A better OLS command to allow for sample restrictions, to handle missing values, to allow for factor variables, and to deal with perfectly collinear variables.

This is the eighth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Sample restrictions

The myregress4 command described in Programming an estimation command in Stata: A better OLS command has the syntax

myregress4 depvar [indepvars]

where the indepvars may be time-series–operated variables. myregress5 allows for sample restrictions and missing values. It has the syntax

myregress5 depvar [indepvars] [if] [in]

A user may optionally specify an if expression or an in range to restrict the sample. I also make myregress5 handle missing values in the user-specified variables.

Code block 1: myregress5.ado


*! version 5.0.0  22Nov2015
program define myregress5, eclass
	version 14

	syntax varlist(numeric ts) [if] [in]
	marksample touse

	gettoken depvar : varlist

	tempname zpz xpx xpy xpxi b V
	tempvar  xbhat res res2 

	quietly matrix accum `zpz' = `varlist' if `touse'
	local p : word count `varlist'
	local p = `p' + 1
	matrix `xpx'                = `zpz'[2..`p', 2..`p']
	matrix `xpy'                = `zpz'[2..`p', 1]
	matrix `xpxi'               = syminv(`xpx')
	matrix `b'                  = (`xpxi'*`xpy')'
	quietly matrix score double `xbhat' = `b' if `touse'
	quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
	quietly generate double `res2'      = (`res')^2 if `touse'
	quietly summarize `res2' if `touse' , meanonly
	local N                     = r(N)
	local sum                   = r(sum)
	local s2                    = `sum'/(`N'-(`p'-1))
	matrix `V'                  = `s2'*`xpxi'
	ereturn post `b' `V', esample(`touse')
	ereturn scalar           N  = `N'
	ereturn local         cmd   "myregress5"
	ereturn display
end

The syntax command in line 5 specifies that a user may optionally restrict the sample by specifying an if expression or an in range. When the user specifies an if expression, syntax puts it into the local macro if; otherwise, the local macro if is empty. When the user specifies an in range, syntax puts it into the local macro in; otherwise, the local macro in is empty.

We could use the local macros if and in to handle user-specified sample restrictions, but these do not account for missing values in the user-specified variables. The marksample command in line 6 creates a local macro named touse, which contains the name of a temporary variable that is a sample-identification variable. Each observation in the sample-identification variable is either one or zero. It is one if the observation is included in the sample. It is zero if the observation is excluded from the sample. An observation can be excluded by a user-specified if expression, by a user-specified in range, or because there is a missing value in one of the user-specified variables.
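The logic that marksample implements can be sketched outside Stata. The following pure-Python illustration (the variable names and the tiny dataset are made up for this example; it is not Stata's implementation) shows the rule: an observation's flag is one only if the user's condition holds and none of the requested variables is missing.

```python
# Illustrative sketch of the marksample rule (not Stata code):
# keep an observation only when the `if' condition holds AND no
# user-specified variable is missing for that observation.

def make_touse(rows, varnames, condition):
    """Return a 0/1 flag per observation, mimicking the `touse' variable."""
    flags = []
    for row in rows:
        no_missing = all(row.get(v) is not None for v in varnames)
        flags.append(1 if no_missing and condition(row) else 0)
    return flags

# Hypothetical observations in the style of the auto dataset.
data = [
    {"price": 4099, "mpg": 22, "rep78": 3},
    {"price": 4749, "mpg": 17, "rep78": None},  # missing rep78 -> excluded
    {"price": 3799, "mpg": 31, "rep78": 4},     # fails mpg < 30 -> excluded
]
print(make_touse(data, ["price", "mpg", "rep78"], lambda r: r["mpg"] < 30))
# -> [1, 0, 0]
```

Both exclusion paths (a failed if expression and a missing value) produce the same zero flag, which is why a single touse variable can drive every later calculation.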

Lines 20–23 use the sample-identification variable contained in the local macro touse to impose these sample restrictions on the OLS calculations.
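The restricted arithmetic those lines perform can be written out directly: drop the flagged-out observations, form the normal equations, and compute b = (X'X)^-1 X'y with s2 = SSR/(N − k). A minimal pure-Python sketch (not Stata, and shown for one regressor plus a constant so the 2×2 inverse is explicit) is:

```python
# Illustrative sketch of the restricted OLS computation (not Stata code):
# sums run only over observations with touse == 1.

def ols_restricted(y, x, touse):
    ys = [yi for yi, t in zip(y, touse) if t]
    xs = [xi for xi, t in zip(x, touse) if t]
    n = len(ys)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(xi * xi for xi in xs)
    sxy = sum(xi * yi for xi, yi in zip(xs, ys))
    det = n * sxx - sx * sx              # det(X'X) for [[sxx, sx], [sx, n]]
    b_slope = (n * sxy - sx * sy) / det  # (X'X)^-1 X'y, written out
    b_cons = (sy - b_slope * sx) / n
    ssr = sum((yi - b_cons - b_slope * xi) ** 2 for xi, yi in zip(xs, ys))
    s2 = ssr / (n - 2)                   # k = 2 estimated parameters
    return b_slope, b_cons, s2

y = [1.0, 2.0, 3.0, 99.0]   # last observation has touse == 0
x = [0.0, 1.0, 2.0, 50.0]
print(ols_restricted(y, x, [1, 1, 1, 0]))
# -> (1.0, 1.0, 0.0): the excluded point never enters the sums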

Line 28 posts the sample-identification variable into e(sample), which is one if the observation was included in the estimation sample and zero if the observation was excluded from the estimation sample.

Line 29 stores the number of observations in the sample in e(N).

Example 1 illustrates that myregress5 runs the requested regression on the sample that respects the missing values in rep78 and accounts for an if expression.

Example 1: myregress5 with missing values and an if expression


. sysuse auto
(1978 Automobile Data)

. count if !missing(rep78)
  69

. count if !missing(rep78) & mpg < 30
  62

. myregress5 price mpg trunk rep78 if mpg < 30
------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         mpg |  -376.8591   107.4289    -3.51   0.000    -587.4159   -166.3024
       trunk |  -36.39376   102.2139    -0.36   0.722    -236.7294    163.9418
       rep78 |   556.3029   378.1101     1.47   0.141    -184.7793    1297.385
       _cons |    12569.5   3455.556     3.64   0.000     5796.735    19342.27
------------------------------------------------------------------------------

. ereturn list

scalars:
                  e(N) =  62

macros:
                e(cmd) : "myregress5"
         e(properties) : "b V"

matrices:
                  e(b) :  1 x 4
                  e(V) :  4 x 4

functions:
             e(sample)   

Allowing for factor variables

Example 1 includes the number of repairs as a continuous variable, but it might be better treated as a discrete factor. myregress6 accepts factor variables. Factor-variable lists frequently imply variable lists that contain perfectly collinear variables, so myregress6 also handles perfectly collinear variables.

Code block 2: myregress6.ado


*! version 6.0.0  22Nov2015
program define myregress6, eclass
	version 14

	syntax varlist(numeric ts fv) [if] [in]
	marksample touse

	gettoken depvar : varlist
	_fv_check_depvar `depvar'

	tempname zpz xpx xpy xpxi b V
	tempvar  xbhat res res2 

	quietly matrix accum `zpz' = `varlist' if `touse'
	local p                    = colsof(`zpz')
	matrix `xpx'               = `zpz'[2..`p', 2..`p']
	matrix `xpy'               = `zpz'[2..`p', 1]
	matrix `xpxi'              = syminv(`xpx')
	local k                    = `p' - diag0cnt(`xpxi') - 1
	matrix `b'                 = (`xpxi'*`xpy')'
	quietly matrix score double `xbhat' = `b' if `touse'
	quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
	quietly generate double `res2'      = (`res')^2 if `touse'
	quietly summarize `res2' if `touse' , meanonly
	local N                     = r(N)
	local sum                   = r(sum)
	local s2                    = `sum'/(`N'-(`k'))
	matrix `V'                  = `s2'*`xpxi'
	ereturn post `b' `V', esample(`touse') buildfvinfo
	ereturn scalar N            = `N'
	ereturn scalar rank         = `k'
	ereturn local  cmd             "myregress6"
	ereturn display
end

The fv in the parentheses after varlist in the syntax command in line 5 modifies the varlist to accept factor variables. Any specified factor variables are stored in the local macro varlist in a canonical form.

Estimation commands do not allow the dependent variable to be a factor variable. The _fv_check_depvar command in line 9 will exit with an error if the local macro depvar contains a factor variable.

Line 15 stores the number of columns in the matrix formed by matrix accum in the local macro p. Line 19 stores the number of linearly independent columns in the local macro k. This calculation uses diag0cnt() to account for the perfectly collinear variables that were dropped. (Each dropped variable puts a zero on the diagonal of the generalized inverse calculated by syminv(), and diag0cnt() returns the number of zeros on the diagonal.)
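The quantity diag0cnt() recovers is just the rank deficiency of X'X: with a perfectly collinear column, X'X is singular, and the number of dropped variables equals the number of columns minus rank(X'X). A pure-Python sketch of that relationship (not Stata's implementation; here p counts only the columns of X, with no dependent variable in the matrix) is:

```python
# Illustrative sketch (not Stata): the count of dropped collinear
# variables equals p - rank(X'X), which is what diag0cnt() reports
# via zeros on the diagonal of the generalized inverse.

def matrix_rank(m, tol=1e-10):
    """Rank via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in m]
    rows, cols = len(a), len(a[0])
    rank, r = 0, 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(a[i][c]))
        if abs(a[pivot][c]) < tol:
            continue
        a[r], a[pivot] = a[pivot], a[r]
        for i in range(r + 1, rows):
            f = a[i][c] / a[r][c]
            for j in range(c, cols):
                a[i][j] -= f * a[r][j]
        rank += 1
        r += 1
        if r == rows:
            break
    return rank

# X = [constant, x1, x2] where x2 = 2*x1 is perfectly collinear with x1.
X = [[1.0, 1.0, 2.0], [1.0, 2.0, 4.0], [1.0, 3.0, 6.0], [1.0, 4.0, 8.0]]
p = len(X[0])
XtX = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(p)]
       for a in range(p)]
k = matrix_rank(XtX)   # linearly independent columns
print(p - k)
# -> 1: one variable would be dropped
```

This is why line 19 can recover k without ever examining the factor-variable expansion itself: the singular structure of X'X already encodes which columns were redundant.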

On line 29, I specified the option buildfvinfo on ereturn post to store hidden information that ereturn display, contrast, margins, and pwcompare use to label tables and to determine which functions of the parameters are estimable.

Line 31 stores the number of linearly independent variables in e(rank) for postestimation commands.

Now, I use myregress6 to include rep78 as a factor variable. The base category is dropped because we included a constant term.

Example 2: myregress6 with a factor variable


. myregress6 price mpg trunk i.rep78 
------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         mpg |  -262.7053   73.49434    -3.57   0.000    -406.7516   -118.6591
       trunk |   41.75706    93.9671     0.44   0.657    -142.4151    225.9292
             |
       rep78 |
          2  |   654.7905   2136.246     0.31   0.759    -3532.175    4841.756
          3  |   1170.606   2001.739     0.58   0.559     -2752.73    5093.941
          4  |   1473.352   2017.138     0.73   0.465    -2480.167     5426.87
          5  |   2896.888   2121.206     1.37   0.172    -1260.599    7054.375
             |
       _cons |   9726.377   2790.009     3.49   0.000      4258.06    15194.69
------------------------------------------------------------------------------

Done and undone

I modified the OLS command discussed in Programming an estimation command in Stata: A better OLS command to allow for sample restrictions, to handle missing values, to allow for factor variables, and to deal with perfectly collinear variables. In the next post, I show how to allow options for robust standard errors and for suppressing the constant term.



Practical Android AI | Kodeco



This book is for Android developers of all levels – whether you’re exploring generative AI for the first time or you’re an experienced engineer looking to deepen your AI/ML expertise.

  • AI Landscape & Modern Android Ecosystem
  • On-Device vs Cloud AI Architecture
  • AI-assisted coding with Gemini Chat
  • Gemini Agent Mode
  • UI Transformation with Gemini
  • Generating Tests and Documentation using Gemini
  • Google’s ML Kit Vision APIs
  • Building Custom ML Solutions with MediaPipe
  • Real-time On-Device LLM Chat with MediaPipe
  • Firebase AI Logic for Cloud Inference
  • Generating Images with Imagen 4
  • Generating Descriptions with the Gemini Model
  • Play for On-Device AI
  • Gemini Live API
  • Function Calling with Gemini
  • Responsible AI & Best AI Practices

In this book, you’ll learn how to build intelligent Android applications using today’s most powerful AI and ML tools — from on-device capabilities with ML Kit and MediaPipe to cloud-powered generative models like Gemini and Firebase AI Logic.
You’ll explore real-world examples that integrate text, vision, and conversational intelligence into…



This section tells you a few things you need to know before you get started, such as what you’ll need for hardware and software, where to find the project files for this book, and more.

Artificial intelligence is reshaping the Android ecosystem faster than any platform shift before it. Just a few years ago, integrating AI into a mobile app required deep ML expertise, heavy infrastructure, and complex custom models. Today, however, Google’s AI stack — from Gemini to on-device engines like AICore and ML Kit — has made intelligent features accessible to every Android developer.

This first section gives you the foundational understanding you need before building AI-powered apps. You’ll explore how AI is transforming Android, how to use AI tools to accelerate development, and how to get started with generative AI in your applications.

In this section, you’ll learn:

  • The evolving landscape of Android AI and the forces driving this shift.

  • How on-device and cloud-based AI differ — and when to use each.

  • How to use AI-assisted developer workflows, from smart code completion to Gemini in Android Studio, Gemini Agent Mode, and AI-driven debugging.

  • Essential generative AI concepts: prompts, context, tokens, and model behavior.

Through these three chapters, you’ll build a strong conceptual and practical foundation — preparing you for the deeper, more advanced AI features explored later in the book.

This chapter introduces the rapidly evolving AI-powered Android ecosystem, explaining the rise of agentic AI, on-device AI versus cloud AI, and Google’s Gemini-driven developer tools. It gives Android developers a clear foundation for building intelligent, autonomous, and multimodal applications in the new AI-first era.

Discover how Android AI supercharges your development workflow. This chapter focuses on the enhanced AI features within Android Studio, including Gemini in Android Studio for code generation, bug fixing, and UI transformation.

This chapter helps you navigate Google’s Android AI ecosystem. You’ll learn how Gemini models work, when to use on-device vs cloud AI, and how to choose the best generative AI or ML solution for your app.

By now, you’ve explored the foundations of AI on Android and learned how today’s ecosystem makes it possible to build smarter, more adaptive apps.

This section shifts the focus from concepts to practical, hands-on implementation. Here, you’ll work directly with the core Android AI toolset — the frameworks and runtimes that power both on-device and cloud-based intelligence. You’ll learn how to choose the right approach for your use case, integrate AI smoothly into your app’s architecture, and ship real machine intelligence that feels fast, reliable, and user-friendly.

Across these three chapters, you’ll explore:

  • ML Kit for On-Device Intelligence: Build document scanners, text extractors, and vision-powered features that run privately and instantly on the user’s device.

  • MediaPipe for Custom ML: Create your own ML pipelines and even run lightweight LLMs on-device, unlocking flexible, real-time AI experiences tailored to your app.

  • Firebase AI Logic for Cloud Power: Offload complex or high-quality generative tasks to Gemini in the cloud, blending device and server intelligence into a hybrid architecture.

By the end of this process, you’ll have a solid command of the tools needed to build production-quality AI features — from vision to text to generative models.

This chapter introduces on-device AI in Android using Google’s ML Kit. You’ll build a document scanner and text extractor while learning how to use key Vision and Natural Language APIs. Along the way, you’ll understand when on-device inference is most valuable — such as for privacy, low latency, and offline functionality — and explore the trade-offs that come with running models locally on user devices.

This chapter explores how to build custom machine learning solutions using MediaPipe. You’ll learn how to leverage the MediaPipe framework to integrate your own ML models by building an on-device, real-time LLM chat application, supported with practical examples and step-by-step guidance.

Learn how to harness Firebase’s cloud-based generative AI capabilities to build smarter, more dynamic Android apps. This chapter walks you through setting up Firebase AI Logic, integrating models like Gemini and Imagen, and adding AI-powered image generation and text creation to elevate your app’s intelligence and user experience.

By this point in your journey, you’ve explored both the fundamentals of Android AI and the core tools that power intelligent features. Now it’s time to move beyond implementation and into the realities of shipping, scaling, and maintaining AI features in production.

In this section, you’ll learn:

  • How to package and ship on-device ML and GenAI models via the Play ecosystem, enabling dynamic model updates, optimized distribution, and reduced app sizes.

  • How to build real-time, multimodal, assistant-like experiences with Gemini Live, including streaming audio, session management, and function calling for interactive agents.

  • How to design AI responsibly, incorporating fairness, transparency, safety, and user control into every part of your app — from data flow to UI.

  • How to prepare your AI features for production, covering monitoring, model rollback, budgeting, privacy constraints, and long-term sustainability.

  • What the future of Android AI looks like, and how developers can adapt to the rapidly evolving ecosystem.

Across these final chapters, you’ll not only deepen your technical expertise but also gain the strategic perspective needed to build AI-powered Android apps that scale — ethically, safely, and confidently.

We hope you’re ready to jump in and enjoy getting to know the power of AI in Android!

Learn how to optimize, package, and ship on-device AI models using Play for On-device AI. This chapter covers delivery strategies, device targeting, and deployment workflows that help you build fast, scalable, and resource-efficient AI experiences on Android.

This chapter explores how to build a real-time interactive Android app using Gemini Live. You’ll learn to set up and configure the Live API, manage audio streaming sessions, implement natural voice interactions, and enable function calling so the model can trigger real app actions. By the end, you’ll understand best practices for creating seamless, hands-free, AI-driven user experiences.

This chapter explores best practices for building AI-powered Android applications, examines key ethical considerations in AI development, and highlights emerging trends shaping the future of Android AI. Readers will gain insights into responsible AI design, strategies for maintaining user trust, and the technologies that will drive the next generation of intelligent Android apps.

How AI can construct organizational agility in 2026



As economic uncertainty continues into 2026, business leaders are looking for opportunities to increase stability and company value. Increasingly, that requires making their organizations even more adaptable and resilient. Despite a majority of company leaders reporting severe impacts from changing market conditions, many are already pursuing new initiatives to maintain their stability and foster growth in the year ahead. 

At Personiv, we see six major value-creation and adaptability trends emerging from data collected in our 2025 Executive Outlook Pulse Survey in August 2025. The survey found that 62% of organizations endured “extremely significant” strategy and execution impacts due to economic shifts in the first half of 2025. In response, a majority of those surveyed have already begun investing in areas to strengthen their flexibility and agility. 

AI deployment and expansion

The most common focus areas for investment are expansion of AI use cases and improved operational strategies, to maintain stability and make progress in choppy economic waters.


Based on survey responses, 76% of public companies are already using AI in some operational capacity, with 70% using AI for finance operations such as payroll, expense reporting and compliance. Forty-five percent of private companies are using AI in similar ways. Much of that AI is likely used for automation, but agentic AI is emerging as a powerful resource for businesses that want to make their operations more resilient.

The World Economic Forum recently highlighted the business utility of AI agents. These agents can learn to act as “custodians of specific domains,” such as compliance or supply chain management, not only to orchestrate complex processes but also to “interpret and draw inferences from that domain.” That offers new potential for future-looking strategies.

Forecasting and strategic planning

The fast-improving ability to collect and analyze data insights with AI supports another trend we expect to continue through 2026: a strong focus on data-driven forecasting and planning. Close to a third of survey respondents rated forecasting and scenario planning as a priority, perhaps partly because forecasting was the area most strongly affected by economic changes in 2025.

Still, less than a third of survey participants reported the use of AI in more than half of their finance and accounting functions. That underutilization suggests forecasting and strategic planning will be an area where AI adoption offers potentially large accuracy and efficiency gains in the coming year, especially if economic conditions continue to fluctuate.


Supply chain optimization

It’s not just internal operations that were affected by the economic environment; economic policy changes affected more than half of companies’ supply chains this year. Because the supply chain is already vulnerable to factors like geopolitical instability and weather events, this added uncertainty heightens the need for risk management. Survey respondents ranked supply chain disruptions among the three biggest emerging financial risks they are preparing for going forward.

Many organizations are already adopting AI to reduce supply chain risk, control costs and manage complexity. AI-powered automation can also accelerate standardized supply chain processes, since AI agents have the potential to monitor real-time conditions, foresee potential disruptions and quickly suggest alternate scenarios to minimize the impact of those disruptions. 

CapEx and OpEx investment

Another way businesses will continue to build resilience next year is through strategic capital and operational expenditures. More than half of the survey respondents reported increased CapEx and OpEx spending in 2025, while a quarter planned to increase their spending in these areas. This trend is mirrored across the U.S. economy, with CapEx growth racing ahead of sales growth this year. AI-related investments are driving the overall increase in CapEx spending, and that is likely to continue through 2026.


Many organizations are building capital, operational and technical foundations for a stronger year ahead. Translating those investments into greater resilience requires talent, however, which is another area where many companies are investing.

Talent sourcing and optimization

Finding skilled, experienced finance and accounting talent has been increasingly difficult for the past few years, but successfully filling these roles remains essential. In today’s market, finance and accounting teams spend most of their time helping their companies adjust to conditions as they change, which means staffing gaps can affect a company’s ability to pivot as needed.

Many of these teams already preserve their capacity by using AI automation to handle repetitive, high-volume processes such as payroll processing, accounts receivable and accounts payable. But many organizations are also hiring, and therefore should consider potential applications of AI in the recruitment process. Thirty-eight percent of survey respondents said they plan to increase headcount in the next six months, on top of the 34% who already hired more people in 2025. Using AI for standardized processes will allow these employees to focus on helping their companies adapt as conditions change.

Cybersecurity prioritization

Threats to organizations’ cybersecurity were the top risk-related concern among survey respondents: Thirty-two percent reported that it is the most critical emerging risk for which they are preparing. Executives and leaders are especially concerned about phishing, social engineering, ransomware and data breaches. Such attacks are becoming more common but also harder to identify, because criminals are leveraging AI to make their attacks more effective. 

 

Fortunately, AI can also be leveraged for stronger defense, and we expect to see more companies use AI-backed cybersecurity tools to protect their finances and data. That trend is already in motion. This year, the global average cost of a data breach dropped by 9% from 2024, to $4.4 million, in part because more companies are using AI-backed security tools to detect and stop attacks sooner. 

 

AI is a thread that runs through all these trends, from forecasting and supply chain optimization through capital investments, talent optimization, and cybersecurity and risk management. AI can equip business leadership with the insights needed for sound forecasts and planning, while freeing up finance and accounting teams to adapt to changes in the economy. When deployed thoughtfully, the value that AI provides becomes a key ingredient for building resilient companies this year.



How Tecno's partnership with Google Cloud is taking its 'practical AI' to new heights



Whether we like it or not, 2026 is set to be a big year for AI, with companies like Google, Samsung, Meta, and more aiming to bring as many features to consumers as possible. As showcased by recent launches like the Galaxy S26 series, flashy new AI features are leading the conversation around new phones, and consumers are paying the price for such capabilities.

However, one company is taking a more practical approach to AI, one that it hopes can reach the consumers who really need it. Tecno may not be a company you've heard much about, but this relatively small OEM has big plans for its AI, which has helped to shape its product strategy, a valuable partnership with Google Cloud, and new hardware that it plans to show off at MWC 2026.

Huge Study Reveals The Secret to Heart Health, And It's Not Low-Carb or Low-Fat : ScienceAlert



The key to heart health isn't cutting down on pasta or potatoes, new evidence suggests; it isn't even a low-fat diet.

A study that tracked nearly 200,000 men and women in the US for around 30 years has now found that some low-fat and low-carb diets are better for heart health than others.

The key was the quality of the food itself, not the quantity of carbs or fats.

The research, led by public health researchers at Harvard University, suggests that if a diet contains too many processed foods and animal proteins or fats, or if it otherwise lacks sufficient vegetables, fruits, whole grains, healthy fats, or essential macronutrients, it may not benefit cardiovascular health as much in the long run, even if it is low-carb or low-fat by definition.

"Focusing solely on nutrient compositions but not food quality may not lead to health benefits," concludes Harvard epidemiologist Zhiyuan Wu, who led the research.

Participants in the study who ate healthy, varied diets with sufficient macronutrients showed higher levels of 'good' cholesterol in their blood, as well as lower levels of fats and inflammatory markers compared to those who ate diets lacking in these requirements.

They also had a significantly lower risk of developing coronary heart disease, the most common cause of heart attacks.

(fcafotodigital/Getty Images)

"These results suggest that healthy low-carbohydrate and low-fat diets may share common biological pathways that improve cardiovascular health," explains Wu.

"Focusing on overall diet quality may offer flexibility for individuals to choose eating patterns that align with their preferences while still supporting heart health."


The findings are based on the self-reported diets of the participants, who were all health professionals, so they may have had higher health awareness and better access to health care than the general population.

Related: This Diet Change Cuts Over 300 Calories a Day, Without Reducing Meal Size

That is somewhat limiting; however, the length of follow-up in the study is impressive, amounting to more than 5.2 million person-years.

The findings join growing evidence suggesting that eating fewer processed foods and more whole grains and vegetables is generally best for a wide range of health outcomes. Strict diets that count calories, carbs, or fats may not be necessary.


"This study helps move the conversation beyond the long-standing debate over low-carbohydrate versus low-fat diets," says Yale University cardiologist Harlan Krumholz, editor-in-chief of the Journal of the American College of Cardiology.

"The findings show that what matters most for heart health is the quality of the food people eat. Whether a diet is lower in carbohydrates or fat, emphasizing plant-based foods, whole grains, and healthy fats is associated with better cardiovascular outcomes."

The study was published in the Journal of the American College of Cardiology.

Scaling Search Relevance: Augmenting App Store Ranking with LLM-Generated Judgments



Large-scale commercial search systems optimize for relevance to drive successful sessions that help users find what they are looking for. To maximize relevance, we leverage two complementary objectives: behavioral relevance (results users tend to click or download) and textual relevance (a result's semantic match to the query). A persistent challenge is the scarcity of expert-provided textual relevance labels relative to abundant behavioral relevance labels. We first address this by systematically evaluating LLM configurations, finding that a specialized, fine-tuned model significantly outperforms a much larger pre-trained one in providing highly relevant labels. Using this optimal model as a force multiplier, we generate millions of textual relevance labels to overcome the data scarcity. We show that augmenting our production ranker with these textual relevance labels leads to a significant outward shift of the Pareto frontier: offline NDCG improves for behavioral relevance while simultaneously increasing for textual relevance. These offline gains were validated by a global A/B test on the App Store ranker, which demonstrated a statistically significant +0.24% increase in conversion rate, with the most substantial performance gains occurring in tail queries, where the new textual relevance labels provide a strong signal in the absence of reliable behavioral relevance labels.
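The offline metric behind the Pareto-frontier claim is NDCG (normalized discounted cumulative gain). As a rough illustration only, not the paper's implementation, NDCG for a ranked list of graded relevance labels can be computed like this:

```python
import math

def dcg(labels):
    """Discounted cumulative gain for a ranked list of relevance grades."""
    return sum((2**rel - 1) / math.log2(i + 2) for i, rel in enumerate(labels))

def ndcg(labels):
    """NDCG: DCG of the ranking divided by DCG of the ideal (sorted) ranking."""
    ideal = dcg(sorted(labels, reverse=True))
    return dcg(labels) / ideal if ideal > 0 else 0.0

# A ranking that buries a highly relevant result (grade 2) below a
# non-relevant one (grade 0) scores below the ideal ordering.
print(ndcg([2, 1, 0]))            # 1.0: already ideally ordered
print(round(ndcg([0, 1, 2]), 3))  # 0.587
```

The LLM-generated textual relevance grades slot in exactly where the `labels` come from here, which is what makes label scarcity the bottleneck.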

Docker AI for Agent Builders: Models, Tools, and Cloud Offload




 

The Value of Docker

 
Building autonomous AI systems is no longer just about prompting a large language model. Modern agents coordinate multiple models, call external tools, manage memory, and scale across heterogeneous compute environments. What determines success is not just model quality, but infrastructure design.

Agentic Docker represents a shift in how we think about that infrastructure. Instead of treating containers as a packaging afterthought, Docker becomes the composable backbone of agent systems. Models, tool servers, GPU resources, and application logic can all be defined declaratively, versioned, and deployed as a unified stack. The result is portable, reproducible AI systems that behave consistently from local development to cloud production.

This article explores five infrastructure patterns that make Docker a strong foundation for building robust, autonomous AI applications.

 

1. Docker Model Runner: Your Local Gateway

 
The Docker Model Runner (DMR) is ideal for experiments. Instead of configuring separate inference servers for each model, DMR provides a unified, OpenAI-compatible application programming interface (API) to run models pulled directly from Docker Hub. You can prototype an agent using a powerful 20B-parameter model locally, then swap to a lighter, faster model for production, all by changing just the model name in your code. It turns large language models (LLMs) into standardized, portable components.

Basic usage:

# Pull a model from Docker Hub
docker model pull ai/smollm2

# Run a one-shot query
docker model run ai/smollm2 "Explain agentic workflows to me."

# Use it via the OpenAI Python SDK
from openai import OpenAI

client = OpenAI(
    base_url="http://model-runner.docker.internal/engines/llama.cpp/v1",
    api_key="not-needed",  # DMR does not require an API key
)

response = client.chat.completions.create(
    model="ai/smollm2",
    messages=[{"role": "user", "content": "Explain agentic workflows to me."}],
)
print(response.choices[0].message.content)

 

2. Defining AI Models in Docker Compose

 
Modern agents often use multiple models, such as one for reasoning and another for embeddings. Docker Compose now lets you define these models as top-level elements in your compose.yml file, making your entire agent stack (business logic, APIs, and AI models) a single deployable unit.

This brings infrastructure-as-code principles to AI. You can version-control your full agent architecture and spin it up anywhere with a single docker compose up command.
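As a minimal sketch of the idea (assuming Docker Compose v2.38+ with the top-level `models` element, and reusing the `ai/smollm2` model from the earlier example):

```yaml
# compose.yml: an agent service with a declaratively attached model
services:
  agent-app:
    build: .
    models:
      - smollm2   # Compose injects the model's endpoint into this service

models:
  smollm2:
    model: ai/smollm2   # pulled from Docker Hub on first run
```

A single `docker compose up` then brings the application and its model up together as one versioned stack.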

 

3. Docker Offload: Cloud Power, Local Experience

 
Training or running large models can melt your local hardware. Docker Offload solves this by transparently running specific containers on cloud graphics processing units (GPUs) directly from your local Docker environment.

This lets you develop and test agents with heavyweight models using a cloud-backed container, without learning a new cloud API or managing remote servers. Your workflow stays entirely local, but the execution is powerful and scalable.
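A rough sketch of the workflow, assuming the Docker Offload beta CLI is set up (command names may change as the feature evolves, so check the current Docker documentation):

```
# Start an offload session; eligible containers now run on cloud GPUs
docker offload start

# Run a GPU-heavy workload exactly as you would locally
docker run --rm --gpus all pytorch/pytorch \
    python -c "import torch; print(torch.cuda.is_available())"

# End the session and return to purely local execution
docker offload stop
```

The key design point is that the `docker run` invocation is unchanged; only the session state decides where the container executes.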

 

4. Model Context Protocol Servers: Agent Tools

 
An agent is only as good as the tools it can use. The Model Context Protocol (MCP) is an emerging standard for providing tools (e.g. search, databases, or internal APIs) to LLMs. Docker's ecosystem includes a catalogue of pre-built MCP servers that you can integrate as containers.

Instead of writing custom integrations for every tool, you can use a pre-made MCP server for PostgreSQL, Slack, or Google Search. This lets you focus on the agent's reasoning logic rather than the plumbing.
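As an illustrative compose fragment (the image name `mcp/postgres` and its configuration variable are assumptions for the sketch; consult Docker's MCP catalog for the servers that actually exist and their exact settings):

```yaml
services:
  db-tools:
    # Hypothetical pre-built MCP server image from Docker's MCP catalog
    image: mcp/postgres
    environment:
      # Connection details the server needs to expose database tools
      DATABASE_URL: postgresql://user:pass@db:5432/appdb
```

Because the server is just a container, swapping one tool backend for another is a one-line image change rather than a new integration.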

 

5. GPU-Optimized Base Images for Custom Work

 
When you need to fine-tune a model or run custom inference logic, starting from a well-configured base image is essential. Official images like PyTorch or TensorFlow come with CUDA, cuDNN, and other requirements pre-installed for GPU acceleration. These images provide a stable, performant, and reproducible foundation. You can extend them with your own code and dependencies, ensuring your custom training or inference pipeline runs identically in development and production.
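A minimal Dockerfile sketch of extending an official GPU-ready base image (the tag shown is illustrative; pick a current one from Docker Hub, and `train.py` stands in for your own entry point):

```dockerfile
# Official PyTorch runtime image with CUDA and cuDNN pre-installed
FROM pytorch/pytorch:2.4.0-cuda12.1-cudnn9-runtime

WORKDIR /app

# Layer project-specific dependencies on top of the base stack
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the custom training or inference code
COPY . .
CMD ["python", "train.py"]
```

The same image then runs unchanged on any host with the NVIDIA container toolkit, which is what makes the pipeline reproducible.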

 

Putting It All Together

 
The real power lies in composing these components. Below is a basic docker-compose.yml file that defines an agent application with a local LLM, a tool server, and the ability to offload heavy processing.

services:
  # Our custom agent application
  agent-app:
    build: ./app
    depends_on:
      - model-server
      - tools-server
    environment:
      LLM_ENDPOINT: http://model-server:8080
      TOOLS_ENDPOINT: http://tools-server:8081

  # A local LLM service powered by Docker Model Runner
  model-server:
    image: ai/smollm2:latest # Uses a DMR-compatible image
    platform: linux/amd64
    # Deploy configuration may instruct Docker to offload this service
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  # An MCP server providing tools (e.g. web search, calculator)
  tools-server:
    image: mcp/server-search:latest
    environment:
      SEARCH_API_KEY: ${SEARCH_API_KEY}

# Define the LLM model as a top-level resource (requires Docker Compose v2.38+)
models:
  smollm2:
    model: ai/smollm2
    context_size: 4096

 

This example illustrates how the services are connected.

 

Note: The exact syntax for offload and model definitions is evolving. Always check the latest Docker AI documentation for implementation details.

 

Agentic systems demand more than clever prompts. They require reproducible environments, modular tool integration, scalable compute, and clear separation between components. Docker provides a cohesive way to treat each part of an agent system, from the large language model to the tool server, as a portable, composable unit.

By experimenting locally with Docker Model Runner, defining full stacks with Docker Compose, offloading heavy workloads to cloud GPUs, and integrating tools through standardized servers, you establish a repeatable infrastructure pattern for autonomous AI.

Whether you are building with LangChain or CrewAI, the underlying container strategy remains consistent. When infrastructure becomes declarative and portable, you can focus less on environment friction and more on designing intelligent behavior.
 
 

Shittu Olumide is a software engineer and technical writer passionate about leveraging cutting-edge technologies to craft compelling narratives, with a keen eye for detail and a knack for simplifying complex concepts. You can also find Shittu on Twitter.



Don't buy a new phone just for gaming, last-gen flagships still perform just as well



Robert Triggs / Android Authority

The latest flagship Qualcomm Snapdragon 8 Elite Gen 5 chipset is blazing fast in benchmarks, and with more smartphones equipped with the chip now hitting the market, we're starting to see whether those purported gains hold up in real-world workloads.

Looking specifically at graphics and gaming, Qualcomm claims 23% better graphics performance and up to 20% lower power consumption versus last year. Benchmark results back this up; this year's flagship phones fly past their predecessors, suggesting a truly next-gen experience even over models that are only a year old. However, some phones have proven quite hot under pressure, leaving many of us wondering if we're hitting the ceiling of graphics performance in a compact mobile form factor.

To see if next-gen performance is more than just numbers on theoretical tests, I grabbed the new Xiaomi 17 Ultra, complete with the 8 Elite Gen 5, and last year's 15 Ultra with the original Snapdragon 8 Elite for a within-brand comparison. I updated both phones to their latest software versions and started installing some of Android's most popular games.


Excessive-end gaming check


Robert Triggs / Android Authority

To put the phones through their paces, I stuck with a small selection of popular games that can still put handsets under some serious stress. We have COD Mobile's Battle Royale mode with medium graphics to unlock the 120fps potential ceiling, Asphalt Legends with maxed-out bells and whistles and a 120fps cap (up from 60fps on some older phones), and Genshin Impact at maximum settings with a lower 60fps limit but a wider open-world environment to stress the GPU.

Starting with COD Mobile, both phones can comfortably hit 120fps without issue. While there are occasional microstutters on both models, they only dip to 80fps at worst and are spaced out, so they're nothing to be concerned about. That said, I did witness sustained drops for several seconds on both models. This appears to be a background-, temperature-, or power-throttling feature, as it happens consistently once the phones reach 40°C. Thankfully, that's as hot as the phones get, and it takes a couple of back-to-back rounds to reach, so about 20-30 minutes of playtime. Both models are equally vulnerable to temperature throttling when running demanding workloads, but neither heats up particularly quickly in this game.

Even with 120fps caps, the difference between the Elite and Elite Gen 5 is small.

Genshin Impact also shows the two processors neck and neck, easily handling 60fps with graphics cranked up to maximum. Neither phone becomes too hot here either. Both peak at a reasonable 35°C, suggesting a relatively easy workload, meaning it's unlikely either would throttle after prolonged play.

It's not until we get to Asphalt Legends that we see a discrepancy. I triple-checked that I set both phones to 120fps in the settings, but the Xiaomi 15 Ultra is still capped at 60fps. This seems arbitrary, as the phone averages 60fps and could undoubtedly hit a higher frame rate if allowed to. That said, the lower fifth percentile of frames suggests the older chip is a little less capable here. It's commonplace for games to lock graphical features to certain chipsets, and sometimes this cuts both ways, with cutting-edge chips missing out initially. So I don't want to bash the Snapdragon 8 Elite too hard here, as this is unusual for flagship chips, but it's worth noting that limited options do happen.

Emulation performance test


Robert Triggs / Android Authority

With top-tier Android games producing very similar frame rates, I turned to emulation tests to try to pry them apart. I resorted to the demanding PlayStation 2 emulator NetherSX2, playing Need For Speed: Most Wanted at 3x native resolution, along with Dolphin and Mario Kart Wii at 4x native resolution, both using OpenGL (which is typically safer but a bit slower than Vulkan).

Drivers can be more of a factor here than in Android games, so that's worth keeping in mind before we dive in. However, it's been several months since the 8 Elite Gen 5 arrived on the scene, so it should be performing quite well by now. Or at least you'd think; I had to disable the multi-core speed-up option in Mario Kart Wii on the Xiaomi 17 Ultra to avoid graphics synchronization errors.

Future 8 Elite Gen 5 drivers might help with emulation, but performance is very close today.

Thankfully, that didn't appear to affect performance. Both phones come quite close to locking a rock-solid 60fps in both titles. The two have fairly strong fifth percentile frame rates as well, indicating little in the way of regular jank or dropped frames. However, looking at the full timeline, the Xiaomi 17 Ultra and its Snapdragon 8 Elite Gen 5 are marginally smoother, though I struggled to notice any meaningful difference during play.

One thing the 8 Elite Gen 5 has going for it is lower power consumption. It averaged 5.4W in NFS and 5.0W in Mario Kart, compared to 7.6W and 5.6W for the last-gen phone. There are some variables here, of course, but this is quite a large discrepancy, suggesting the newer phone has to work less hard to achieve the same frame rates. However, both phones also saw high power draw above 12W that coincided with very rare, short, but noticeable CPU spikes. It's not clear to me whether these are directly related to gaming, a background Xiaomi process, or something else altogether, so I'll avoid drawing too many conclusions here.

Should you buy a next-gen phone for gaming?


Tushar Mehta / Android Authority

Overall, then, there's very little performance difference between the Xiaomi 17 Ultra with its Snapdragon 8 Elite Gen 5 processor and last year's Xiaomi 15 Ultra with its Snapdragon 8 Elite when it comes to popular Android games and emulators. Granted, there are a few edge cases where some games support different features or settings, and some emulator options may need to be tweaked slightly, but overall, you'll see very similar frame rates on either model.

While few of us would consider upgrading our phones after only a single generation, hopefully this data helps confirm that you don't even need to jump to the latest handsets to get essentially equal performance, at least as far as gaming is concerned. You can often pick up last-generation handsets at a fraction of the price of their marginally newer counterparts and still benefit from blisteringly fast, stable frame rates every bit as good as today's best, and most expensive. It just reinforces my belief that smartphones don't need more power; they need cheaper chips.

Real-world game tests are closer in performance than benchmarks suggest.

Of course, the Snapdragon 8 Elite Gen 5 gives you a little more future-proofing, ensuring you'll be playing upcoming games at high settings for years to come. There are often other upgrades on offer, such as improved cameras and new AI features, that can still make buying the best worthwhile. But if you're in the market for a handset that performs brilliantly and can handle the latest games, well, there's not a huge amount of difference between this gen and last gen.


