Symbiotic bacteria live inside specialized organs called bacteriomes inside insects. This image shows a cross-section of the planthopper Callodictya krueperi, with fluorescent probes labelling three microbes: Vidania (purple), Sodalis (yellow) and Sulcia (green)
Courtesy Anna Michalik et al
Symbiotic bacteria living inside insect cells have the smallest genomes known for any organism. The findings further muddy the distinction between cellular organelles like mitochondria and the most barebones microbes in nature.
“Exactly where this highly integrated symbiont ends and an organelle begins, I think it’s very difficult to say,” says Piotr Łukasik at Jagiellonian University in Kraków, Poland. “This is a very blurred boundary.”
Planthoppers are insects that subsist entirely on plant sap, and they supplement their nutrition thanks to an ancient relationship with symbiotic bacteria. Over many millions of years, these microbes evolved to live inside specialized cells in the planthoppers’ abdomens, producing nutrients that the planthoppers can’t get from their sugary diet. Many of these bacteria are completely dependent on their hosts and have let their genetic toolkits deteriorate to a fraction of their ancestral size.
Łukasik and his colleagues were interested in the evolution of this bacteria-insect relationship and just how small these bacterial genomes could get. The team sampled 149 individual insects across 19 planthopper families, extracting DNA from the insects’ abdominal tissues. The researchers sequenced and analysed the DNA, reconstructing the genomes of the symbiotic bacteria Vidania and Sulcia.
The bacterial genomes were exceptionally tiny. Genome size can be measured in numbers of base pairs, the sequence of paired “letters” in the genetic code. The bacterial genomes were less than 181,000 base pairs long. For comparison, the human genome is billions of base pairs long.
Some of the Vidania genomes were just 50,000 base pairs long, the smallest known for any life form. Previously, the smallest was Nasuia, a symbiotic bacterium hosted by planthopper relatives called leafhoppers, measuring just over 100,000 base pairs.
At 50,000 base pairs, the Vidania genomes are on the scale of those found in viruses, which are not considered to be alive: the virus behind covid-19 has a genome around 30,000 base pairs long, for instance. Some of the Vidania have only about 60 protein-coding genes, among the lowest counts on record.
Planthoppers rely on symbiotic bacteria to supplement their specialized diets
Courtesy Anna Michalik et al
The bacteria have been evolving with their insect hosts for about 263 million years, independently evolving extremely small genome sizes within two different groups of planthoppers. One of the few things these bacteria do is produce the amino acid phenylalanine, which is a chemical precursor for making and strengthening insect exoskeletons.
Łukasik and his team think that the massive loss of genes might happen when the insects eat new foods containing nutrients that used to be supplied by the bacteria, or when additional microbes move in and take over those roles.
The highly reduced bacteria are reminiscent of mitochondria and chloroplasts, the energy-producing organelles inside animal and plant cells that descended from ancient bacteria. The symbiotic bacteria similarly live within the host cells and are passed down between generations.
“‘Organelle’ is just a word, so it’s fine with me to call these organelles if someone wants to include these in the definition,” says Nancy Moran at the University of Texas at Austin, who was not involved with the research. “But there remain differences from mitochondria or chloroplasts.”
Mitochondria are much older, having arisen 1.5 billion years ago or more, and their genomes are smaller still, at about 15,000 base pairs.
“These symbionts live only in specialized host cells, not in most cells throughout the organism, as seen in mitochondria and chloroplasts,” says Moran.
Łukasik sees these bacteria and mitochondria as simply being at different places on an evolutionary “gradient of dependence” on their hosts. He suspects even tinier symbiont genomes have yet to be discovered.
Building cohesive and unified customer intelligence across your organization starts with reducing the friction your sales representatives face when toggling between Salesforce, support tickets, and Amazon Redshift. A sales representative preparing for a customer meeting might spend hours clicking through multiple dashboards (product recommendations, engagement metrics, revenue analytics, and so on) before forming a complete picture of the customer’s situation. At AWS, our sales organization experienced this firsthand as we scaled globally. We needed a way to unify siloed customer data across metrics databases, document repositories, and external industry sources, without building complex custom orchestration infrastructure.
We built the Customer Agent & Knowledge Engine (CAKE), a customer-centric chat agent, using Amazon Bedrock AgentCore to solve this challenge. CAKE coordinates specialized retriever tools (querying knowledge graphs in Amazon Neptune, metrics in Amazon DynamoDB, documents in Amazon OpenSearch Service, and external market data using a web search API) along with security enforcement using a Row-Level Security (RLS) tool, delivering customer insights through natural language queries in under 10 seconds (as observed in agent load tests).
In this post, we demonstrate how you can build unified intelligence systems using Amazon Bedrock AgentCore through our real-world implementation of CAKE. You can build custom agents that unlock the following features and benefits:
Coordination of specialized tools through dynamic intent analysis and parallel execution
Integration of purpose-built data stores (Neptune, DynamoDB, OpenSearch Service) with parallel orchestration
Implementation of row-level security and governance within workflows
Production engineering practices for reliability, including template-based reporting to adhere to business semantics and style
Performance optimization through model flexibility
These architectural patterns can help you accelerate development for different use cases, including customer intelligence systems, enterprise AI assistants, or multi-agent systems that coordinate across different data sources.
As sales organizations scale globally, they often face three critical challenges: fragmented data across specialized tools (product recommendations, engagement dashboards, revenue analytics, and so on) that requires hours to gather comprehensive customer views; a lack of business semantics in traditional databases, which can’t capture the semantic relationships explaining why metrics matter; and manual consolidation processes that can’t scale with growing data volumes. You need a unified system that can aggregate customer data, understand semantic relationships, and reason through customer needs in business context, making CAKE an essential linchpin for enterprises everywhere.
Solution overview
CAKE is a customer-centric chat agent that transforms fragmented data into unified, actionable intelligence. By consolidating internal and external data sources and tables into a single conversational endpoint, CAKE delivers personalized customer insights powered by context-rich knowledge graphs, all in under 10 seconds. Unlike traditional tools that merely report numbers, the semantic foundation of CAKE captures the meaning and relationships between business metrics, customer behaviors, industry dynamics, and strategic contexts. This allows CAKE to explain not just what is happening with a customer, but why it is happening and how to act.
Amazon Bedrock AgentCore provides the runtime infrastructure that multi-agent AI systems require as a managed service, including inter-agent communication, parallel execution, conversation state tracking, and tool routing. This helps teams focus on defining agent behaviors and business logic rather than implementing distributed systems infrastructure.
For CAKE, we built a custom agent on Amazon Bedrock AgentCore that coordinates five specialized tools, each optimized for a different data access pattern:
Neptune retriever tool for graph relationship queries
DynamoDB agent for fast metric lookups
OpenSearch retriever tool for semantic document search
Web search tool for external industry intelligence
Row-level security (RLS) tool for security enforcement
The following diagram shows how Amazon Bedrock AgentCore supports the orchestration of these components.
The solution flows through several key stages in response to a question (for example, “What are the top expansion opportunities for this customer?”):
Analyzes intent and routes the query – The supervisor agent, running on Amazon Bedrock AgentCore, analyzes the natural language query to determine its intent. The question requires customer understanding, relationship data, usage metrics, and strategic insights. The agent’s tool-calling logic, using Amazon Bedrock AgentCore Runtime, identifies which specialized tools to activate.
Dispatches tools in parallel – Rather than executing tool calls sequentially, the orchestration layer dispatches multiple retriever tools in parallel, using the scalable execution environment of Amazon Bedrock AgentCore Runtime (a rough sketch follows this list). The agent manages the execution lifecycle, handling timeouts, retries, and error conditions automatically.
Synthesizes multiple results – As specialized tools return results, Amazon Bedrock AgentCore streams these partial responses to the supervisor agent, which synthesizes them into a coherent answer. The agent reasons about how different data sources relate to each other, identifies patterns, and generates insights that span multiple data domains.
Enforces security boundaries – Before data retrieval begins, the agent invokes the RLS tool to deterministically enforce user permissions. The custom agent then verifies that subsequent tool calls respect these security boundaries, automatically filtering results and helping prevent unauthorized data access. This security layer operates at the infrastructure level, reducing the risk of implementation errors.
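To make the flow concrete, here is a minimal sketch of the dispatch-and-synthesize pattern in plain Python. The retriever functions, their names, and the synthesis step are illustrative placeholders rather than the CAKE implementation, which relies on Amazon Bedrock AgentCore Runtime for this coordination.

# Minimal sketch of parallel tool dispatch and synthesis (hypothetical names,
# not the CAKE implementation). Each retriever is an async callable that
# returns a partial result; the supervisor gathers them and builds one answer.
import asyncio

async def query_neptune(question: str) -> dict:
    # Placeholder for the graph relationship retriever
    return {"source": "neptune", "facts": ["..."]}

async def query_dynamodb(question: str) -> dict:
    # Placeholder for the precomputed metrics retriever
    return {"source": "dynamodb", "metrics": {"health_score": 82}}

async def query_opensearch(question: str) -> dict:
    # Placeholder for the semantic document retriever
    return {"source": "opensearch", "passages": ["..."]}

async def answer(question: str) -> str:
    # Dispatch every selected tool concurrently instead of sequentially
    results = await asyncio.gather(
        query_neptune(question),
        query_dynamodb(question),
        query_opensearch(question),
        return_exceptions=True,  # a failed tool should not sink the whole answer
    )
    partials = [r for r in results if not isinstance(r, Exception)]
    # In the real system an LLM call synthesizes these partial responses;
    # here we simply join them to show the data flow.
    return f"Synthesized from {len(partials)} sources: {partials}"

if __name__ == "__main__":
    print(asyncio.run(answer("Top expansion opportunities for customer X?")))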
This architecture operates on two parallel tracks: Amazon Bedrock AgentCore provides the runtime for the real-time serving layer that responds to user queries with minimal latency, and an offline data pipeline periodically refreshes the underlying data stores from the analytical data warehouse. In the following sections, we discuss the agent framework design and core solution components, including the knowledge graph, data stores, and data pipeline.
Agent framework design
Our multi-agent system is built on the AWS Strands Agents framework, which provides a model-driven foundation for building agents with many different models and delivers structured reasoning capabilities while maintaining the enterprise controls required for regulatory compliance and predictable performance. The supervisor agent analyzes incoming questions to intelligently select which specialized agents and tools to invoke and how to decompose user queries. The framework exposes agent states and outputs to implement decentralized evaluation at both the agent and supervisor levels. Building on this model-driven approach, we implement agentic reasoning through GraphRAG reasoning chains that construct deterministic inference paths by traversing knowledge relationships. Our agents perform autonomous reasoning within their specialized domains, grounded on predefined ontologies, while maintaining the predictable, auditable behavior patterns required for enterprise applications.
The supervisor agent employs a multi-phase decision protocol (a brief sketch follows this list):
Question analysis – Parse and understand user intent
Source selection – Intelligent routing determines which combination of tools is needed
Query decomposition – Original questions are broken down into specialized sub-questions optimized for each selected tool
Parallel execution – Selected tools execute concurrently through serverless AWS Lambda action groups
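The following sketch shows what a routing plan produced by such a protocol could look like. The keyword-based routing and the plan structure are simplifying assumptions (the production supervisor uses an LLM for intent analysis), so treat it as an illustration of the data flow only; the execution of the selected tools is shown in the earlier sketch.

# Illustrative sketch of the multi-phase decision protocol (assumed names and
# naive keyword routing; not the production logic).
from dataclasses import dataclass, field

@dataclass
class RoutingPlan:
    intent: str
    tools: list = field(default_factory=list)          # source selection
    sub_questions: dict = field(default_factory=dict)  # query decomposition

def plan_query(question: str) -> RoutingPlan:
    q = question.lower()
    plan = RoutingPlan(intent="customer_insight")
    if "relationship" in q or "connected" in q:
        plan.tools.append("neptune_retriever")
        plan.sub_questions["neptune_retriever"] = "Which entities link these accounts?"
    if "usage" in q or "metric" in q:
        plan.tools.append("dynamodb_agent")
        plan.sub_questions["dynamodb_agent"] = "Summarize the latest usage metrics."
    if "market" in q or "industry" in q:
        plan.tools.append("web_search_tool")
        plan.sub_questions["web_search_tool"] = "What are the current industry trends?"
    return plan  # each selected tool then runs concurrently

print(plan_query("How do usage metrics compare with industry trends?"))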
Tools are exposed through a hierarchical composition pattern (accounting for data modality, structured vs. unstructured) in which high-level agents and tools coordinate multiple specialized sub-tools:
Graph reasoning tool – Manages entity traversal, relationship analysis, and knowledge extraction
Customer insights agent – Coordinates multiple fine-tuned models in parallel to generate customer summaries from tables
Web research tool – Coordinates web and news retrieval
We extend the core AWS Strands Agents framework with enterprise-grade capabilities including customer access validation, token optimization, multi-hop LLM selection for model throttling resilience, and structured GraphRAG reasoning chains. These extensions deliver the autonomous decision-making capabilities of modern agentic systems while facilitating predictable performance and regulatory compliance alignment.
Building the knowledge graph foundation
CAKE’s knowledge graph in Neptune represents customer relationships, product usage patterns, and industry dynamics in a structured format that lets AI agents perform efficient reasoning. Unlike traditional databases that store information in isolation, CAKE’s knowledge graph captures the semantic meaning of business entities and their relationships.
Graph construction and entity modeling
We designed the knowledge graph around the AWS sales ontology, the core entities and relationships that sales teams discuss every day:
Customer entities – With properties extracted from data sources, including industry classifications, revenue metrics, cloud adoption phase, and engagement scores
Product entities – Representing AWS services, with connections to use cases, industry applications, and customer adoption patterns
Solution entities – Linking products to business outcomes and strategic initiatives
Opportunity entities – Tracking the sales pipeline, deal stages, and associated stakeholders
Amazon Neptune excels at answering questions that require understanding connections: finding how two entities are related, identifying paths between accounts, or discovering indirect relationships that span multiple hops. The offline data construction process runs scheduled queries against Redshift clusters to prepare the data to be loaded into the graph.
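As a rough illustration of the kind of multi-hop question Neptune is used for, the following openCypher query, submitted through the Neptune data API, finds customers in one industry that adopted a given service along with their open opportunities. The node labels, property names, and endpoint are assumptions, not the actual CAKE graph schema.

# Sketch of a multi-hop relationship query against Neptune using the
# neptunedata API (assumed graph labels/properties, placeholder endpoint).
import boto3

client = boto3.client(
    "neptunedata",
    endpoint_url="https://my-neptune-cluster:8182",  # placeholder endpoint
)

query = """
MATCH (c:Customer {industry: $industry})-[:ADOPTED]->(p:Product {name: $product}),
      (c)-[:HAS_OPPORTUNITY]->(o:Opportunity)
RETURN c.name AS customer, o.stage AS deal_stage
LIMIT 25
"""

response = client.execute_open_cypher_query(
    openCypherQuery=query,
    parameters='{"industry": "Healthcare", "product": "Amazon Redshift"}',
)
print(response["results"])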
Capturing relationship context
CAKE’s knowledge graph captures how relationships connect entities. When the graph connects a customer to a product through an increased-usage relationship, it also stores contextual attributes: the rate of increase, the business driver (from account plans), and related product adoption patterns. This contextual richness helps the LLM understand business context and provide explanations grounded in actual relationships rather than statistical correlation alone.
Purpose-built data stores
Rather than storing data in a single database, CAKE uses specialized data stores, each designed for how it will be queried. Our custom agent, running on Amazon Bedrock AgentCore, manages the coordination across these stores (sending queries to the right database, running them at the same time, and combining results) so that both users and developers work with what looks like a single data source:
Neptune for graph relationships – Neptune stores the web of connections between customers, accounts, stakeholders, and organizational entities. Neptune excels at multi-hop traversal queries that would require expensive joins in relational databases, such as finding relationship paths between disconnected accounts or discovering customers in an industry who have adopted specific AWS services. When Amazon Bedrock AgentCore identifies a query requiring relationship reasoning, it automatically routes to the Neptune retriever tool.
DynamoDB for fast metrics – DynamoDB operates as a key-value store for precomputed aggregations. Rather than computing customer health scores or engagement metrics on demand, the offline pipeline precomputes these values and stores them indexed by customer ID. DynamoDB then delivers sub-10 ms lookups, enabling instant report generation (a lookup sketch follows this list). Tool chaining in Amazon Bedrock AgentCore allows it to retrieve metrics from DynamoDB, pass them to the magnifAI agent (our custom table-to-text agent) for formatting, and return polished reports, all without custom integration code.
OpenSearch Service for semantic document search – OpenSearch Service stores unstructured content like account plans and field notes. Using embedding models, OpenSearch Service converts text into vector representations that support semantic matching. When Amazon Bedrock AgentCore receives a query about “digital transformation,” for example, it recognizes the need for semantic search and automatically routes to the OpenSearch Service retriever tool, which finds relevant passages even when documents use different terminology.
S3 for document storage – Amazon Simple Storage Service (Amazon S3) provides the foundation for OpenSearch Service. Account plans are stored as Parquet files in Amazon S3 before being indexed because the source warehouse (Amazon Redshift) has truncation limits that can cut off large documents. This multi-step process (Amazon S3 storage, embedding generation, OpenSearch Service indexing) preserves full content while maintaining the low latency required for real-time queries.
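For illustration, the DynamoDB lookup pattern mentioned above can be sketched as a single-key read; the table name and attribute names are assumptions, not the real CAKE schema.

# Sketch of a precomputed-metric lookup keyed by customer ID (assumed table
# and attribute names; the real schema is not shown in this post).
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("customer_metrics")  # hypothetical table name

def get_customer_metrics(customer_id: str) -> dict:
    # Single-item read on the partition key: no scans, no joins
    response = table.get_item(Key={"customer_id": customer_id})
    return response.get("Item", {})

print(get_customer_metrics("C-00042"))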
Building on Amazon Bedrock AgentCore makes these multi-database queries feel like a single, unified data source. When a query requires customer relationships from Neptune, metrics from DynamoDB, and document context from OpenSearch Service, our agent automatically dispatches requests to all three in parallel, manages their execution, and synthesizes their results into a single coherent response.
Data pipeline and continuous refresh
The CAKE offline data pipeline operates as a batch process that runs on a scheduled cadence to keep the serving layer synchronized with the latest business data. The pipeline architecture separates data construction from data serving, so the real-time query layer can maintain low latency while the batch pipeline handles computationally intensive aggregations and graph construction.
The data processing orchestration layer coordinates transformations across multiple target databases. For each database, the pipeline performs the following steps:
Extracts relevant data from Amazon Redshift using optimized queries
Applies business logic transformations specific to each data store’s requirements
Loads processed data into the target database with appropriate indexes and partitioning
For Neptune, this involves extracting entity data, establishing graph nodes and edges with property attributes, and loading the graph structure with semantic relationship types. For DynamoDB, the pipeline computes aggregations and metrics, structures data as key-value pairs optimized for customer ID lookups, and applies atomic updates to maintain consistency. For OpenSearch Service, the pipeline follows a specialized path: large documents are first exported from Amazon Redshift to Amazon S3 as Parquet files, then processed through embedding models to generate vector representations, which are finally loaded into the OpenSearch Service index with appropriate metadata for filtering and retrieval.
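A simplified sketch of that OpenSearch Service path is shown below; the bucket path, index name, embedding model, and field names are assumptions, and batching, authentication, and error handling are omitted.

# Rough sketch of the document indexing path: Parquet on S3 -> embeddings ->
# OpenSearch. Bucket, model ID, index, and field names are assumptions.
import json

import boto3
import pandas as pd
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime")
opensearch = OpenSearch(hosts=[{"host": "my-domain.example.com", "port": 443}],
                        use_ssl=True)  # placeholder domain

def embed(text: str) -> list:
    # Titan text embeddings; any embedding model could be substituted here
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

# Account plans previously exported from Redshift as Parquet (placeholder path)
docs = pd.read_parquet("s3://cake-documents/account_plans/")

for _, row in docs.iterrows():
    opensearch.index(
        index="account-plans",
        id=row["document_id"],
        body={
            "customer_id": row["customer_id"],
            "text": row["plan_text"],
            "embedding": embed(row["plan_text"]),  # vector field for k-NN search
        },
    )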
Engineering for production: Reliability and accuracy
When transitioning CAKE from prototype to production, we implemented several critical engineering practices to facilitate reliability, accuracy, and trust in AI-generated insights.
Model flexibility
The Amazon Bedrock AgentCore architecture decouples the orchestration layer from the underlying LLM, allowing flexible model selection. We implemented model hopping to provide automatic fallback to alternative models when throttling occurs. This resilience happens transparently within the AgentCore Runtime: detecting throttling conditions, routing requests to available models, and maintaining response quality without user-visible degradation.
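Conceptually, model hopping behaves like the following sketch; the model IDs and the retry order are illustrative, and in CAKE this logic lives inside the AgentCore Runtime rather than in application code.

# Sketch of "model hopping": try models in order and fall back on throttling.
# Model IDs and ordering are illustrative only.
import boto3

bedrock = boto3.client("bedrock-runtime")

MODEL_CANDIDATES = [
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "anthropic.claude-3-haiku-20240307-v1:0",
]

def converse_with_fallback(messages: list) -> str:
    last_error = None
    for model_id in MODEL_CANDIDATES:
        try:
            response = bedrock.converse(modelId=model_id, messages=messages)
            return response["output"]["message"]["content"][0]["text"]
        except bedrock.exceptions.ThrottlingException as err:
            last_error = err  # throttled: hop to the next candidate model
    raise last_error

reply = converse_with_fallback(
    [{"role": "user", "content": [{"text": "Summarize customer C-00042."}]}]
)
print(reply)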
Row-level security (RLS) and data governance
Before data retrieval occurs, the RLS tool enforces row-level security based on user identity and organizational hierarchy. This security layer operates transparently to users while maintaining strict data governance:
Sales representatives access only the customers assigned to their territories
Regional managers view aggregated data across their regions
Executives have broader visibility aligned with their responsibilities
The RLS tool routes queries to the appropriate data partitions and applies filters at the database query level, so security can be enforced in the data layer rather than relying on application-level filtering.
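As an illustration of pushing the filter into the query itself, the following sketch restricts a DynamoDB read to a user's territories; the table, key schema, and territory attribute are hypothetical.

# Sketch of pushing row-level security into the query itself (hypothetical
# table and key names). The caller never sees rows outside their territories.
import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("customer_metrics")  # hypothetical

def get_metrics_for_user(customer_id: str, allowed_territories: list) -> list:
    # The territory filter is part of the DynamoDB request, so unauthorized
    # rows are excluded by the data layer, not by post-hoc filtering in code.
    response = table.query(
        KeyConditionExpression=Key("customer_id").eq(customer_id),
        FilterExpression=Attr("territory").is_in(allowed_territories),
    )
    return response["Items"]

print(get_metrics_for_user("C-00042", allowed_territories=["EMEA-North"]))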
Results and impact
CAKE has transformed how AWS sales teams access and act on customer intelligence. By providing instant access to unified insights through natural language queries, CAKE reduces the time spent searching for information from hours to seconds (per surveys and feedback from users), helping sales representatives focus on strategic customer engagement rather than data gathering.
The multi-agent architecture delivers query responses in seconds for most queries, with the parallel execution model supporting simultaneous data retrieval from multiple sources. The knowledge graph enables sophisticated reasoning that goes beyond simple data aggregation: CAKE explains why trends occur, identifies patterns across seemingly unrelated data points, and generates recommendations grounded in business relationships. Perhaps most importantly, CAKE democratizes access to customer intelligence across the organization. Sales representatives, account managers, solutions architects, and executives interact with the same unified system, getting consistent customer insights while maintaining appropriate security and access controls.
Conclusion
In this post, we showed how Amazon Bedrock AgentCore supports CAKE’s multi-agent architecture. Building multi-agent AI systems traditionally requires significant infrastructure investment, including implementing custom agent coordination protocols, managing parallel execution frameworks, tracking conversation state, handling failure modes, and building security enforcement layers. Amazon Bedrock AgentCore reduces this undifferentiated heavy lifting by providing these capabilities as managed services within Amazon Bedrock.
Amazon Bedrock AgentCore provides the runtime infrastructure for orchestration, and specialized data stores excel at their specific access patterns. Neptune handles relationship traversal, DynamoDB provides instant metric lookups, and OpenSearch Service supports semantic document search, while our custom agent, built on Amazon Bedrock AgentCore, coordinates these components, automatically routing queries to the right tools, executing them in parallel, synthesizing their results, and maintaining security boundaries throughout the workflow. The CAKE experience demonstrates how Amazon Bedrock AgentCore can help teams build multi-agent AI systems, shrinking the effort from months of infrastructure development to weeks of business logic implementation. By providing orchestration infrastructure as a managed service, Amazon Bedrock AgentCore helps teams focus on domain expertise and customer value rather than building distributed systems infrastructure from scratch.
We extend our sincere gratitude to our executive sponsors and mentors whose vision and guidance made this initiative possible: Aizaz Manzar, Director of AWS Global Sales; Ali Imam, Head of Startup Segment; and Akhand Singh, Head of Data Engineering.
We also thank the dedicated team members whose technical expertise and contributions were instrumental in bringing this product to life: Aswin Palliyali Venugopalan, Software Dev Manager; Alok Singh, Senior Software Development Engineer; Muruga Manoj Gnanakrishnan, Principal Data Engineer; Sai Meka, Machine Learning Engineer; Bill Tran, Data Engineer; and Rui Li, Applied Scientist.
About the authors
Monica Jain is a Senior Technical Product Manager at AWS Global Sales and an analytics expert driving AI-powered sales intelligence at scale. She leads the development of generative AI and ML-powered data products (including knowledge graphs, AI-augmented analytics, natural language query systems, and recommendation engines) that improve seller productivity and decision-making. Her work enables AWS executives and sellers worldwide to access real-time insights and accelerate data-driven customer engagement and revenue growth.
M. Umar Javed is a Senior Applied Scientist at AWS, with over 8 years of experience across academia and industry and a PhD in ML theory. At AWS, he builds production-grade generative AI and machine learning solutions, with work spanning multi-agent LLM architectures, research on small language models, knowledge graphs, recommendation systems, reinforcement learning, and multi-modal deep learning. Prior to AWS, Umar contributed to ML research at NREL, CISCO, Oxford, and UCSD. He is a recipient of the ECEE Excellence Award (2021) and contributed to two Donald P. Eckman Awards (2021, 2023).
Damien Forthomme is a Senior Applied Scientist at AWS, leading a Data Science team in AWS Sales, Marketing, and Global Services (SMGS). With more than 10 years of experience and a PhD in Physics, he focuses on using and building advanced machine learning and generative AI tools to surface the right data to the right people at the right time. His work encompasses initiatives such as forecasting, recommendation systems, the creation of core foundational datasets, and building generative AI products that enhance sales productivity for the organization.
Mihir Gadgil is a Senior Data Engineer in AWS Sales, Marketing, and Global Services (SMGS), specializing in enterprise-scale data solutions and generative AI applications. With over 9 years of experience and a Master's in Information Technology & Management, he focuses on building robust data pipelines, complex data modeling, and ETL/ELT processes. His expertise drives business transformation through innovative data engineering solutions and advanced analytics capabilities.
Sujit Narapareddy, Head of Data & Analytics at AWS Global Sales, is a technology leader driving global business transformation. He leads data product and platform teams that power AWS's go-to-market through AI-augmented analytics and intelligent automation. With a proven track record in enterprise solutions, he has transformed sales productivity, data governance, and operational excellence. Previously at JPMorgan Chase Business Banking, he shaped next-generation FinTech capabilities through data innovation.
Norman Braddock, Senior Manager of AI Product Management at AWS, is a product leader driving the transformation of business intelligence through agentic AI. He leads the Analytics & Insights Product Management team within Sales, Marketing, and Global Services (SMGS), delivering products that bridge AI model performance with measurable business impact. With a background spanning procurement, manufacturing, and sales operations, he combines deep operational expertise with product innovation to shape the future of autonomous business management.
It's no secret that app developers seem to pay more attention to Apple platforms than Android or Windows. The list of iPhone-only apps that I long for on Android is short, but notable. If I had to narrow it down to just two iOS apps I would love to use on my Android devices, note-taking app Notability and travel app Flighty would be at the top of my list. It will get shorter in April, because Notability is finally getting an Android version.
Notability received a significant upgrade just last week that helped it inch closer to becoming a truly cross-platform notes app. It gained a web client, meaning you can access Notability notes on any device with a browser, including on Android phones. The web client supports every staple Notability feature, such as live recordings and transcripts, file uploads and editing, and markup tools. Using the Notability Cloud sync function, notes created in the iOS, iPadOS, or macOS apps will be accessible on the web client, and vice versa.
It's the closest thing to true Android support Notability has ever had, but a real Google Play Store version is coming soon. The company confirmed in a press release last week that Notability will reach Android beta testers in April 2026, providing an early look at the app's user interface and features. Crucially, the upcoming app isn't just an iOS clone; it will have Android-specific features, like lock screen note shortcuts.
I've tried every popular Android note-taking app, from Google Keep and Samsung Notes to Notion and Evernote, but none have matched the feature set and UI offered by Notability. That's why I'll be one of the first to try Notability on Android when it arrives this spring, and you should consider giving it a try, too.
I find basic apps like Google Keep and Samsung Notes to be far too limited for my needs, especially on tablets. They're fine for typing out a fleeting thought on your Android phone or maintaining a grocery list, but anything more than that will make you wish you used a fully featured notes app. On the other end of the spectrum are apps like Notion, which are too powerful and complicated for my use cases. I've always thought Notability struck a great balance between these two extremes.
I first bought Notability a decade ago, and it helped me get through my high school and college studies. Since then, I've used Notability to take notes during press conferences and interviews. Sadly, the one-time purchase option I used to buy Notability in the first place is no longer available. Notability now uses a subscription model, but it also added a free version for the first time as part of the switch.
An early look at the Notability for Android user interface. (Image credit: Notability)
Notability's best feature is audio recording, transcription, and sync. When handwriting or typing in Notability, you can record audio alongside the notes you take. The audio is synced to your actions in the Notability app. When you play back the audio recording, you can tap a word or drawing in your digital notebook and hear exactly what was said when you wrote it. This comes in handy when revisiting notes from meetings or lectures, as you can rehear what the speaker or professor said in the moment to refresh your memory.
Another worthy feature is shape and handwriting recognition, which is pretty self-explanatory. When you draw a shape, the Notability software can recognize it and snap it into perfect form. If you have terrible sketching skills or poor handwriting like me, this comes in handy. The app can also recognize your handwriting and convert it into text while offering full support for a stylus, mouse, and keyboard. There's document import support, which I use instead of other markup tools like Adobe Acrobat.
You can join the Notability for Android waitlist now
You can sign up for the beta version of Notability now on the app's website. The app will support all the aforementioned features while in beta, but more are on the way. Notability is teasing a general launch with "new AI-powered features via Notability Learn, notes export, support for the Notability Gallery, a phone-optimized version, and more advanced drawing tools."
If you love productivity apps or note-taking tools, it's worth signing up for the Notability for Android beta. If you want an early look at the experience, you can try the Notability web client for free today. It's one of the best iOS, iPadOS, and macOS apps ever made, and it's great that Android users will soon get to try it.
An international team of researchers, including scientists from the University of Wollongong (UOW), has uncovered strong evidence that shifting climate conditions contributed to the disappearance of Homo floresiensis, the small-bodied human species often referred to as the hobbits. The findings, published in Communications Earth & Environment, indicate that these early humans left Liang Bua, a cave they had occupied for roughly 140,000 years, during a prolonged drought that stretched across thousands of years.
To piece together what happened, researchers analyzed chemical signals preserved in cave stalagmites along with isotopic data from fossilized teeth belonging to a pygmy elephant species (Stegodon florensis insularis) that the hobbits hunted. The data point to a prolonged drying trend that began about 76,000 years ago and intensified into a severe drought between 61,000 and 55,000 years ago. That harsh interval aligns closely with the time Homo floresiensis vanished. Extended drought and rising competition for limited food and water likely pushed them out of Liang Bua and may have ultimately led to their extinction.
The findings underscore how powerful environmental shifts can be in determining whether a species survives or disappears. In this case, declining rainfall appears to have reshaped the ecosystem that sustained these ancient humans.
"The ecosystem around Liang Bua became dramatically drier around the time Homo floresiensis vanished," said UOW Honorary Professor Dr. Mike Gagan, the lead author of the study. "Summer rainfall fell and riverbeds became seasonally dry, placing stress on both hobbits and their prey."
Liang Bua Cave and the Hobbit Discovery
The new research builds on decades of work by UOW scientists studying Homo floresiensis, which was first uncovered in 2003 at Liang Bua on the Indonesian island of Flores. Nicknamed the hobbit because of its small stature, the species challenged long-standing ideas about human evolution. Although fossils show that Homo floresiensis disappeared around 50,000 years ago, exactly why they vanished has remained uncertain.
Drought, Water Scarcity, and Prey Collapse
Stalagmites, which grow over time from mineral deposits left by dripping water, act as natural records of past rainfall. By analyzing these formations, scientists reconstructed ancient climate patterns. At the same time, oxygen isotope analysis of fossil tooth enamel revealed that the pygmy elephants depended heavily on river water that became harder to find as conditions grew drier.
Around 61,000 years ago, the pygmy elephant population declined sharply. Because these animals were a key food source, their drop in numbers would have placed additional pressure on the hobbits.
"Surface freshwater, Stegodon and Homo floresiensis all decline at the same time, showing the compounding effects of ecological stress," UOW Honorary Fellow Dr. Gert van den Berg said. "Competition for dwindling water and food probably forced the hobbits to abandon Liang Bua."
Possible Encounters With Modern Humans
Fossils show that Homo floresiensis lived on Flores before the earliest confirmed presence of modern humans on the island. However, Homo sapiens were moving through the Indonesian archipelago around the same time the hobbits disappeared.
"It is possible that as the hobbits moved in search of water and prey, they encountered modern humans," Dr. Gagan said. "In that sense, climate change may have set the stage for their final disappearance."
I discuss a command that computes ordinary least-squares (OLS) results in Mata, paying particular attention to the structure of Stata programs that use Mata work functions.
This is the fifteenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
An OLS command with Mata computations
The Stata command myregress11 computes the results in Mata. The syntax of the myregress11 command is

    myregress11 depvar indepvars [if] [in] [, noconstant]

where indepvars can contain factor variables or time-series variables.
In the remainder of this post, I discuss the code for myregress11.ado. I recommend that you click on the file name to download the code. To avoid scrolling, view the code in the do-file editor, or your favorite text editor, to see the line numbers.
*! version 11.0.0 11Jan2016
program define myregress11, eclass sortpreserve
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, noCONStant ]
    marksample touse

    gettoken depvar indepvars : varlist
    _fv_check_depvar `depvar'

    fvexpand `indepvars'
    local cnames `r(varlist)'

    tempname b V N rank df_r

    mata: mywork("`depvar'", "`cnames'", "`touse'", "`constant'", ///
        "`b'", "`V'", "`N'", "`rank'", "`df_r'")

    if "`constant'" == "" {
        local cnames `cnames' _cons
    }

    matrix colnames `b' = `cnames'
    matrix colnames `V' = `cnames'
    matrix rownames `V' = `cnames'

    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `rank'
    ereturn scalar df_r    = `df_r'
    ereturn local  cmd     "myregress11"

    ereturn display

end

mata:

void mywork( string scalar depvar,  string scalar indepvars,
             string scalar touse,   string scalar constant,
             string scalar bname,   string scalar Vname,
             string scalar nname,   string scalar rname,
             string scalar dfrname)
{

    real vector y, b, e, e2
    real matrix X, XpXi
    real scalar n, k

    y    = st_data(., depvar, touse)
    X    = st_data(., indepvars, touse)
    n    = rows(X)

    if (constant == "") {
        X    = X,J(n,1,1)
    }

    XpXi = quadcross(X, X)
    XpXi = invsym(XpXi)
    b    = XpXi*quadcross(X, y)
    e    = y - X*b
    e2   = e:^2
    k    = cols(X) - diag0cnt(XpXi)
    V    = (quadsum(e2)/(n-k))*XpXi

    st_matrix(bname, b')
    st_matrix(Vname, V)
    st_numscalar(nname, n)
    st_numscalar(rname, k)
    st_numscalar(dfrname, n-k)

}

end
Let's break this 74-line program into familiar pieces to make it easier to understand. Lines 2–35 define the ado-command, and lines 37–74 define the Mata work function that is used by the ado-command. Although there are more details, I used this structure in mymean8.ado, which I discussed in Programming an estimation command in Stata: A first ado-command using Mata.
The ado-command has four parts.
Lines 5–14 parse what the user typed, identify the sample, and create temporary names for the results returned by our Mata work function.
Lines 16–17 call the Mata work function.
Lines 19–31 post the results returned by the Mata work function to e().
Line 33 displays the results.
The Mata function mywork() also has four parts.
Lines 39–43 parse the arguments.
Lines 46–48 declare vectors, matrices, and scalars that are local to mywork().
Lines 54–64 compute the results.
Lines 66–70 copy the computed results to Stata, using the names that were passed in as arguments.
Line 8 uses gettoken to store the name of the dependent variable in the local macro depvar and the names of the independent variables in the local macro indepvars. Line 9 uses _fv_check_depvar to check that the dependent variable is not a factor variable.
Line 11 uses fvexpand to expand the factor variables in indepvars. Line 12 puts the expanded names stored in r(varlist) by fvexpand into the local macro cnames. A single factor variable can imply more than one coefficient. fvexpand finds the canonical names for these coefficients and returns them in r(varlist). I have not used fvexpand until now because the Stata commands that I used to compute the results automatically created the coefficient names. Mata functions are designed for speed, so I must create the coefficient names when I use them.
Example 1 illustrates how one factor variable can imply more than one coefficient.
The tabulate results show that there are 5 levels in rep78. fvexpand finds the levels and creates a list of the names of the implied indicator variables 1b.rep78, 2.rep78, 3.rep78, 4.rep78, and 5.rep78. Comparing the results from summarize 2.rep78 and tabulate rep78 illustrates this notation. The b in 1b.rep78 identifies level 1 as the base category to be omitted when there is a constant in the model. Type help fvvarlist for more details.
Line 14 creates the temporary names for the results. For example, it stores a safe, temporary name in the local macro b that can be used for the matrix storing the point estimates. I discussed this usage in the section Using temporary names for global objects in Programming an estimation command in Stata: A first ado-command using Mata.
Lines 16 and 17 call the Mata function mywork(), which uses the information contained in the local macros depvar, cnames, touse, and constant to compute the results that are returned in the Stata objects whose names are stored in the local macros b, V, N, rank, and df_r.
Line 20 appends _cons to the local macro cnames if the user did not specify the noconstant option.
Lines 23–25 put column names on the vector of point estimates and row and column names on the matrix containing the estimated variance-covariance matrix of the estimator (VCE).
Lines 27–31 post the results to e().
Line 33 displays a standard Stata output table, using the results in e(b), e(V), and e(df_r).
Note that the local macro b created on line 14 contains a temporary name that is passed to mywork() on line 17 and that the Stata matrix whose name is contained in the local macro b is used on lines 23 and 27. mywork() puts the vector of point estimates in the Stata matrix whose name is stored in the local macro b. Also note that the local macro V created on line 14 contains a temporary name that is passed to mywork() on line 17 and that the Stata matrix whose name is contained in the local macro V is used on lines 24, 25, and 27. mywork() puts the estimated VCE in the Stata matrix whose name is stored in the local macro V.
To see how this works, let's discuss the mywork() function in detail. Lines 39–43 declare that mywork() returns nothing (it is void) and declare that mywork() accepts 9 arguments, each of which is a string scalar. The first four arguments are inputs; depvar contains the name of the dependent variable, indepvars contains the names of the independent variables, touse contains the name of the sample-identification variable, and constant contains either noconstant or nothing. The values of these arguments are used on lines 50, 51, and 54 to create the vector y and the matrix X.
The last five arguments contain names used to write the results back to Stata. mywork() writes the results back to Stata using the passed-in temporary names. For example, line 17 shows that the Mata string scalar bname contains the temporary name stored in the local macro b. Line 66 copies the results stored in the transpose of the Mata vector b to a Stata matrix whose name is stored in the Mata string scalar bname. (Line 60 shows that the vector b contains the OLS point estimates.) Lines 23 and 27 then use this Stata vector whose name is stored in the local macro b. Similarly, line 17 shows that the Mata string scalar Vname contains the temporary name stored in the local macro V. Line 67 copies the results stored in the Mata matrix V to a Stata matrix whose name is stored in the Mata string scalar Vname. (Line 64 shows that V contains the estimated VCE.) Lines 24, 25, and 27 then use this Stata matrix whose name is stored in the local macro V. The arguments nname, rname, and dfrname are used analogously to return the results for the number of observations, the rank of the VCE, and the degrees of freedom of the residuals.
Lines 50–64 compute the point estimates and the VCE. Apart from line 54, I discussed these computations in Programming an estimation command in Stata: A first ado-command using Mata. Line 54 causes a column of 1s to be joined to the covariate matrix X when the string scalar constant is empty. Lines 5 and 16 imply that the Mata string scalar constant contains noconstant when the user specifies the noconstant option and that it is empty otherwise.
Done and undone
I discussed the code for myregress11.ado, which uses Mata to compute OLS point estimates and a VCE that assumes independent and identically distributed observations. The structure of the code is the same as the one that I used in mymean7.ado and mymean8.ado, discussed in Programming an estimation command in Stata: A first ado-command using Mata, although there are more details in the OLS program.
Key to this structure is that the Mata work function accepts two types of arguments: the names of Stata objects that are inputs and temporary names that are used to write the results back to Stata from Mata.
In the next post, I extend myregress11.ado to allow for robust or cluster-robust estimators of the VCE.
A distribution chain is a goal-oriented network of processes and stock points that delivers finished goods to stores.
Imagine a luxury fashion retailer with a central distribution chain that delivers to stores worldwide (the United States, Asia-Pacific, and EMEA) from a warehouse located in France.
Distribution Chain of a Fashion Retailer from a system standpoint – (Image by Samir Saci)
When store 158, located at Nanjing West Road (Shanghai, China), needs 3 leather bags (reference AB-7478) by Friday, a distribution planner creates a replenishment order.
This order is sent to the warehouse for preparation and shipping.
From this point on, the distribution planner loses direct control.
All the steps from replenishment order creation to delivery at the store
The shipment's fate depends on a complex distribution chain involving IT, warehouse, and transportation teams.
However, if anything goes wrong, the planner is the one who has to explain why the store missed sales due to late deliveries.
Each step can be a source of delays.
Why were only 73% of shipments delivered on time last week?
If shipments miss a cut-off time, this may be due to late order transmission, an excessively long preparation time, or a truck that departed the warehouse too late.
Unfortunately, static dashboards are not always sufficient to find root causes!
Therefore, planners often analyse the data (manually, using Excel) to identify the root causes of each failure.
In my career, I have seen entire teams spend dozens of hours per week manually crunching data to answer basic questions.
The most complicated task in Supply Chain Management is dealing with people!
This is a crucial role because managers (transportation, warehouse, air freight) will always try to shift responsibility among themselves to cover their own teams.
Challenges faced by the distribution planners to find the root causes – (Image by Samir Saci)
Because root cause analysis is the first step in continuous improvement, we must develop a solution to support planners.
You will never solve operational problems if you cannot find the root causes.
Therefore, I wanted to experiment with how an AI agent can support distribution planning teams in understanding supply chain failures.
I will ask the AI agent to resolve real disputes between teams to determine whether one team is misinterpreting its own KPIs.
Example of a scenario where Claude can arbitrate between conflicting arguments – (Image by Samir Saci)
The idea is to use the reasoning capabilities of Claude models to identify issues from timestamps and Boolean flags alone and to answer natural-language questions.
We want the tool to answer open questions with data-driven insights, without hallucinations.
What is the responsibility of the warehouse teams in the overall performance?
These are actual questions that distribution planning managers must answer on a day-to-day basis.
This agentic workflow uses the Claude Opus 4.6 model, connected via an MCP server to a distribution-tracking database to answer our questions.
MCP implementation using Claude Opus 4.6 – (Image by Samir Saci)
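To make the setup concrete, here is a minimal sketch of what an MCP server exposing a distribution-tracking tool can look like with the Python MCP SDK (FastMCP). The tool, its docstring, and the returned fields are illustrative and not the LogiGreen implementation.

# Minimal MCP server sketch (illustrative tool; not the LogiGreen implementation).
# Claude Desktop connects to this server and reads the docstrings to decide
# when and how to call each tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("distribution-tracking")

@mcp.tool()
def get_shipment_flags(order_id: str) -> dict:
    """Return the timestamps and Boolean cut-off flags for one replenishment order.

    Flags: transmission_ontime, loading_ontime, airport_ontime,
    customs_ontime, delivery_ontime. True means the cut-off was met.
    """
    # Placeholder: in practice this queries the distribution-tracking database
    return {
        "order_id": order_id,
        "transmission_ontime": True,
        "loading_ontime": False,   # packed after the 19:00 cut-off
        "airport_ontime": False,
        "customs_ontime": True,
        "delivery_ontime": False,
    }

if __name__ == "__main__":
    mcp.run()  # served over stdio for Claude Desktop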
I will use a real-world scenario to test the ability of the agent to support teams in conducting analyses beyond what static dashboards can provide:
Solve conflicts between teams (transportation vs. warehouse teams)
Understand the impact of cumulative delays
Assess the performance of each leg
Understand Logistics Performance Management
We are supporting a luxury fashion retail company with a central distribution warehouse in France, delivering to stores worldwide via road and air freight.
The International Distribution Chain of a Fashion Retailer
A team of supply planners manages store inventory and generates replenishment orders in the system.
Distribution chain: from order creation to store delivery – (Image by Samir Saci)
From this point, a cascade of steps follows until store delivery:
Replenishment orders are created in the ERP
Orders are transmitted to the Warehouse Management System (WMS)
Orders are prepared and packed by the warehouse team
Transportation teams organise everything from the pickup at the warehouse to the store delivery via road and air freight
In this chain, several teams are involved and interdependent.
Warehouse Operations – (CAD by Samir Saci)
Our warehouse team can start preparation only after orders are received in the system.
Their colleagues in the transportation team expect the shipments to be ready for loading when the truck arrives at the docks.
This creates a cascade of potential delays, especially considering cut-off times.
Key timestamps and cut-off times – (Image by Samir Saci)
Order reception: if an order is received after 18:00:00, it cannot be prepared until the next day (+24 hours of lead time)
Truck departure: if an order is not packed before 19:00:00, it cannot be loaded the same day (+24 hours of lead time)
Arrival at airport: if your shipment arrives after 00:30:00, it misses the flight (+24 hours of lead time)
Landing: if your flight lands after 20:00:00, you may wait an extra day for customs clearance (+24 hours of lead time)
Store delivery: if your trucks arrive after 16:30:00, your shipments cannot be received by the store teams (+24 hours of lead time)
If a team experiences delays, they will affect the rest of the chain and, ultimately, the lead time to deliver to the store.
Example of how delays at the airport can impact the rest of the distribution chain – (Image by Samir Saci)
Fortunately, we are tracking each step in the delivery process with timestamps from the ERP, WMS, and TMS.
Timestamps and lead times tracking shipments across the distribution chain – (Image by Samir Saci)
For each element of the distribution chain, we have:
The timestamp of the completion of the task. Example: we record the timestamp when the order is received in the Warehouse Management System (WMS) and is ready for preparation.
A target timing for the task completion
For the steps linked to a cut-off time, we generate a Boolean flag to verify whether the associated cut-off has been met (a small computation sketch follows below).
To learn more about how the Boolean flags are defined and what a cut-off is, you can check this tutorial.
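As an example of how these flags can be derived from the recorded timestamps, here is a small pandas sketch; the column names and the dataframe layout are assumptions, and cut-offs that span midnight (such as the 00:30:00 flight cut-off) would need date-aware handling.

# Sketch of deriving Boolean cut-off flags from raw timestamps with pandas.
# Column names and layout are assumptions; the cut-off values follow the list above.
import pandas as pd

shipments = pd.DataFrame({
    "order_id": ["A1", "A2"],
    "wms_reception": pd.to_datetime(["2024-12-18 17:45", "2024-12-18 18:20"]),
    "packing_done":  pd.to_datetime(["2024-12-18 18:55", "2024-12-19 10:05"]),
})

def before_cutoff(ts: pd.Series, cutoff: str) -> pd.Series:
    # True when the step was completed at or before the daily cut-off time
    return ts.dt.time <= pd.to_datetime(cutoff).time()

shipments["transmission_ontime"] = before_cutoff(shipments["wms_reception"], "18:00:00")
shipments["loading_ontime"] = before_cutoff(shipments["packing_done"], "19:00:00")

print(shipments)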
Problem Statement
Our distribution manager does not want to see his team manually crunching data to understand the root causes.
This shipment was prepared two hours late, so it was not packed on time and had to wait until the next day to be shipped from the warehouse.
This is a common issue I encountered while in charge of logistics performance management at an FMCG company.
I struggled to explain to decision-makers that static dashboards alone cannot account for failures in your distribution chain.
In an experiment at my startup, LogiGreen, we used Claude Desktop, connected via an MCP server to our distribution planning tool, to support distribution planners in their root-cause analyses.
And the results are quite interesting!
How Can AI Agents Analyse Supply Chain Failures?
Let us now see what data our AI agent has on hand and how it can use it to answer our operational questions.
We put ourselves in the shoes of our distribution planning manager using the agent for the first time.
P.S.: These scenarios come from actual situations I encountered when I was responsible for performance management for international supply chains.
Distribution Planning
We took one month of distribution operations:
11,365 orders created and delivered
From December 16th to January 16th
For the input data, we collected transactional data from the systems (ERP, WMS, and TMS) to gather timestamps and create flags.
A quick exploratory data analysis shows that some processes exceeded their maximum lead-time targets.
Impact of transmission and picking times on the loading lead time for a sample of 100 orders – (Image by Samir Saci)
In this sample of 100 shipments, we missed the loading cut-off time for at least six orders.
This means that the truck departed the warehouse en route to the airport without these shipments.
These issues likely affected the rest of the distribution chain.
What does our agent have on hand?
In addition to the lead times, we have our Boolean flags.
Example of Boolean flag variability: blue indicates that the shipment is late for this specific distribution step – (Image by Samir Saci)
These Booleans measure whether the shipments passed each process on time:
Transmission: Did the order arrive at the WMS before the cut-off time?
Loading: Were the pallets at the docks when the truck arrived for the pick-up?
Airport: Did the truck arrive on time, so that we did not miss the flight?
Customs clearance: Did the flight land before customs closed?
Delivery: Did we arrive at the store on time?
Overview of the supply efficiency for this evaluation – (Picture by Samir Saci)
For barely lower than 40% of shipments, not less than one boolean flag is ready to False.
This means a distribution failure, which can be attributable to a number of groups.
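For reference, here is a minimal sketch of how that share can be computed from a flags table; the columns and values are hypothetical, while the real analysis runs on the full 11,365 orders.

import pandas as pd

# Hypothetical Boolean flags for a handful of shipments (True = on time)
flags = pd.DataFrame({
    "transmission_on_time": [True, True, False, True],
    "loading_on_time":      [True, False, True, True],
    "airport_on_time":      [True, True, True, True],
    "customs_on_time":      [True, True, True, False],
    "delivery_on_time":     [True, False, True, False],
})

# A shipment is a distribution failure if at least one flag is False
failure = ~flags.all(axis=1)
print(f"Share of shipments with at least one failed step: {failure.mean():.0%}")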
Can our agent provide a clear and concise explanation that can be used to implement action plans?
Let us test it with complex questions.
Test 1: A distribution planner asked Claude about the flags
To familiarise herself with the tool, she began the conversation by asking the agent what it understood from the data available to it.
Definition of the Boolean flags according to Claude – (Image by Samir Saci)
This demonstrates that my MCP implementation, which uses docstrings to define the tools, behaves as we expect for the agent.
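The server code is not shown in this article, but to make the idea concrete, a tool definition with the Python MCP SDK's FastMCP helper would look roughly like the sketch below; the tool name, logic and numbers are illustrative only, and the agent relies on the docstring to understand when to call the tool.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("distribution-planning")

@mcp.tool()
def count_late_shipments(step: str) -> dict:
    """Return the number of shipments whose Boolean flag is False for a given
    distribution step (transmission, loading, airport, customs or delivery)."""
    # Illustrative placeholder values; the real tool would query the flags table
    late_counts = {"loading": 386, "delivery": 2084}
    return {"step": step, "late_shipments": late_counts.get(step, 0)}

if __name__ == "__main__":
    mcp.run()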
Test 2: Challenging its methodology
Then she asked the agent how we would use these flags to assess the distribution chain's performance.
Root Cause Analysis Methodology of the Agent – (Image by Samir Saci)
In this first interaction, we sense the potential of Claude Opus 4.6 to grasp the complexity of this exercise with the minimal information provided in the MCP implementation.
Testing the agent with real-world operational scenarios
I am now sufficiently confident to test the agent on real-world scenarios encountered by our distribution planning team.
They are responsible for the end-to-end performance of the distribution chain, which involves actors with divergent interests and priorities.
Challenges faced by the distribution planners – (Image by Samir Saci)
Let us see whether our agent can use timestamps and Boolean flags to identify the root causes and arbitrate potential conflicts.
All the potential failures that need to be explained by Claude – (Image by Samir Saci)
However, the real test is not whether the agent can read data.
The question is whether it can navigate the messy, political reality of distribution planning, where teams blame one another and dashboards may obscure the truth.
Let's start with a challenging scenario!
Scenario 1: challenging the local last-mile transportation team
According to the data, we have 2,084 shipments that only missed the final Boolean flag, Delivery OnTime.
The central team assumes this is due to the last-mile leg between the airport and the store, which is under the local team's responsibility.
For example, the central team in France is blaming local operations in China for late deliveries to Shanghai stores.
The local manager disagrees, pointing to delays at the airport and during customs clearance.
P.S.: This scenario is frequent in international supply chains with a central distribution platform (in France) and local teams overseas (in the Asia-Pacific, North America, and EMEA regions).
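Before handing the question to Claude, note how such a figure could be checked by hand; this is a sketch with hypothetical column and file names, consistent with the flag naming used earlier.

import pandas as pd

# One row per shipment, one Boolean column per distribution step (illustrative file)
flags = pd.read_csv("shipment_flags.csv")

upstream = ["transmission_on_time", "loading_on_time", "airport_on_time", "customs_on_time"]

# Shipments where every upstream step was on time but the final delivery was late
only_delivery_late = flags[flags[upstream].all(axis=1) & ~flags["delivery_on_time"]]
print(len(only_delivery_late))  # 2,084 on the dataset described in this article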
Let us ask Claude if it can find out who is right.
Initial nuance from the agent based on what has been extracted from the data – (Image by Samir Saci)
Claude Opus 4.6 demonstrates here exactly the behaviour that I expected from it.
The agent provides nuance by comparing the flag-based approach of static dashboards with an analysis of durations, thanks to the tools I equipped it with.
Analysis of variance for the last leg (Airport -> Store) under the responsibility of the local team – (Image by Samir Saci)
This states two things:
The local team's performance (i.e. Airport -> Store) is not worse than the upstream legs managed by the central team
Shipments leave the airport on time
This suggests that the problem lies between takeoff and last-mile store delivery.
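The duration analysis behind this conclusion essentially compares lead-time distributions per leg, along the lines of this sketch (the column and file names are hypothetical):

import pandas as pd

# Lead times in hours per shipment and per leg of the distribution chain (illustrative file)
leads = pd.read_csv("shipment_lead_times.csv")

legs = ["picking", "loading", "airfreight", "customs", "last_mile"]
summary = leads[legs].agg(["mean", "std", "max"]).T
summary["coef_of_variation"] = summary["std"] / summary["mean"]

# Legs with the highest variability are the most likely source of missed deliveries
print(summary.sort_values("coef_of_variation", ascending=False))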
Reminder of the overall distribution chain – (Image by Samir Saci)
This is exactly what Claude demonstrates below:
Demonstration of Air Freight's partial responsibility – (Image by Samir Saci)
The local team is not the only cause of late deliveries here.
However, they still account for a large share of late deliveries, as explained in Claude's conclusion.
Claude's conclusion – (Image by Samir Saci)
What did we learn here?
The local team in charge still needs to improve its operations, but it is not the only party contributing to the delays.
We need to discuss with the Air Freight team the variability of their lead times, which impacts overall performance even when they do not miss the cut-off times.
In Scenario 1, the agent navigated a disagreement between headquarters and a local team.
And it found that both sides had a point!
But what happens when a team's argument is based on a fundamental misunderstanding of how the KPIs work?
Scenario 2: a conflict between the warehouse and the central transportation teams
We have 386 delayed shipments where the only flag set to False is Loading OnTime.
The warehouse teams argue that these delays are due to the late arrival of trucks (i.e., orders prepared and ready on time were left waiting for truck loading).
Is that true? No, this claim stems from a misunderstanding of the definition of this flag.
Let us see if Claude can find the right words to explain that to our distribution planner.
Reminder of the overall distribution chain – (Image by Samir Saci)
Because we do not have a flag indicating whether the truck arrived on time (only a cut-off to determine whether it departed on time), there is some ambiguity.
Claude can help us clarify that.
Initial answer from Claude – (Image by Samir Saci)
For this question, Claude did exactly what I expected:
It used the tool to analyse the distribution of lead times per process (Transmission, Picking and Loading)
It explained the correct meaning of this flag to the distribution planner in the key insight paragraph
Now that the distribution planner knows the claim is incorrect, Claude can provide the right elements to respond to the warehouse team.
Correcting the statement with data – (Image by Samir Saci)
Unlike in the first scenario, the remark (or question) arises from a misunderstanding of the KPIs and flags.
Claude did a great job providing an answer that is ready to share with the warehouse operations team.
In Scenario 1, both teams were partially right. In Scenario 2, one team was simply wrong.
In both cases, the answer was buried in the data, not visible on any static dashboard.
What can we learn from these two scenarios?
Static dashboards will never settle these debates.
Even though they are a key part of Logistics Performance Management, as described in this article, they will never fully explain all late deliveries.
They show what happened, not why, and not who is actually accountable.
Example of static visuals deployed in a distribution planning report – (Image by Samir Saci)
Distribution planners know this. That's why they spend dozens of hours per week manually crunching data to answer questions their dashboards can't.
Rather than attempting to build a comprehensive dashboard that covers every scenario, we can focus on a minimal set of Boolean flags and calculated lead times to support custom analyses.
These analyses can then be delegated to an agent, such as Claude Opus 4.6, which will use its knowledge of the data and its reasoning skills to provide data-driven insights.
Visuals generated by Claude for top management – (Image by Samir Saci)
We can even use it to generate interactive visuals to convey a specific message.
In the visual above, the idea is to show that relying solely on Boolean flags may not fully reflect reality.
Flag-based attribution was probably the source of many conflicts.
All of these visuals were generated by a non-technical user who communicated with the agent using natural language.
This is AI-powered analysis-as-a-service for supply chain performance management.
Conclusion
Reflecting on this experiment, I expect that agentic workflows like this will replace an increasing number of reporting projects.
The benefit here is for the operational teams.
They no longer have to rely on business intelligence teams to build dashboards and reports to answer their questions.
"Can I export this Power BI dashboard to Excel?"
These are common questions you may encounter when building reporting solutions for supply chain operations teams.
That's because static dashboards will never answer all the questions planners have.
Example of a visual built by Claude to answer one of our planners' questions – (Image by Samir Saci)
With an agentic workflow like this, you empower them to build their own reporting tools.
The distribution planning use case focused on diagnosing past failures. But what about future decisions?
We applied the same agentic approach, using Claude connected via MCP to a FastAPI optimisation engine, to a very different problem: Sustainable Supply Chain Network Design.
Connecting Claude to a module of Sustainable Supply Chain Network Design – (Image by Samir Saci)
The goal was to support supply chain directors in redesigning the network within the context of the sustainability roadmap.
Where should we produce to minimise the environmental impact of our supply chain?
Our AI agent is used to run multiple network design scenarios to estimate the impact of key decisions (e.g., factory openings or closures, international outsourcing) on production costs and environmental footprint.
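The wiring is the same as for distribution planning: an MCP tool simply forwards the scenario parameters to the optimisation engine. A rough sketch, where the FastAPI route, payload and parameters are invented for illustration:

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("network-design")

@mcp.tool()
def run_network_scenario(closed_factories: list[str], outsourcing_share: float) -> dict:
    """Run one supply chain network design scenario and return its estimated
    production cost and CO2 footprint."""
    response = httpx.post(
        "http://localhost:8000/scenarios/run",  # hypothetical FastAPI endpoint
        json={"closed_factories": closed_factories, "outsourcing_share": outsourcing_share},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    mcp.run()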
Network Design Scenarios – (Image by Samir Saci)
The objective is to provide decision-makers with data-driven insights.
This was the first time I felt that I could be replaced by an AI.
Example of a trade-off analysis generated by Claude – (Image by Samir Saci)
The quality of this analysis is comparable to that produced by a senior consultant after weeks of work.
Claude produced it in seconds.
More details in this tutorial.
Do you want to learn more about distribution planning?
Why Is Lead Time Important?
Supply planners use inventory management rules to determine when to create replenishment orders.
Demand variability faced by retail stores
These rules account for demand variability and delivery lead time to determine the optimal reorder point that covers demand until goods are received.
Formula of the safety stock – (Image by Samir Saci)
This reorder point depends on the average demand over the lead time.
But we can adapt it based on the actual performance of the distribution chain, as the small example below illustrates.
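As a minimal illustration of that adaptation, the reorder point can be recomputed using the lead time actually observed from the timestamps rather than the contractual one; all numbers below are invented.

from math import sqrt
from statistics import NormalDist

avg_daily_demand = 120        # units per day (invented)
std_daily_demand = 35         # units per day (invented)
observed_lead_time = 6.5      # days, as measured from the distribution chain timestamps
service_level = 0.95

# Classic reorder point: demand over the lead time plus a safety stock
z = NormalDist().inv_cdf(service_level)
safety_stock = z * std_daily_demand * sqrt(observed_lead_time)
reorder_point = avg_daily_demand * observed_lead_time + safety_stock
print(round(reorder_point))   # roughly 927 units with these inputs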
For more details, see the complete tutorial.
About Me
Let’s join on LinkedIn and Twitter; I’m a Provide Chain Engineer utilizing information analytics to enhance logistics operations and cut back prices.
For consulting on analytics and sustainable provide chain transformation, be happy to contact me by way of Logigreen Consulting.
When you have any questions, you possibly can depart a remark in my app: Provide Science.
"With coding and math, you have clear-cut, correct answers that you can check," William Isaac, a research scientist at Google DeepMind, told me when I met him and Julia Haas, a fellow research scientist at the firm, for an exclusive preview of their work, which is published in Nature today. That's not the case for moral questions, which typically have a range of acceptable answers: "Morality is an important capability but hard to evaluate," says Isaac.
"In the moral domain, there's no right and wrong," adds Haas. "But it's not by any means a free-for-all. There are better answers and there are worse answers."
The researchers have identified several key challenges and suggested ways to address them. But it's more a wish list than a set of ready-made solutions. "They do a nice job of bringing together different perspectives," says Vera Demberg, who studies LLMs at Saarland University in Germany.
Better than "The Ethicist"
A number of studies have shown that LLMs can display remarkable moral competence. One study published last year found that people in the US scored ethical advice from OpenAI's GPT-4o as being more moral, trustworthy, thoughtful, and correct than advice given by the (human) writer of "The Ethicist," a popular New York Times advice column.
The problem is that it's hard to unpick whether such behaviors are a performance—mimicking a memorized response, say—or evidence that there's in fact some kind of moral reasoning going on inside the model. In other words, is it virtue or virtue signaling?
This question matters because several studies also show just how untrustworthy LLMs can be. For a start, models can be too eager to please. They have been found to flip their answer to a moral question and say the exact opposite when a person disagrees or pushes back on their first response. Worse, the answers an LLM gives to a question can change according to how it is presented or formatted. For example, researchers have found that models quizzed about political values may give different—sometimes opposite—answers depending on whether the questions offer multiple-choice answers or instruct the model to answer in its own words.
In an even more striking case, Demberg and her colleagues presented several LLMs, including versions of Meta's Llama 3 and Mistral, with a series of moral dilemmas and asked them to pick which of two options was the better outcome. The researchers found that the models often reversed their choice when the labels for those two options were changed from "Case 1" and "Case 2" to "(A)" and "(B)."
They also showed that models changed their answers in response to other tiny formatting tweaks, including swapping the order of the options and ending the question with a colon instead of a question mark.
This offer is available from Woot.com, an Amazon-owned deals website. There is a limit of two units per customer. The only caveat is that you won't get a full manufacturer's warranty, but Woot offers its own 90-day warranty.
What really sets the Philips S2108 Portable Bluetooth Speaker apart is its design. Not only does it look pretty awesome and fun, thanks to the built-in lighting and stylish look, but it is also super portable. The unit measures only 5.51 x 4.02 x 3.98 in and weighs just 1.04 lbs.
In terms of sound quality, you're getting quite a nice setup for the size (and price). It has a 5W RMS full-range driver and a passive radiator, delivering punchy bass.
There's a 1,800mAh battery packed inside, offering about seven hours of use on a full charge. You'll even get a microphone included, so you can also use it to make calls. Of course, it features Bluetooth for wireless connections, but you can use it as a standalone music player, thanks to a TF card reader.
For just $15, I'm struggling to find reasons not to get a Philips S2108 Portable Bluetooth Speaker. Go get yours while you can! Woot mentions the deal will be available for two more days or "until sold out".
Heavy-ion collisions at the Large Hadron Collider (LHC) have revealed the faintest trace of a wake left by a quark slicing through trillion-degree nuclear matter — hinting that the primordial soup of the universe may really have been more soup-like than we thought.
The new findings from the LHC's Compact Muon Solenoid (CMS) collaboration provide the first clear evidence of a subtle "dip" in particle production behind a high-energy quark as it traverses quark-gluon plasma — a droplet of primordial matter thought to have filled the universe microseconds after the Big Bang.
A study describing the results, published Dec. 25, 2025, in the journal Physics Letters B, provides a tantalizing look at the universe in its first moments.
A photo of the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider, which performed the new experiments. (Image credit: Hertzog, Samuel Joseph: CERN)
Re-creating early-universe conditions in the lab
When heavy atomic nuclei collide at near-light speed inside the LHC, they briefly melt into an exotic state known as quark-gluon plasma.
In this extreme environment, "the density and temperature is so high that the usual atomic structure is no longer maintained," Yi Chen, an assistant professor of physics at Vanderbilt University and a member of the CMS team, told Live Science via email. Instead, "all the nuclei are overlapping together and forming the so-called quark-gluon plasma, where quarks and gluons can move beyond the confines of the nuclei. They behave more like a liquid."
This plasma droplet is extraordinarily small — about 10^-14 meters across, or 10,000 times smaller than an atom — and vanishes almost instantly. Yet inside that fleeting droplet, quarks and gluons — the fundamental carriers of the strong nuclear force that holds atomic nuclei together — flow together in ways that resemble an ultrahot liquid more than a simple gas of particles.
Physicists want to understand how energetic particles interact with this strange medium. "In our study, we want to study how different things interact with the small droplet of liquid that is created in the collisions," Chen said. "For example, how would a high-energy quark traverse through this hot liquid?"
Theory predicts that the quark would leave a detectable wake in the plasma behind it, much as a boat cutting through water would. "We can have water pushed forward with the boat in the same direction, but we also expect a small dip in water level behind the boat, because water is pushed away," Chen said.
In practice, however, disentangling the "boat" from the "water" is far from simple. The plasma droplet is tiny, and the experimental resolution is limited. At the front of the quark's path, the quark and plasma interact intensely, making it difficult to tell which signals come from which. But behind the quark, the wake — if present — must be a property of the plasma itself.
"So we want to find this small dip on the back side," Chen said.
A clean probe with Z bosons
To isolate that wake, the team turned to a special partner particle: the Z boson, one of the carriers of the weak nuclear force — one of the four fundamental interactions, along with the electromagnetic, strong, and gravitational forces — responsible for certain atomic and subatomic decay processes. In certain collisions, a Z boson and a high-energy quark are produced together, recoiling in opposite directions.
An illustration of the aftermath of a high-energy collision that created a quark-gluon plasma at Brookhaven Lab's Relativistic Heavy Ion Collider. (Image credit: Brookhaven National Laboratory)
This is where the Z boson becomes crucial. "The Z bosons are responsible for the weak force, and as far as the plasma is concerned, Z just escapes and is gone from the picture," Chen said. Unlike quarks and gluons, Z bosons barely interact with the plasma. They leave the collision zone unscathed, providing a clean indicator of the quark's original direction and energy.
This setup allows physicists to focus on the quark as it plows through the plasma, without worrying that its partner particle has been distorted by the medium. In essence, the Z boson serves as a calibrated marker, making it easier to search for subtle changes in particle production behind the quark.
The CMS team measured correlations between Z bosons and hadrons — composite particles made of quarks — emerging from the collision. By analyzing how many hadrons appear in the "backward" direction relative to the quark's motion, they could search for the predicted wake.
A tiny but important signal
The result is subtle. "On average, in the back direction, we see there is a change of less than 1% in the amount of plasma," Chen said. "It's a very small effect (and partly why it took so long for people to prove it experimentally)."
Still, that less-than-1% suppression is precisely the kind of signature expected from a quark transferring energy and momentum to the plasma, leaving a depleted region in its wake. The team reports that this is the first time such a dip has been clearly detected in Z-tagged events.
The shape and depth of the dip encode information about the plasma's properties. Returning to her analogy, Chen noted that if water flows easily, a dip behind a boat fills in quickly. If it behaves more like honey, the depression lingers. "So studying how this dip looks … gives us information on the plasma itself, without the complication of the boat," she said.
Looking back at the early universe
The findings also have cosmological implications. The early universe, shortly after the Big Bang, is believed to have been filled with quark-gluon plasma before cooling into protons, neutrons and, eventually, atoms.
"This era is not directly observable through telescopes," Chen says. "The universe was opaque back then." Heavy-ion collisions provide "a tiny glimpse of how the universe behaved during this era," she added.
For now, the observed dip is "just the start," Chen concluded. "The exciting implication of this work is that it opens up a new avenue to gain more insight into the properties of the plasma. With more data collected, we can study this effect more precisely and learn more about the plasma in the near future."
This post is the fourth in a series of (probably) seven on population issues in the Pacific, re-generating the charts I used in a keynote speech before the November 2025 meeting of the Pacific Heads of Planning and Statistics in Wellington, New Zealand. The seven pieces of the puzzle are:
Today's post is all about creating this one eye-catching chart, comparing the number of people in a country with its diaspora—people who ethnically or otherwise identify with the country but live overseas:
As the chart notes, the diaspora numbers are an underestimate because I've only drawn on the New Zealand, Australian and USA censuses, and only partly at that. For example, I decided the number of Papua New Guineans living in New Zealand and the USA wasn't material, so they haven't been included. I'm confident this doesn't change the look of the chart, but clearly if I were trying to create the best possible comprehensive estimates I should include these.
It's a fairly dramatic story. We can see seven countries with more people living overseas than in the country itself: Niue, Pitcairn Islands, Cook Islands, Tokelau, Samoa, Tonga and Marshall Islands. Apart from Marshall Islands, these are all Polynesian. In fact, Tuvalu is the only Polynesian country in this collection that has more people living in-country than overseas (for now—this is likely to change now that Australia has agreed with Tuvalu on a regular annual intake of people via lottery).
Note that the three French territories (New Caledonia, Wallis and Futuna, and French Polynesia) and three American territories (American Samoa, Northern Mariana Islands and Guam) were excluded from the plot.
For the four small countries along the bottom row of the chart, the difference is particularly significant—a huge majority of their people are living overseas. From my last post we know that many of these are in Auckland. Pitcairn is the only one of these four that has more of its diaspora in Australia than New Zealand (there are Pitcairn-identifying people in the UK too, but not enough to make me systematically add the UK to my data in what was primarily a pragmatic and visual exercise—see comments above).
96% of Niueans, 90% of Cook Islanders and 59% of Marshall Islanders live overseas.
And for the four countries at the top of the chart—considerably bigger and distinctly poorer than most of the others, and three of them Melanesian—we see no significant diaspora, relative to the home population.
Here's the code that creates this bar chart. Note that the data here are typed in by hand (!!) from various sources—not something I'd usually do, and would never advocate except for these truly "small data" situations. I've checked it as thoroughly as I reasonably can, and the version I used in my talk that I'm adapting here was also peer reviewed by a work colleague.
# This script draws some charts of the diaspora of Pacific island countries and territories.
# It is pretty rough and certainly incomplete. The approach was to use the census figures
# for resident population of Pacific islander ancestry currently living in USA, Australia
# and New Zealand; and compare that to populations residing in the countries themselves.
#
# All sorts of known limitations which we're prepared to live with for these crude comparisons:
# - different reference years (2025 for populations, and census years are 2018, 2020 and 2021)
# - populations living in the Pacific islands themselves are all ethnicities (e.g. will include
#   Australian-descent people residing in those countries), haven't bothered to limit to just "true" Tongans, Samoans, etc
# - not comprehensive e.g. I know there are some Pitcairn-descended people in UK but haven't included them. And of course
#   there must be many others of these people in countries other than Australia, NZ and USA
# - France not included at all. No ancestry data in French censuses so this would be difficult.
#
# Peter Ellis 2025-11

#---------------------Data prep-------------------------
library(tidyverse)
library(scales)     # for comma()
library(rsdmx)
library(ISOcodes)

# Current populations of PICTs:
pops <- rsdmx::readSDMX("https://stats-sdmx-disseminate.pacificdata.org/rest/data/SPC,DF_POP_PROJ,3.0/A..MIDYEARPOPEST._T._T?startPeriod=2025&endPeriod=2025&dimensionAtObservation=AllDimensions") |>
  as_tibble() |>
  left_join(select(ISOcodes::ISO_3166_1, Alpha_2, pict = Name), by = c("GEO_PICT" = "Alpha_2")) |>
  select(pict, pop = obsValue) |>
  drop_na()

# out of curiosity, what's the total population of all PICTs, Australia and NZ together? (about 47m):
picts_and_anz <- c(sum(pops$pop), 28.1e6, 5.3e6)
sum(picts_and_anz)

# https://tools.summaries.stats.govt.nz/ethnic-group/tongan and similar for 2023 NZ figures
# table builder for Australian 2021 figures - see `https://raw.githubusercontent.com/ellisp/blog-source/refs/heads/master/data/total%20by%20pacific.csv`
# Wikipedia for US figures, from 2020 census. Search for e.g. "Palauans in USA wikipedia"
diaspora <- tribble(
  ~pict,                             ~dest,          ~people,
  "Tonga",                           "New Zealand",  97824,
  "Niue",                            "New Zealand",  34944,
  "Tokelau",                         "New Zealand",  9822,
  "Cook Islands",                    "New Zealand",  94176,
  "Samoa",                           "New Zealand",  213069,
  "Tuvalu",                          "New Zealand",  6585,
  "Fiji",                            "New Zealand",  25038 + 23808, # includes Fijian Indian
  "Papua New Guinea",                "Australia",    22668,
  "Vanuatu",                         "Australia",    2380,
  "Solomon Islands",                 "Australia",    2704,
  "Kiribati",                        "Australia",    1263,
  "Fiji",                            "Australia",    48354,
  "Nauru",                           "Australia",    571,
  "Cook Islands",                    "Australia",    27494,
  "Tokelau",                         "Australia",    2544,
  "Tonga",                           "Australia",    43469,
  "Niue",                            "Australia",    6225,
  "Samoa",                           "Australia",    98022,
  "Tuvalu",                          "Australia",    995,
  "Pitcairn",                        "Australia",    1123,
  "Marshall Islands",                "USA",          52624, # 47300 if just 'alone'
  "Palau",                           "USA",          12202,
  "Micronesia, Federated States of", "USA",          21596
)
# Australia checked
# New Zealand checked
# USA checked

#--------------------------Bar chart------------------------
# data frame to inspect to get percentages
pops_with_prop <- pops |>
  inner_join(diaspora) |>
  mutate(pict = gsub("Federated States of", "Fed. \nSt.", pict)) |>
  group_by(pict, pop) |>
  summarise(Overseas = sum(people)) |>
  ungroup() |>
  mutate(prop = Overseas / (pop + Overseas)) |>
  mutate(pict = fct_reorder(pict, prop))

pops_with_prop |>
  select(-prop) |>
  rename(`Origin country` = pop) |>
  gather(variable, value, -pict) |>
  ggplot(aes(x = variable, y = value, fill = variable)) +
  geom_col(width = 0.8) +
  facet_wrap(~pict, scales = "free_y") +
  scale_y_continuous(label = comma) +
  scale_fill_manual(values = c("steelblue", "brown")) +
  theme(legend.position = "none",
        panel.spacing = unit(2, "lines"),
        plot.caption = element_text(colour = "grey50")) +
  labs(x = "", y = "Number of people",
       title = "Pacific Islander diaspora, arranged from lowest proportion overseas to highest",
       subtitle = "Diaspora is a lower bound of the full figure as it is based on just Australia, USA and New Zealand censuses.",
       caption = "Source: PDH.Stat for populations; Australian, USA and New Zealand Censuses for diaspora.")
An alternative (not so good) visualisation
I used the same data to also make this scatter plot:
But I don't much like it. It's difficult to interpret, and while it has a bit of extra information (which country the diaspora is in) this doesn't outweigh the interpretation problems. It probably shouldn't have a log scale as we really want to add up the numbers; but using a non-transformed scale makes it even more of a visual mess. I'm including it here really just for the record and to illustrate that the first attempt at visualising something isn't always the best (and sometimes, a humble bar chart ends up being what you want). Here's the code for the scatter plot:
library(ggrepel)   # for geom_text_repel()

pops |>
  inner_join(diaspora) |>
  ggplot(aes(x = pop, y = people, label = pict, colour = dest)) +
  geom_abline(slope = 1, intercept = 0, colour = "gray") +
  geom_point() +
  geom_text_repel(size = 2.5) +
  scale_x_log10(label = comma) +
  scale_y_log10(label = comma) +
  scale_colour_manual(values = c("blue", "black", "darkred")) +
  labs(x = "People living in origin country in 2025",
       y = "Diaspora overseas, most recent census",
       colour = "Diaspora country",
       title = "Pacific Island home population and diaspora in various countries")