Southern Africa is known for its rich record of prehistoric life, including dinosaurs. However, around 182 million years ago, massive volcanic eruptions spread lava across much of the inland Karoo Basin, where many dinosaurs once lived. After this event, the fossil record in the region becomes surprisingly quiet during the Jurassic Period (which lasted from 201 million to 145 million years ago).
Recent discoveries are beginning to change that picture. They show that dinosaurs continued to live in southern Africa long after these dramatic lava flows.
New Dinosaur Tracks on South Africa’s Coast
In 2025, scientists reported dinosaur tracks about 140 million years old on a remote stretch of shoreline in South Africa's Western Cape. These were the first tracks from that time period in the region (the Cretaceous, 145 million to 66 million years ago).
As ichnologists (scientists who study fossil tracks and traces), the team regularly works along the Western Cape coast near Knysna. Most of their research focuses on tracks preserved in coastal aeolianites (cemented sand dunes) that are between 50,000 and 400,000 years old.
During a visit in early 2025, they explored a small outcrop of rock formed in the early Cretaceous Period. It is the only nearby exposure of rock from that time, and much of it is submerged at high tide. The team hoped they might find a theropod (dinosaur) tooth like one discovered there by a 13-year-old boy in 2017.
Instead, they found something even more exciting. Linda Helm, a member of the group, spotted dinosaur tracks. A closer look revealed more than two dozen possible footprints.
A Tiny Site With Big Significance
The Brenton Formation exposure is very small, measuring no more than 40 meters long and 5 meters wide, with cliffs rising up to 5 meters above the shore. Finding dozens of tracks in such a limited area suggests that dinosaurs were fairly common in this region during the Cretaceous.
The researchers estimate the tracks are about 132 million years old. That makes them the youngest known dinosaur tracks in southern Africa (50 million years younger than the youngest tracks reported from the Karoo Basin). They also represent only the second known set of Cretaceous dinosaur tracks in South Africa, and the second from the Western Cape. Some tracks are preserved on flat rock surfaces, while others appear in cross section within the cliffs.
Southern Africa's Dinosaur Fossil Record
Southern Africa holds an extensive record of vertebrate tracks and traces from the Mesozoic Era (the "Age of Dinosaurs," from 252 million to 66 million years ago, a time span that includes the Jurassic), especially in the Karoo Basin, which is filled with thick layers of sedimentary rock.
However, later volcanic activity created the Drakensberg Group, covering many of these fossil-bearing layers with lava. Some dinosaurs may have briefly survived the initial eruptions, but they were likely among the last animals to live in the Karoo Basin at that time.
As the supercontinent Gondwana began to break apart near the end of the Jurassic Period and into the early Cretaceous Period, smaller basins formed in what are now the Western Cape and Eastern Cape. These areas contain limited deposits from the Cretaceous.
Body fossils from these deposits, mainly in the Eastern Cape, include a range of dinosaurs. Among them are the first dinosaur identified in the southern hemisphere, now known to be a stegosaur, along with sauropods, a coelurosaurian, and young iguanodontids.
In contrast, fossil remains from the Western Cape are rare. They include just a few isolated sauropod teeth, scattered bones likely from a sauropod, and two finds near Knysna: the theropod tooth discovered earlier and part of a tibia.
Now, attention is turning to footprints instead of bones.
Dinosaurs of Knysna
The newly discovered tracks lie in the modern intertidal zone, where they are covered by seawater at high tide twice a day.
The environment 132 million years ago would have looked very different from today's shoreline, estuary, and developed landscape. At that time, dinosaurs likely moved through tidal channels or along point bars (river beaches), surrounded by vegetation unlike anything in the area now.
The tracks appear to have been made by a mix of dinosaurs. These include theropods and possibly ornithopods (both of these kinds of dinosaur were bipedal, walking on two legs), as well as possible sauropods (huge dinosaurs with very long necks and very long tails that were quadrupedal, walking on four legs). Theropods were meat eaters, while ornithopods and sauropods were plant eaters.
Identifying the exact type of dinosaur from footprints alone can be difficult. Theropod and ornithopod tracks can look similar, and sauropod tracks, although larger, don't always show clear toe impressions.
Because of these challenges, the researchers chose not to "over-interpret" the trackmakers. Their study focuses on documenting the presence and abundance of dinosaur tracks from this time period in the Brenton Formation.
More Discoveries May Be Ahead
The presence of early Cretaceous dinosaur tracks in both the Robberg Formation and the Brenton Formation suggests that more sites may still be waiting to be found. Other non-marine Cretaceous rock exposures exist in the Western Cape and Eastern Cape.
Future systematic searches of these areas could reveal additional dinosaur bones, more tracks, and possibly traces of other ancient animals.
Mark G. Dixon and Fred van Berkel of the African Centre for Coastal Palaeoscience, Nelson Mandela University, contributed to this research.
If you pay close attention to the things I write… And I think you do, because you're reading this. Why else would you put yourself through my writing? I digress… If you pay close attention to the things I write, you may notice that I often write about my past experiences and how they shape my current thinking. That's nothing novel, by the way. Lots of people write about the past and how their lives made them who they are today. But I've been toying with a different kind of fiction lately.
No, you won't find that fiction here, or over on Medium or Substack. It's in a dark corner of the internet, under a pen name, and only those who have known me since birth will know what that pen name is. Well, those who have known me since birth and/or those who can read all the clues I leave around on social media.
Anyway, the kind of fiction I've been writing is about changing the past and the consequences for today. You see, this was a big plot hole in the Back to the Future movies, in my opinion. In the first movie, Marty McFly goes to the past and changes his parents' lives so much that they're a different family when he gets back. The bully from his parents' high school days is now serving his parents and even washing Marty's truck. His father, George McFly, is an accomplished author. Marty's mother is no longer the unhappy wife she appeared to be at the beginning of the movie.
But what about Marty? How did he change? Surely, the improved lives of his parents as he was growing up must have led to a different him eventually. I mean, Biff (the bully) became subservient. George became more assertive and successful. Lorraine is happier and healthier. That would have made for a Marty who is different in many ways if his parents changed that much. Or did nature have more to do with who he was throughout the series of movies than nurture did?
So my latest short stories have been based around my kid being a time traveler and having to go back in time to "put things right that once went wrong." But it wasn't that things went wrong in the first run of the universe. What happens is that a strange figure from my past goes back to try to undo what once went right, making it wrong, and my daughter has to put it right again.
And no, I'm not playing the old sci-fi trope of my daughter disappearing from existence if she doesn't change things. She still gets to exist in some form, and she comes to the realization that the Novikov self-consistency principle is a thing. Things will always happen the way they're supposed to, even if things in the past are changed. It's hard to explain. You'll have to read the stories.
All of this has led me to think about how we humans get hung up on the past and can't seem to let it go. Sure, we may deal with the past to the point that we're not drowning in dread over how things could have been had we not done certain things. Or we're reminded of how we hurt people when we didn't mean to, pushing them away. But those people and experiences still creep up and make us feel unpleasant feelings.
For some people, those things in the past come back and make them engage in self-harming behavior. I've seen it in my own family, and among my friends. And I've seen it take a life. Rough stuff.
For me, those feelings about the past manifest themselves more as a sense of asking "what if?" What if I don't "dodge those bullets"? And, to be honest, I don't even like thinking about it because so many things lead to not meeting my wife, to my daughter not being born. That's unthinkable to me, and I actually have nightmares about it. In some dreams, I have to make sure I do everything exactly the same as I did so I get to meet my wife and have her fall in love with me.
It wasn't easy the first time around, and it wouldn't be easy to do again. If I believed in luck, I'd say it was dumb luck just to get here now.
Aside from dealing with the past and accepting it (and/or having a full lobotomy), I don't know how we humans with our big brains can get away from the past. It has brought us to who we are, whether we like it or not. And, unlike the stories I write about my time-traveling daughter, there's nothing we can do about it.
Or is there?
I guess we can accept the past and move on. That's one thing we can do, and should do. And we can also learn from our past mistakes and not repeat them (instead, making whole new mistakes). And we can, of course, repair the things we broke as best we can.
That is, of course, if you care to repair them at all.
%21x is a Stata display format, just as are %f, %g, %9.2f, %td, and so on. You could put %21x on any variable in your dataset, but that's not its purpose. Rather, %21x is for use with Stata's display command by those wanting to better understand the accuracy of the calculations they make. We use %21x often in developing Stata.
%21x produces output that looks like this:
. display %21x 1
+1.0000000000000X+000
. display %21x 2
+1.0000000000000X+001
. display %21x 10
+1.4000000000000X+003
. display %21x sqrt(2)
+1.6a09e667f3bcdX+000
All right, I admit that the result is pretty unreadable to the uninitiated. The purpose of %21x is to show floating-point numbers exactly as the computer stores them and thinks about them. In %21x's defense, it is more readable than how the computer really records floating-point numbers, yet it loses none of the mathematical essence. Computers really record floating-point numbers as raw patterns of 64 bits.
By comparison, %21x is a model of clarity.
The above numbers are 8-byte floating point, also known as double precision, encoded in the binary64 IEEE 754-2008 little-endian format. Little endian means that the bytes are ordered, left to right, from least significant to most significant. Some computers store floating-point numbers in big-endian format (bytes ordered from most significant to least significant), and then the raw patterns look different.
Regardless of byte order, %21x produces the same output:
. display %21x 1
+1.0000000000000X+000
. display %21x 2
+1.0000000000000X+001
. display %21x 10
+1.4000000000000X+003
. display %21x sqrt(2)
+1.6a09e667f3bcdX+000
Binary computers store floating-point numbers as a number pair, (a, b); the desired number z is encoded as
z = a * 2^b
For example,
1 = 1.00 * 2^0
2 = 1.00 * 2^1
10 = 1.25 * 2^3
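The same (a, b) decomposition can be sketched outside Stata with Python's standard-library math.frexp, which returns the pair directly (frexp uses the convention 0.5 <= |a| < 1, so we rescale to match the 1 <= a < 2 form used here; the helper name is my own):

```python
import math

def mantissa_exponent(z):
    # math.frexp returns (m, e) with z = m * 2**e and 0.5 <= |m| < 1;
    # doubling m and decrementing e gives the IEEE-style 1 <= a < 2 form.
    m, e = math.frexp(z)
    return (m * 2, e - 1)

print(mantissa_exponent(1.0))   # (1.0, 0)
print(mantissa_exponent(2.0))   # (1.0, 1)
print(mantissa_exponent(10.0))  # (1.25, 3)
```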
The number pairs are encoded in those raw bit patterns.
I have written the components a and b in decimal, but for reasons that will become clear, we need to preserve the essential binaryness of the computer's number. We could write the numbers in binary, but they will be more readable if we represent them in base-16:
base-10    base-16 floating point
      1  = 1.00 * 2^0
      2  = 1.00 * 2^1
     10  = 1.40 * 2^3
"1.40?", you ask, looking at the last row, which indicates 1.40*2^3 for decimal 10.
The period in 1.40 is not a decimal point; it is a hexadecimal point. The first digit after the hexadecimal point counts 1/16ths, the next counts 1/(16^2) = 1/256ths, and so on. Thus, 1.40 hexadecimal equals 1 + 4*(1/16) + 0*(1/256) = 1.25 in decimal.
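You can check the hexadecimal-point arithmetic in Python, whose float.fromhex even accepts this notation directly:

```python
# Digits after a hexadecimal point count 1/16ths, 1/256ths, ...
# so 1.40 (hex) = 1 + 4/16 + 0/256 = 1.25 (decimal).
print(1 + 4 / 16 + 0 / 256)            # 1.25
# Python parses the notation itself: 1.40 hex times 2^3 is decimal 10.
print(float.fromhex('0x1.40p+3'))      # 10.0
```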
And that is how you read the %21x values +1.0000000000000X+000, +1.0000000000000X+001, and +1.4000000000000X+003. To wit,
base-10    base-16 floating point    %21x
      1  = 1.00 * 2^0  = +1.0000000000000X+000
      2  = 1.00 * 2^1  = +1.0000000000000X+001
     10  = 1.40 * 2^3  = +1.4000000000000X+003
The mantissa is shown to the left of the X and, to the right of the X, the exponent of the 2. %21x is nothing more than a binary variation of the %e format with which we are all familiar; for example, 12 = 1.20000e+01 = 1.2*10^1. It is such an obvious generalization that one would guess it has existed for a long time, so excuse me when I mention that we invented it at StataCorp. If I weren't so humble, I would emphasize that this human-readable way of representing binary floating-point numbers preserves nearly every aspect of the IEEE floating-point number. Being humble, I will merely note that 1.40x+003 is more readable than 4024000000000000.
Now that you know how to read %21x, let me show you how you might use it. %21x is particularly useful for examining precision issues.
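If you want to experiment with this representation outside Stata, Python's built-in float.hex() prints essentially the same decomposition: mantissa in hex to the left of the 'p', power-of-2 exponent to the right:

```python
import math

# float.hex() is Python's analogue of Stata's %21x.
print(float.hex(1.0))           # 0x1.0000000000000p+0
print(float.hex(2.0))           # 0x1.0000000000000p+1
print(float.hex(10.0))          # 0x1.4000000000000p+3
print(float.hex(math.sqrt(2)))  # 0x1.6a09e667f3bcdp+0
```

Compare the last line with Stata's +1.6a09e667f3bcdX+000; only the surface notation differs.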
For instance, the cube root of 8 is 2; 2*2*2 = 8. And yet, in Stata, 8^(1/3) is not equal to 2:
. display 8^(1/3)
2
. assert 8^(1/3) == 2
assertion is false
r(9);
. display %20.0g 8^(1/3)
1.99999999999999978
In %21x format, however, we can see that the error is only one bit:
. display %21x 8^(1/3)
+1.fffffffffffffX+000
. display %21x 2
+1.0000000000000X+001
I wish the answer had been +1.0000000000001X+001, because then the one-bit error would have been obvious to you. Instead, rather than being a bit too large, the true answer is a bit too small (one bit too small to be exact), so we end up with +1.fffffffffffffX+000.
One bit off means being off by 2^(-52), which is 2.220e-16, and which is the amount we saw when we displayed 8^(1/3) in base-10. So %21x did not reveal anything we could not have figured out in other ways. The nature of the error, however, is more obvious in %21x format than it is in a base-10 format.
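The same one-bit error can be inspected in Python with math.ulp; whether pow produces the identical last bit depends on the platform's C library, so treat the exact value as an assumption:

```python
import math

x = 8 ** (1 / 3)       # not exactly 2 in floating point
print(x == 2)          # False on typical platforms
print(2 - x)           # roughly 2.220446049250313e-16, i.e. 2**-52
print(math.ulp(1.0))   # 2.220446049250313e-16: one bit for values in [1, 2)
```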
On Statalist, the point sometimes comes up that 0.1, 0.2, …, 0.4, 0.6, …, 0.9, 0.11, 0.12, … have no exact representation in the binary base that computers use. That becomes obvious with %21x format:
. display %21x 0.1
+1.999999999999aX-004
. display %21x 0.2
+1.999999999999aX-003
. ...
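Python's float.hex() shows the same repeating pattern ending in the rounded digit a:

```python
# 0.1 and 0.2 have no exact binary representation; their hex forms
# expose the repeating 9s, just as %21x does.
print((0.1).hex())   # 0x1.999999999999ap-4
print((0.2).hex())   # 0x1.999999999999ap-3
```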
0.5 does have an exact representation, of course, as do all the negative powers of 2:
. display %21x 0.5 // 1/2
+1.0000000000000X-001
. display %21x 0.25 // 1/4
+1.0000000000000X-002
. display %21x 0.125 // 1/8
+1.0000000000000X-003
. display %21x 0.0625 // 1/16
+1.0000000000000X-004
. ...
Integers have exact representations, too:
. display %21x 1
+1.0000000000000X+000
. display %21x 2
+1.0000000000000X+001
. display %21x 3
+1.8000000000000X+001
. ...
. display %21x 10
+1.4000000000000X+003
. ...
. display %21x 10786204
+1.492b380000000X+017
. ...
%21x is a great way of becoming familiar with base-16 (equivalently, base-2), which is worth doing if you program base-16 (equivalently, base-2) computers.
Let me show you something useful that can be done with %21x.
A programmer at StataCorp has implemented a new statistical command. In four examples, the program produces the following results:
Do you see what I see? It is all those zeros. In randomly drawn problems, it would be unlikely for every result to end in zeros. What is likely is that the results were somehow rounded, and indeed they were. The rounding in this case was due to inadvertently using float (4-byte) precision. The programmer forgot to include a double in the ado-file.
And that is one way %21x is used.
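You can reproduce this diagnosis outside Stata: round a double through 4-byte precision (here with Python's struct, as an illustration of the idea rather than the actual ado-file bug) and look for the telltale trailing zeros in the hex form:

```python
import math
import struct

# Squeeze pi through 4-byte float precision and back to a double.
as_float = struct.unpack('<f', struct.pack('<f', math.pi))[0]
print(math.pi.hex())    # 0x1.921fb54442d18p+1
print(as_float.hex())   # 0x1.921fb60000000p+1  <- the run of zeros
```

The run of zeros at the end of the mantissa is the fingerprint of lost float precision.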
I am continually harping on programmers at StataCorp that if they are going to program binary computers, they need to think in binary. I go ballistic when I see a comparison coded as "if (abs(x-y)<1e-8) …" in an attempt to deal with numerical inaccuracy. What kind of number is 1e-8? Well, it is this kind of number:
. display %21x 1e-8
+1.5798ee2308c3aX-01b
Why put the computer to all that work, and exactly how many digits are you, the programmer, trying to ignore? Rather than 1e-8, why not use the "nice" numbers 7.451e-09 or 3.725e-09, which is to say, 1.0x-1b or 1.0x-1c? If you do that, then I can see exactly how many digits you are ignoring. If you code 1.0x-1b, I can see you are ignoring 1b=27 binary digits. If you code 1.0x-1c, I can see you are ignoring 1c=28 binary digits. Now, how many digits do you need to ignore? How inaccurate do you really think your calculation is? By the way, Stata understands numbers such as 1.0x-1b and 1.0x-1c as input, so you can type the precise number you want.
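The point is easy to verify in Python:

```python
# 1e-8 is a messy number in binary; a power-of-2 tolerance states
# exactly how many bits you are ignoring.
print((1e-8).hex())   # 0x1.5798ee2308c3ap-27  (-27 decimal = -1b hex)
tol = 2.0 ** -27      # the "nice" 1.0x-1b: exactly 27 bits ignored
print(tol)            # 7.450580596923828e-09
```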
As another example of thinking in binary, a StataCorp programmer once described a calculation he was making. At one point, the programmer needed to normalize a number in a particular way, and so calculated x/10^trunc(log10(x)) and held onto the 10^trunc(log10(x)) for denormalization later. Dividing by 10, 100, etc., may be easy for us humans, but it is not easy in binary, and it can result in small amounts of dreaded round-off error. And why even bother to calculate the log, which is an expensive operation? "Remember," I said, "how floating-point numbers are recorded on a computer: z = a*2^b, where 0 <= |a| < 2. Writing in C, it is easy to extract the components. In fact, isn't a number normalized to be between 0 and 2 even better for your purposes?" Yes, it turned out it was.
Even I sometimes forget to think in binary. Just last week I was working on a problem and Alan Riley suggested a solution. I thought a while. "Very clever," I said. "Recasting the problem in powers of 2 will get rid of the divide that caused half the problem. Even so, there's still the pesky subtraction." Alan looked at me, imitating a look I so often give others. "In binary," Alan patiently explained to me, "the difference you need is just the last 19 bits of the original number. Just mask out the other digits."
At this point, many of you may want to stop reading and go off and play with %21x. If you play with %21x long enough, you will eventually learn the relationship between numbers recorded as Stata floats and as Stata doubles, and you may discover something you believe to be an error. I will discuss that next week in my next blog posting.
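Alan's observation can be sketched with a plain integer in Python (an illustration of the bit-masking idea, not the actual StataCorp code):

```python
# Keeping the last 19 bits: the subtraction n - (n >> 19 << 19)
# equals a simple mask of the low 19 bits.
n = 10786204
low_bits = n & ((1 << 19) - 1)        # mask out everything above bit 18
print(low_bits)                       # 300444
print(low_bits == n - (n >> 19 << 19))  # True: the mask replaces the subtract
```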
If you want to learn agent engineering by doing instead of just reading about it, the best way is still to fork real repos, run them locally, and adapt them to your own use. That is where the real learning happens. I have hand-picked the best 10, projects that are both useful and widely recognized, so you can see how agent apps are being built today. So, let's get started.
# 1. OpenClaw
OpenClaw (~343k ⭐) is the one I'd point to first if you want to see what the next wave of personal AI assistants might look like. It's built as a personal assistant that runs on your own devices and connects to the tools people already use, like WhatsApp, Telegram, Slack, Discord, Signal, and iMessage. What makes it interesting is that it isn't just a simple chat demo. It feels like a real assistant product, with multi-channel support, voice features, and a broader ecosystem around skills and control. If you want a repo that feels close to a real agent system, this is a strong place to start.
# 2. OpenHands
OpenHands (~70k ⭐) is a great repo to fork if your main interest is coding agents. It's built around AI-driven development and now has a wider ecosystem around it, including cloud, docs, CLI, SDK, benchmarking, and integrations. That matters because you aren't just looking at one demo. You can study the core agent, check the interface, and also see how the team thinks about evaluation and deployment. If you want to build or customize a coding assistant, this is one of the most practical repos to learn from.
# 3. browser-use
browser-use (~85k ⭐) is one of the most useful projects if you want agents that can actually do things on the web. The idea is simple: it makes websites easier for AI agents to use, so they can handle browser-based tasks with less friction. That makes it easy to experiment with, since a lot of real agent work ends up in the browser anyway: form filling, research, navigation, and repetitive online tasks. It also has supporting repos and examples, which makes it easier to go from curiosity to something you can test in a real workflow.
# 4. DeerFlow
DeerFlow (~55k ⭐) is one of the more interesting projects if you want to understand long-horizon agent systems. It's an open-source super-agent harness that brings together sub-agents, memory, sandboxes, skills, and tools to research, code, and create across longer tasks. So, it isn't just wrapping tool calls. It's trying to manage the full structure around more complex agent behavior. If you want to see how modern agent systems are being built around memory, coordination, and extensibility, this is a very useful repo to fork.
# 5. CrewAI
CrewAI (~48k ⭐) is still one of the easiest repos to understand if you want multi-agent orchestration without too much complexity. It's a fast, flexible framework for multi-agent automation, and it's built independently rather than on top of LangChain. The mental model is simple, the setup is approachable, and the docs and examples are friendly enough for beginners. If you want a Python-first repo that you can fork and turn into something useful, CrewAI still deserves a spot near the top.
# 6. LangGraph
LangGraph (~28k ⭐) is the repo to study if you want to understand the engineering side of agents, not just the flashy demo side. LangChain describes it as a low-level orchestration framework for long-running, stateful, controllable agents. It pushes you to think in terms of graphs, state, control flow, and resilience. It's especially useful if you want to move beyond simple prompt-plus-tool-call systems and understand how more serious agent runtimes are put together. It may not feel as quick to pick up as some other repos, but it teaches a lot.
# 7. OpenAI Agents SDK
The OpenAI Agents SDK (~20k ⭐) is a good option if you want something lightweight but still modern. It's built as a compact framework for multi-agent workflows, and the docs present it as a production-ready path with a small set of useful building blocks. You get tools, handoffs, sessions, tracing, and realtime patterns without having to wade through a huge framework. If you like simple surfaces and direct control, this is one of the better starter repos to explore.
# 8. AutoGen
AutoGen (~56k ⭐) is still one of the most important repos in the multi-agent space. Microsoft frames it as a programming framework for agentic AI, and the docs go further into enterprise workflows, research collaboration, and distributed multi-agent applications. It belongs in this kind of list because there is so much to learn from it. The orchestration ideas, agent conversation patterns, and framework design are all worth studying. It may not be the simplest starting point for everyone, but it's still one of the most influential projects in the category.
# 9. GPT Researcher (~26k ⭐)
GPT Researcher is a great choice if you want to study a deep-research agent instead of a general framework. It's an autonomous agent for deep research using any large language model (LLM) provider, and its surrounding material shows how it handles multi-agent research and report generation. This gives you one clear workflow to study from start to finish. You can see planning, browsing, source gathering, synthesis, and reporting all in one place. If you want something concrete rather than abstract, this is one of the most forkable repos on the list.
# 10. Letta
Letta (~22k ⭐) stands out because it puts memory and state at the center of the agent design. The repo describes it as a platform for building stateful agents with advanced memory that can learn and improve over time. This is an important angle because a lot of agent repos focus mostly on orchestration. Letta widens the picture. It's a good repo to explore if you want agents that persist, remember, and evolve instead of starting fresh each time. For memory-focused agent work, it is one of the more interesting projects to fork today.
# Wrapping Up
All ten are worth cloning, but they teach different things once you actually run them and start changing the code. That's where the real learning begins.
Kanwal Mehreen is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She's also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.
About a week ago at TrailblazerDX, Salesforce's developer conference, Salesforce announced a headless architecture that puts agents in the driver's seat of applications and how people interact with them.
After reviewing the announcement, I was left with many questions, questions I am hearing from CIO friends as well: Should CIOs rethink their approach to their current application stack? Can purchase plans still include seat-based subscriptions? How does Salesforce intend to win and differentiate as apps increasingly become databases orchestrated by agents automating tasks on behalf of humans? And how does Salesforce prevent market disintermediation in an open agentic world?
With these questions in hand, I reached out to a senior Salesforce leader for a direct conversation. I was fortunate to get an hour with John Kucera, senior vice president of product management, who answered each question with impressive depth and clarity.
Moving from systems of record to systems of engagement
A few years ago, author and venture capitalist Geoffrey Moore announced that enterprise software was shifting from systems of record to systems of engagement: that is, from applications that store and process data to those that enable interaction and experience around that data. Moore clearly had a great crystal ball, but agentic AI is changing things even further.
My first question was on behalf of my son James, a sales manager: Is Salesforce's graphical user interface going away?
Kucera responded that there will always be some form of graphical UI. However, he added, "we believe that we're entering an agent-first world," and for that reason, "we're going to drive an agent-centric experience."
Differentiating in a headless world
Kucera framed Salesforce's open approach as a continuation of its heritage, not a departure. "We've always been open and we'll continue to be open. Being so is powerful. Salesforce is the number one API out there; we discovered this when we bought MuleSoft." Having worked at Boomi, I can confirm just how central connectivity to Salesforce is as an API.
When asked whether Salesforce risks becoming mere middleware or just a system of record, Kucera pushed back. Yes, Salesforce is a great system of record, but it is more than that. He implied that Salesforce's domain knowledge will remain a differentiator in an agentic world, as long as the company stays open in how it operates. "We're evolving this way because we're a customer company, and customers are going to want choice and the openness that we provide."
At that point, he slipped in a line I've heard from other vendors defending their core franchise: "It's hard to vibe-code our core asset." He followed it with, "It'll be hard to be as good as we're going to be at agents." We'll see.
How CIOs are rethinking the application stack
Many CIOs are slowing the extension of multiyear SaaS contracts, with some shifting to one-year terms. Against that backdrop, I asked whether this is changing how they manage their application portfolios. Kucera didn't dodge the question; he acknowledged that change is already underway. As agents take on more work, the traditional per-seat subscription model makes less sense. In response, he said Salesforce is rethinking how it prices and monetizes its offerings.
He emphasized the need for flexibility. While a transition is clearly coming, Kucera suggested it may not be as disruptive as the earlier shift from licenses to subscriptions. That said, he acknowledged the risk of near-term revenue pressure during the transition. For now, his guidance to CIOs is pragmatic: continue purchasing subscriptions while the market evolves.
How will operationalizing agents affect work?
Kucera is clear that a human transition is already underway. In core CRM areas like support, work is moving up the stack: humans are moving from Tier 1 to Tier 2 and Tier 3, while agents take on more frontline activity. He also argues that agents are beginning to reshape the sales process itself. Given that, I asked what the path forward looks like. His framing: a progression from steps, to tasks, to processes, and eventually to broader business capabilities.
He put it this manner: early generative AI use instances targeted on automating particular person steps — easy actions many individuals now carry out with an LLM. That shortly advanced into duties, corresponding to retrieving pricing information or resetting passwords. Now, the main target is on processes — multi-step, multi-person workflows like mortgages or insurance coverage claims. The subsequent stage, he suggests, is coordinating a number of processes into bigger, domain-level capabilities. That is what McKinsey is describing in Rewired Model 2 — and needs to be scary to firms like Appian.
Whereas Salesforce has lengthy been a frontrunner in low-code platforms, Kucera believes the corporate is able to transfer into these bigger course of domains. He describes himself as “a course of man” and sees this as a pure evolution. At this time, processes are sometimes framed as both customer- or employee-centric, however the actual alternative lies in orchestrating throughout them.
That raises the apparent query: will Salesforce increase into end-to-end processes like order-to-cash that minimize throughout conventional ERP boundaries dominated by SAP and Oracle? Kucera’s reply was direct — sure.
With their MCP [Model Context protocol] server strategy, he believes they’ll tackle these cross-system challenges and allow prospects to take motion throughout techniques, not simply inside them.
What about Anthropic and OpenAI?
When I asked whether Salesforce was worried about Anthropic and OpenAI, Kucera was confident.
"Anthropic is many things to us: a customer, a partner, a benefactor, a hosted solution, and yes, for certain things, a competitor," he said. He was unwilling, however, to predict much further out than a few years.
So how does Salesforce intend to win as apps increasingly become databases run by agents automating tasks on behalf of humans? Kucera stressed that one size doesn't fit all, and that openness to the broader ecosystem is essential.
On the question of market disintermediation, he pointed to Agentforce's early traction as a reason not to worry, at least for now. But he was candid, channeling something of a Doc Brown moment from Back to the Future: how software morphs and changes in the future is unwritten.
He said that vibe-coding on its own won't reshape enterprise software, and also stressed that it's still early days. From there, he leaned into Salesforce's current position. Agentforce is already moving beyond pilots; it is, he said, the fastest-growing product platform in Salesforce history.
"We're going to win by continuing to disrupt ourselves. To win, we have to innovate faster and make adoption possible for more of our customers." The edge, in his telling, lies in being enterprise-grade, enabling successful adoption at scale, and helping managers transition into agent managers, a capability he sees as foundational to building what he called truly agentic companies.
What does this mean for CIOs?
CIOs are right to be cautious. Salesforce, like many vendors, is in the middle of a major transition. To its credit, it is disrupting its own model and moving toward the openness the market increasingly expects. At the same time, trends like "vibe coding" are unlikely to replace the core code and architecture that underpin its franchise.
The open question is what this means for Salesforce's long-term position. As Rita McGrath has argued, companies at inflection points must "see around corners." Salesforce appears to be doing just that. But capital markets have reason to be cautious. As McGrath notes, digital technologies can fundamentally reshape how value chains are structured and managed.
That creates a paradox: Salesforce can execute well (embracing agents, openness, and new pricing models) and still find its role in the ecosystem shifting as the rules of the game change. It is simply too early to call. Expect this to remain an evolving story worth close attention in the months ahead.
Hackers are actively exploiting a critical vulnerability in the Breeze Cache plugin for WordPress that allows uploading arbitrary files to the server without authentication.
The security issue is tracked as CVE-2026-3844 and has been leveraged in more than 170 exploitation attempts recorded by the Wordfence security solution for the WordPress ecosystem.
The Breeze Cache WordPress caching plugin from Cloudways has more than 400,000 active installations and is designed to improve performance and loading speed by reducing page load frequency through caching, file optimization, and database cleanup.
The vulnerability received a critical severity score of 9.8 out of 10 and was discovered and reported by security researcher Hung Nguyen (bashu).
Researchers at WordPress security company Defiant, the developer of Wordfence, say that the problem stems from missing file-type validation in the 'fetch_gravatar_from_remote' function.
This allows an unauthenticated attacker to upload arbitrary files to the server, which can lead to remote code execution (RCE) and full site takeover.
However, successful exploitation is possible only if the "Host Files Locally – Gravatars" add-on is turned on, which is not the default state, the researchers say.
CVE-2026-3844 affects all Breeze Cache versions up to and including 2.4.4. Cloudways fixed the flaw in version 2.4.5, released earlier this week.
According to statistics from WordPress.org, the plugin has had roughly 138,000 downloads since the release of the latest version. It is unclear how many websites are vulnerable, though, because there is no data on the number that have the "Host Files Locally – Gravatars" option enabled.
Given the active exploitation status, site owners and admins who rely on Breeze Cache to boost performance are advised to upgrade to the latest version of the plugin as soon as possible or temporarily disable it.
If upgrading is currently not possible, admins should at least disable the "Host Files Locally – Gravatars" option.
The U.S. Food and Drug Administration (FDA) has approved the first-ever gene therapy for inherited deafness.
The therapy, called Otarmeni, is approved to treat a form of hearing loss caused by mutations in the OTOF gene, which codes for a protein called otoferlin. Cells in the inner ear need otoferlin to translate vibrations into signals that can be interpreted by the brain. When people carry two defective copies of the OTOF gene (one from each parent), this line of communication between the inner ear and brain is cut, resulting in severe-to-profound hearing loss.
Otarmeni is a one-time treatment that uses harmless viruses to deliver working copies of OTOF into the ear. In a trial including 20 participants, 16 showed improved hearing within six months, and one more person showed improvement within a year of treatment.
Some participants improved to the point that they could hear whispers, while all of the participants who responded to the therapy reached a level of hearing that does not typically require cochlear implantation, meaning the use of a device to bypass the inner ear and restore some hearing. Many people with this form of hearing loss get cochlear implants, but the implants do not perfectly replicate natural hearing and require maintenance over time.
Based on the trial results, Otarmeni was approved for both children and adults with OTOF-related hearing loss, the therapy's maker Regeneron announced Thursday (April 23). The company has said that the treatment itself will be free to patients in the U.S., not including out-of-pocket administration costs that may be dictated by a patient's doctor and insurance.
"The FDA approval of this gene therapy is a landmark moment for the field and, most importantly, for patients," said Zheng-Yi Chen, an associate scientist at the Eaton-Peabody Laboratories at Mass Eye and Ear and an associate professor at Harvard Medical School. Chen has been involved in a trial of a comparable gene therapy for OTOF-related deafness in China.
Chen said the data from the trial "convincingly demonstrate both safety and efficacy." The approval process was very fast, he added, taking less than three years from when the first patient was dosed in 2023. (Otarmeni was approved through a special FDA "fast track" process, in part because there were no existing treatments for OTOF-related hearing loss that address its underlying cause.)
The speed "underscores both the strong clinical outcomes and the urgent, unmet medical need for children with OTOF-mediated congenital hearing loss," Chen told Live Science. "We are incredibly encouraged by this milestone, which can serve as a catalyst to accelerate the development of future genetic therapies for hearing loss."
OTOF-related hearing loss affects about 50 newborns per year in the U.S. The new therapy is approved for those with two defective copies of the OTOF gene and no history of using a cochlear implant in the ear intended for treatment. Installing these implants damages the inner ear, so a gene therapy is unlikely to work; but those who have only one cochlear implant can get the gene therapy in the opposite ear.
Patients seeking the therapy must also have intact outer hair cells, which are special cells in the ear that act as amplifiers, increasing the motion of the eardrum in response to sound.
A global trial testing Otarmeni is still ongoing and is recruiting children under 18 in the U.S., United Kingdom, Spain, Germany and Japan.
"I've witnessed firsthand my trial participant responding to their mother's voice, dancing to music and interacting with the world, and these moments are now possible for more children born with this specific form of hearing loss," Dr. A. Eliot Shearer, an otolaryngologist at Boston Children's Hospital, associate professor at Harvard Medical School and trial investigator, said in the Regeneron statement.
This article is for informational purposes only and is not meant to offer medical advice.
that "groups are remarkably intelligent, and are often smarter than the smartest people in them." He was writing about decision-making, but the same principle applies to classification: get enough people to describe the same phenomenon and a taxonomy begins to emerge, even if no two people phrase it the same way. The challenge is extracting that signal from the noise.
I had several thousand rows of free-text data and needed to do exactly that. Each row was a short natural-language annotation explaining why an automated security finding was irrelevant, which functions to use for a fix, or what coding practices to follow. One person wrote "this is test code, not deployed anywhere." Another wrote "non-production environment, safe to ignore." A third wrote "only runs in CI/CD pipeline during integration tests." All three meant the same thing, but no two shared more than a word or two.
The taxonomy was in there. I just needed the right tool to extract it. Traditional clustering and keyword matching couldn't handle the paraphrase variation, so I tried something I hadn't seen discussed much: using a locally hosted LLM as a zero-shot classifier. This blog post explores how it performed, how it works, and some tips for using and deploying these methods yourself.
Why traditional clustering struggles with short free-text
Standard unsupervised clustering works by finding mathematical proximity in some feature space. For long documents, this is usually fine. Enough signal exists in word frequencies or embedding vectors to form coherent groups. But short, semantically dense text breaks these assumptions in several specific ways.
Embedding similarity conflates different meanings. "This key is only used in development" and "This API key is hardcoded for convenience" produce similar embeddings because the vocabulary overlaps. But one is about a non-production environment and the other is about an intentional security tradeoff. K-means or DBSCAN can't distinguish them because the vectors are too close.
Topic models surface words, not concepts. Latent Dirichlet Allocation (LDA) and its variants find word co-occurrence patterns. When your corpus consists of one-sentence annotations, the word co-occurrence signal is too sparse to form meaningful topics. You get clusters defined by "test" or "code" or "security" rather than coherent themes.
Regex and keyword matching can't handle paraphrase variation. You can write rules to catch "test code" and "non-production," but you'd miss "only used during CI," "never deployed," "development-only fixture," and dozens of other phrasings that all express the same underlying idea.
The common thread: these methods operate on surface features (tokens, vectors, patterns) rather than semantic meaning. For classification tasks where meaning matters more than vocabulary, you need something that understands language.
LLMs as zero-shot classifiers
The key insight is simple: instead of asking an algorithm to discover clusters, define your candidate categories based on domain knowledge and ask a language model to classify each entry.
This works because LLMs process semantic meaning, not just token patterns. "This key is only used in development" and "Non-production environment, safe to ignore" contain almost no overlapping words, but a language model understands they express the same idea. This isn't just intuition. Chae and Davidson (2025) compared 10 models across zero-shot, few-shot, and fine-tuned training regimes and found that large LLMs in zero-shot mode performed competitively with fine-tuned BERT on stance detection tasks. Wang et al. (2023) found LLMs outperformed state-of-the-art classification methods on three of four benchmark datasets using zero-shot prompting alone, no labeled training data required.
The setup has three components:
Candidate categories. A list of mutually exclusive categories defined from domain knowledge. In my case, I started with about 10 expected themes (test code, input validation, framework protections, non-production environments, and so on) and expanded to 20 candidates after reviewing a sample.
A classification prompt. Structured to return a category label and a brief reason. Low temperature (0.1) for consistency. Short max output (100 tokens) since we only need a label, not an essay.
A local LLM. I used Ollama to run models locally. No API costs, no data leaving my machine, and fast enough for thousands of classifications.
Here's the core of the classification prompt:
CLASSIFICATION_PROMPT = """
Classify this text into one of these themes:
{themes}

Text:
"{content}"

Reply with ONLY the theme number and name, and a brief reason.
Format: THEME_NUMBER. THEME_NAME | Reason

Classification:
"""
And the Ollama call:
import ollama

response = ollama.generate(
    model="gemma2",
    prompt=prompt,
    options={
        "temperature": 0.1,  # Low temp for consistent classification
        "num_predict": 100,  # Short response, we just need a label
    },
)
Two things to note. First, the temperature setting matters. At 0.7 or higher, the same input can produce different classifications across runs. At 0.1, the model is nearly deterministic, which keeps classification consistent. Second, limiting num_predict keeps the model from generating explanations you don't need, which speeds up throughput considerably.
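Because the prompt pins the reply to a fixed "THEME_NUMBER. THEME_NAME | Reason" format, the response can be parsed mechanically. A minimal parsing sketch (the helper name and regex are mine, not from the original pipeline):

```python
import re

def parse_classification(raw):
    """Parse 'THEME_NUMBER. THEME_NAME | Reason' into (number, name, reason).

    Returns None when the model's reply doesn't match the expected format,
    so callers can flag that entry for manual review instead of crashing.
    """
    match = re.match(r"\s*(\d+)\.\s*([^|]+?)\s*\|\s*(.+)", raw.strip())
    if match is None:
        return None
    number, name, reason = match.groups()
    return int(number), name.strip(), reason.strip()
```

Routing unparseable replies to a manual-review bucket is worth the extra branch: at temperature 0.1 format drift is rare, but it does happen.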
Building the pipeline
The full pipeline has three steps: preprocess, classify, analyze.
Preprocessing strips content that adds tokens without adding classification signal. URLs, boilerplate phrases ("For more information, see..."), and formatting artifacts all get removed. Common phrases get normalized ("false positive" becomes "FP," "production" becomes "prod") to reduce token variation. Deduplication by content hash removes exact repeats. This step reduced my token budget by roughly 30% and made classification more consistent.
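The preprocessing step can be sketched roughly as follows; the normalization map and function name are illustrative, not the author's actual code, and a real pipeline would carry a longer phrase list:

```python
import hashlib
import re

# Illustrative normalization map; the real pipeline's list was larger.
NORMALIZATIONS = {
    "false positive": "fp",
    "production": "prod",
}

def preprocess(entries):
    """Strip URLs, collapse whitespace, normalize common phrases,
    and drop exact duplicates via a content hash."""
    seen = set()
    cleaned = []
    for text in entries:
        text = re.sub(r"https?://\S+", "", text)          # remove URLs
        text = re.sub(r"\s+", " ", text).strip().lower()  # collapse whitespace
        for phrase, short in NORMALIZATIONS.items():
            text = text.replace(phrase, short)
        digest = hashlib.sha256(text.encode()).hexdigest()
        if text and digest not in seen:                   # dedup, drop empties
            seen.add(digest)
            cleaned.append(text)
    return cleaned
```

Hashing the normalized text (rather than the raw text) means near-duplicates that differ only in casing or spacing also collapse into one entry.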
Classification runs each entry through the LLM with the candidate categories. For ~7,000 entries, this took about 45 minutes on a MacBook Pro using Gemma 2 (9B parameters). I also tested Llama 3.2 (3B), which was faster but slightly less precise on edge cases where two categories were close. Gemma 2 handled ambiguous entries with noticeably better judgment.
One practical concern: long runs can fail partway through. The pipeline saves checkpoints every 100 classifications, so you can resume from where you left off.
Analysis aggregates the results and generates a distribution chart. Here's what the output looked like:
Distribution of Semgrep "Memories" as assigned by the LLM clustering exercise. Image used with permission.
The chart tells a clear story. Over a quarter of all entries described code that only runs in non-production environments. Another 21.9% described cases where a security framework already handles the risk. These two categories alone account for half the dataset, which is the kind of insight that's hard to extract from unstructured text any other way.
When this approach is not the right fit
This technique works best in a specific niche: medium-scale datasets (hundreds to tens of thousands of entries), semantically complex text, and situations where you have enough domain knowledge to define candidate categories but no labeled training data.
It's not the right tool when:
your categories are keyword-defined (just use regex),
when you have labeled training data (train a supervised classifier; it'll be faster and cheaper),
when you need sub-second latency at scale (use embeddings and a nearest-neighbor lookup),
or when you genuinely don't know what categories exist. In that case, run exploratory topic modeling first to develop intuition, then switch to LLM classification once you can define categories.
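For the low-latency case, the embeddings-plus-nearest-neighbor alternative boils down to a cosine-similarity lookup against precomputed category embeddings. A toy illustration (in practice the vectors would come from a sentence-embedding model; the two-dimensional vectors below are placeholders):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def nearest_label(query_vec, label_vecs, labels):
    """Assign the label whose (precomputed) embedding is closest to the query."""
    sims = [cosine(query_vec, v) for v in label_vecs]
    return labels[sims.index(max(sims))]

# Placeholder vectors standing in for real sentence embeddings.
labels = ["test code", "production risk"]
label_vecs = [[1.0, 0.0], [0.0, 1.0]]
```

This trades the LLM's semantic judgment for microsecond lookups: each query costs one embedding call plus a handful of dot products, which is why it scales where per-entry LLM calls don't.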
The other constraint is throughput. Even on a fast machine, classifying one entry per fraction of a second means 7,000 entries takes close to an hour. For datasets above 100,000 entries, you'll want an API-hosted model or a batching strategy.
Other applications worth trying
The pipeline generalizes to any problem where you have unstructured text and need structured categories.
Customer feedback. NPS responses, support tickets, and survey open-ends all suffer from the same problem: varied phrasing for a finite set of underlying themes. "Your app crashes every time I open settings" and "Settings page is broken on iOS" are the same category, but keyword matching won't catch that.
Bug report triage. Free-text bug descriptions can be auto-categorized by component, root cause, or severity. This is especially useful when the person filing the bug doesn't know which component is responsible.
Code intent classification. This is one I haven't tried yet but find compelling: classifying code snippets, Semgrep rules, or configuration rules by purpose (authentication, data access, error handling, logging). The same technique applies. Define the categories, write a classification prompt, run the corpus through a local model.
Getting started
The pipeline is simple: define your categories, write a classification prompt, and run your data through a local model.
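Wired together, that loop is only a few lines. A minimal end-to-end sketch (the template mirrors the prompt shown earlier, and `generate_fn` is an assumed wrapper around whatever local model call you use, such as the Ollama call above):

```python
# Prompt template mirroring the format used earlier in the post.
PROMPT = (
    "Classify this text into one of these themes:\n{themes}\n\n"
    'Text:\n"{content}"\n\n'
    "Reply with ONLY the theme number and name, and a brief reason.\n"
    "Format: THEME_NUMBER. THEME_NAME | Reason\n\nClassification:"
)

def classify_corpus(entries, themes, generate_fn):
    """Run each entry through the model. `generate_fn` wraps the local
    LLM call, which keeps this loop testable without a model running."""
    numbered = "\n".join(f"{i}. {t}" for i, t in enumerate(themes, 1))
    return [
        generate_fn(PROMPT.format(themes=numbered, content=text)).strip()
        for text in entries
    ]
```

Injecting `generate_fn` rather than calling the model directly also makes it trivial to swap Gemma 2 for Llama 3.2, or a hosted API, without touching the loop.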
The hardest part isn't the code. It's defining categories that are mutually exclusive and collectively exhaustive. My advice: start with a sample of 100 entries, classify them manually, notice which categories you keep reaching for, and use those as your candidate list. Then let the LLM scale the pattern.
I used this technique as part of a larger analysis of how security teams remediate vulnerabilities. The classification results helped surface which types of security context are most common across organizations, and the chart above is one of the outputs from that work. If you're interested in the security angle, the full report is available at that link.
Error tracking has evolved far beyond catching stack traces after something breaks. In modern software teams, the best error monitoring tools for developers help identify crashes in real time, group similar issues intelligently, surface rich debugging context, connect failures to code changes, and reduce the time between detection and resolution. That matters even more now that teams are shipping faster, deploying more often, and relying on AI-assisted workflows that can increase both delivery speed and operational complexity.
For many teams, error monitoring is no longer a narrow debugging utility. It's part of the production feedback loop. A useful platform should help developers answer practical questions quickly: Which errors are new? Which ones affect real users? Which release introduced the issue? Is the problem isolated to one environment, one device type, one service, or one workflow? And in an age of AI-assisted development, another question matters too: how do you connect runtime issues back to the code and systems responsible for them?
That's why this list includes both traditional error monitoring leaders and a few tools that sit slightly adjacent to the category but still matter for developer-led issue detection. Some are strongest in web and backend environments. Some are better known for mobile crash reporting. Some emphasize open-source flexibility. And some, like Hud, push the category toward runtime intelligence for modern production environments.
Why error monitoring tools matter more in modern development
Why developers need more than logs
Logs still matter, but logs alone rarely give developers the clarity they need when something breaks in production. Raw log streams can be noisy, fragmented, and hard to prioritize. Error monitoring tools improve on that by capturing exceptions, grouping repeated issues, attaching context like stack traces and environment metadata, and helping developers see which failures deserve immediate attention.
This becomes especially important in distributed systems and fast-moving product teams. A single regression may show up differently across services, browsers, operating systems, or mobile devices. Without a dedicated error monitoring layer, developers can waste hours stitching together clues that should have been visible in minutes.
Where error monitoring fits in the engineering workflow
The strongest teams use error monitoring at multiple points in the software lifecycle. It helps them validate new releases, watch for post-deployment regressions, prioritize bugs by impact, and reduce mean time to resolution. It also improves collaboration between engineering, SRE, QA, support, and product teams because everyone can work from a shared view of what's failing and how severe it is.
In AI-assisted development environments, error monitoring becomes even more important. When code is generated more quickly, deployed more frequently, or reviewed under tighter time constraints, developers need a sharper production feedback loop. That doesn't make testing less important. It makes runtime issue detection more important.
What a strong error monitoring platform should deliver
Developers evaluating error monitoring tools should look for more than basic crash capture. A strong platform usually offers:
real-time error and exception reporting
smart grouping and deduplication
useful stack traces and debugging context
release and deployment correlation
alerting that reduces noise instead of increasing it
support for multiple environments, frameworks, and languages
enough flexibility to fit web, backend, mobile, or hybrid applications
The best tool depends on your operating model. A mobile team may care most about crash-free sessions and device context. A backend team may prioritize performance and exception visibility. A platform team may care more about issue prioritization, trace correlation, and operational consistency across services.
Top error monitoring tools for developers
1. Hud
Hud takes a broader and more modern view of error monitoring than many traditional tools. Rather than focusing solely on exception capture, it positions itself as a Runtime Code Sensor that streams real-time, function-level runtime data from production into AI coding tools, with the goal of making AI-generated code production-safe by default. That makes it especially relevant for teams that want to understand not just that a problem occurred, but how live code behavior contributed to it.
For developers, Hud matters because production failures are often harder to explain than to detect. A spike in errors may be easy to see, but understanding which code path shifted, which function degraded, or why a release introduced unexpected runtime behavior is a deeper challenge. Hud is built around closing that gap by turning production behavior into a richer debugging signal.
That gives it a distinct position on this list. It's not a general issue inbox in the same mold as traditional exception trackers. Instead, it expands the category by helping developers connect runtime behavior, code execution, and production safety more directly. Hud is best for teams that see error monitoring as part of a wider runtime intelligence strategy. If your developers want more than alerting and need deeper visibility into how live code behaves, it is one of the more differentiated options available today.
Key points:
Function-level runtime visibility from production
Strong fit for debugging code behavior, not just capturing exceptions
Useful for teams that want richer production context in developer workflows
2. Sentry
Sentry is one of the most recognizable names in error monitoring, and for good reason. Its platform combines error monitoring with tracing, logs, replay, profiling, and related debugging workflows designed to help software teams see errors clearly and resolve issues faster. That makes it one of the safest choices for development teams that want a robust, developer-first platform with broad language and framework coverage.
Sentry's value comes from how effectively it turns raw failures into actionable issues. It captures exceptions in real time, groups recurring problems, and gives developers the context needed to investigate them without sifting through unstructured telemetry. For web and backend applications, that often translates into faster triage and more efficient debugging. For mobile teams, Sentry also provides crash and performance visibility across supported environments.
Another strength is familiarity. Many engineering teams already know how to work with Sentry, and the platform's issue-centric workflow is well suited to bug fixing, regression hunting, and post-release validation. It fits both smaller teams that need a fast start and larger teams that want structured issue visibility across services.
Key points:
Real-time error monitoring with strong developer workflows
Additional visibility through tracing, logs, and profiling
Broad ecosystem support across modern applications
Effective for both exception triage and ongoing stability work
3. Rollbar
Rollbar has long been a strong option for teams that want real-time error monitoring with clear issue grouping and useful release context. The company emphasizes that its platform alerts developers when something breaks, groups duplicate errors automatically, and surfaces the exact line of code involved. That focus on fast signal-to-resolution flow is exactly why it continues to matter.
For developers, Rollbar's core strength is prioritization. Error monitoring only becomes valuable when teams can separate noisy background failures from issues that genuinely affect product stability or user experience. Rollbar helps by grouping similar events and adding the context needed to understand how often an issue occurs, where it appears, and whether it correlates with a deployment.
This makes it especially useful for engineering teams managing frequent releases. In those environments, the key question is often not "Did an error happen?" but "Did this release introduce a meaningful regression, and how quickly can we confirm it?" Rollbar's deployment-aware workflows help make that question easier to answer.
Key points:
Real-time error alerts and automatic grouping
Clear line-of-code visibility for faster debugging
Strong support for release-based issue investigation
Well suited to teams shipping frequent application updates
4. BugSnag
BugSnag is designed around application stability and real-time error monitoring. Its official messaging emphasizes identifying, monitoring, and resolving app errors efficiently so teams can maintain reliability and improve user satisfaction. That makes it a natural inclusion in any serious list of error monitoring tools for developers.
One reason BugSnag stands out is its consistent strength across web, backend, and mobile use cases. Many teams use it not just to catch unhandled exceptions, but to monitor application stability more broadly. That matters because developers are rarely fixing isolated crashes in a vacuum. They're usually trying to understand patterns: which devices are affected, which versions regressed, which environments are unstable, and how the issue affects overall user experience.
BugSnag's appeal also comes from its clarity. Developers usually want an error tracker that helps them move quickly from "we have a production issue" to "this is the likely cause and scope." BugSnag's stability-oriented design supports that workflow well, especially for teams managing customer-facing software where reliability is a visible part of product quality.
Key points:
Real-time app error detection and monitoring
Strong focus on application stability and reliability
Useful across web, backend, and mobile environments
Good fit for teams that want stability insights alongside error reporting
5. Raygun
Raygun approaches error tracking from the perspective of helping teams detect, diagnose, and resolve the issues that affect end users. Its crash reporting and error monitoring positioning highlights detailed diagnostics and easier replication of errors, exceptions, bugs, and crashes. That user-impact orientation is one of its strongest selling points.
For developers, Raygun is useful because it pushes error tracking beyond technical capture and closer to application experience. A bug matters most when it affects real workflows, real customers, or core product flows. Tools that help developers understand that impact can improve prioritization significantly. Raygun supports that by pairing diagnostic detail with a broader view of application behavior.
It is also a good fit for teams that need cross-platform visibility. Web applications, mobile products, and distributed services all produce errors differently. Raygun’s design helps developers investigate these issues while keeping the end-user impact in view.
Key points:
Detailed diagnostics for errors, bugs, and crashes
Strong orientation toward real user impact
Helpful for teams that want better issue replication and analysis
Useful across modern web and mobile software environments
6. Honeybadger
Honeybadger combines error tracking and application monitoring in one streamlined interface, aiming to help developers respond quickly and fix issues in record time. That simplicity is a major part of its appeal. Not every team needs a sprawling observability stack to catch production issues. Many just need a dependable, straightforward platform that surfaces errors, sends useful alerts, and provides enough context to resolve bugs efficiently.
For developers, Honeybadger works well because it stays focused on practical issue management. It captures exceptions, helps teams understand what changed around a deployment, and supports related reliability workflows such as uptime and cron monitoring. That broader but still manageable scope makes it attractive to smaller engineering teams and product-focused development groups.
Another benefit is usability. Teams that value speed and clarity often prefer tools that are easy to reason about during a live incident. Honeybadger’s simpler footprint can be a strength in that context, especially compared to platforms that require heavier setup or broader operational buy-in.
Key points:
Error tracking and application monitoring in one interface
Real-time alerts and context-rich exception visibility
Helpful for uptime and cron-style reliability workflows
Strong fit for smaller teams or straightforward production environments
7. Firebase Crashlytics
Firebase Crashlytics is one of the strongest crash reporting tools for mobile developers. Google describes it as a lightweight, real-time crash reporter that helps teams track, prioritize, and fix stability issues affecting app quality. For Android, Apple platforms, Flutter, and Unity applications, it remains a highly practical choice.
Its biggest strength is mobile-specific usability. Mobile teams don’t just need to know that an error occurred. They need to understand device conditions, app versions, operating system patterns, and the stability trends that shape user experience over time. Crashlytics is built around that reality, which is why it continues to be widely adopted by app development teams.
For developers working within the Firebase ecosystem, the integration advantage is obvious. Crash reporting becomes part of a larger workflow that may already include analytics, authentication, messaging, and performance-related tooling. Even without that broader ecosystem value, Crashlytics remains compelling because it is purpose-built for the kind of stability monitoring mobile teams rely on.
Key points:
Real-time crash and stability reporting for mobile apps
Support for Android, Apple platforms, Flutter, and Unity
Lightweight integration and strong mobile developer fit
Excellent for prioritizing and fixing app stability issues
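The stability trends mobile teams watch usually reduce to a crash-free-sessions metric. This is a generic sketch, not Crashlytics code; the session dicts and the `crash_free_rate` helper are invented for illustration of how the number is derived.

```python
def crash_free_rate(sessions):
    """Percentage of sessions that ended without a crash.

    `sessions` is a list of dicts like {"version": "3.1", "crashed": False},
    a simplified stand-in for the per-session data a crash reporter
    aggregates into a stability dashboard.
    """
    if not sessions:
        return 100.0  # no sessions recorded yet: nothing has crashed
    clean = sum(1 for s in sessions if not s["crashed"])
    return round(100.0 * clean / len(sessions), 2)

sample = ([{"version": "3.1", "crashed": False}] * 98
          + [{"version": "3.1", "crashed": True}] * 2)
print(crash_free_rate(sample))  # → 98.0
```

In practice the same ratio is sliced by app version and device type, which is what makes version-specific regressions visible.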
8. AppSignal
AppSignal is a developer-friendly monitoring platform with a solid error tracking offering, especially attractive to teams working in Ruby, Elixir, Node.js, Python, and JavaScript environments. Its error tracking product emphasizes visibility into application errors and background job failures, while also linking error information with broader performance monitoring workflows.
That combination is useful because many production issues live at the intersection of code failure and application performance. A developer may need to know not only that an exception occurred, but whether it was associated with a background worker, a slow request, or a front-end failure pattern. AppSignal helps bridge these contexts without becoming as operationally broad as some enterprise observability suites.
Its usability also matters. Developers often choose AppSignal because it feels approachable and aligned with day-to-day engineering work. For teams that want error tracking as part of a coherent application monitoring workflow, rather than as a separate tool silo, it makes a lot of sense.
Key points:
Error tracking across backend and frontend environments
Strong support for background job and application error visibility
Useful connection between errors and broader performance context
Good fit for developer-led teams using common modern frameworks
9. GlitchTip
GlitchTip is the open-source option on this list, and that alone makes it important. Its documentation describes it as a platform that lets web apps send errors as issues, while also combining error tracking and uptime monitoring in one open-source package. For developers who want more control over their tooling or need self-hosted workflows, that is a decisive advantage.
Open-source error tracking matters for several reasons. Some teams want to manage costs more predictably. Others need stronger control over data handling, deployment models, or internal operational standards. GlitchTip gives these teams a more flexible path while still covering core error tracking needs like issue capture, notification, and visibility into production problems.
For developers, the main question is whether open source comes at the cost of practicality. In GlitchTip’s case, the appeal is that it aims to cover the essentials cleanly enough for real development teams, not just hobby deployments. It is especially interesting for startups, internal platforms, and engineering teams that want an alternative to more commercial issue trackers.
Key points:
Open-source error tracking for web applications
Combines error visibility and uptime monitoring
Useful for teams that want more control over hosting and data
Strong value option for cost-conscious or self-managed environments
10. Bugsee
Bugsee stands out because it adds richer session-level context to bug and crash reporting, especially for mobile teams. The company emphasizes that it lets developers see the video, network activity, and logs that led to bugs and crashes in live apps. That kind of context can be extremely helpful when developers are trying to reproduce hard-to-catch issues.
In many debugging workflows, a stack trace is not enough. Developers also need to know what the user did, what network calls were in flight, and what sequence of events led to the failure. Bugsee addresses that by capturing the path to the bug, not just the crash event itself. That makes it particularly valuable for UX-heavy mobile apps, edge-case failures, and bugs that are difficult to reproduce in local testing.
It is also helpful that Bugsee supports crash reporting with full stack trace symbolication and context-rich diagnostics in supported environments. For teams that need a more visual and reconstructive debugging workflow, that is a meaningful advantage over simpler crash trackers.
Key points:
Bug and crash reporting with video, logs, and network context
Helpful for reproducing difficult mobile issues
Stronger debugging context than stack traces alone
Good fit for mobile teams investigating user-path-dependent failures
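The “path to the bug” idea can be approximated with a breadcrumb ring buffer that is attached to the crash report. The `BreadcrumbLog` class and its method names are hypothetical, not Bugsee’s SDK; the sketch only shows why recent-event context makes failures easier to reproduce.

```python
from collections import deque
import traceback

class BreadcrumbLog:
    """Keep only the most recent user/network events and attach them to
    crash reports. Illustrative only; not any vendor's actual API."""

    def __init__(self, capacity=50):
        # Bounded deque: old events drop off automatically.
        self.events = deque(maxlen=capacity)

    def record(self, kind, message):
        self.events.append((kind, message))

    def crash_report(self, exc):
        # Pair the error summary with the path of events that led to it.
        return {
            "error": "".join(
                traceback.format_exception_only(type(exc), exc)).strip(),
            "breadcrumbs": list(self.events),
        }

log = BreadcrumbLog(capacity=3)
for i in range(5):
    log.record("tap", f"button-{i}")
try:
    raise ValueError("boom")
except ValueError as e:
    report = log.crash_report(e)
print(report["breadcrumbs"])
# → [('tap', 'button-2'), ('tap', 'button-3'), ('tap', 'button-4')]
```

A production SDK additionally records screenshots or video frames, network requests, and console logs into the same bounded window, which is what reconstructs the user’s path.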
Choosing the best error tracking tools for developers
What separates a useful tool from a noisy one
The best error tracking tool is not the one that captures the most events. It’s the one that helps developers fix the right problems faster. That means strong grouping, good context, relevant alerts, and a workflow that supports prioritization rather than overwhelming teams with noise.
A useful platform should make it easier to answer:
Which issues are new?
Which ones affect customers the most?
Which release introduced the regression?
What context do developers need to reproduce and resolve the problem?
If the tool cannot help answer these questions clearly, it may still collect errors, but it is not creating enough engineering value.
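The “strong grouping” requirement usually comes down to fingerprinting: hashing stable features of an exception so repeated occurrences collapse into one issue instead of thousands of duplicates. This is a generic sketch, not any vendor’s algorithm; the `fingerprint` helper and its choice of keys (exception type, file, function) are assumptions.

```python
import hashlib
import traceback

def fingerprint(exc):
    """Derive a stable grouping key from an exception.

    Hashes the exception type plus the location of the raising frame,
    a simplified version of the grouping most error trackers perform.
    """
    tb = traceback.extract_tb(exc.__traceback__)
    frame = tb[-1] if tb else None  # the frame where the exception was raised
    key = (f"{type(exc).__name__}:"
           f"{frame.filename if frame else '?'}:"
           f"{frame.name if frame else '?'}")
    return hashlib.sha1(key.encode()).hexdigest()[:12]

def fail():
    raise KeyError("user_id")

groups = set()
for _ in range(3):  # the same bug raised repeatedly...
    try:
        fail()
    except KeyError as e:
        groups.add(fingerprint(e))
print(len(groups))  # → 1: three events collapse into one issue
```

Real implementations refine the key with in-app stack frames and message normalization, but the principle is the same: one issue per root cause, not per occurrence.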
How to evaluate error tracking tools for your team
A practical evaluation should focus on operating reality, not just feature lists.
Look at:
stack fit – web, backend, mobile, or cross-platform
developer workflow – issue grouping, triage speed, and debugging context
deployment model – managed SaaS versus self-hosted or open-source
release visibility – whether the tool helps connect issues to deployments
alert quality – whether it reduces or increases fatigue
pricing and scale – whether the product remains viable as usage grows
Teams should also think about maturity. A smaller team may benefit most from a clean, simple tool with fast setup. A larger engineering org may want richer correlation, broader platform support, and more structured workflows. Mobile teams may prioritize stability reports and device context. AI-assisted teams may increasingly care about runtime intelligence and code-level production visibility.
FAQs:
What is an error tracking tool for developers?
An error tracking tool helps developers capture, organize, and investigate software failures in real time. Instead of relying solely on raw logs, these platforms group similar issues, attach stack traces, show environment details, and often link problems to releases or affected users. That makes debugging faster and more practical. For modern teams, error tracking is not just about crash collection, but about turning production failures into clear, actionable engineering work.
Why do developers still need error tracking if they already use logs and monitoring?
Logs and monitoring are helpful, but they don’t always make debugging efficient. Logs can be noisy, and monitoring often shows symptoms without enough issue-level detail. Error tracking tools bridge that gap by isolating exceptions, grouping duplicates, and surfacing context developers can act on immediately. They help teams move from “something is wrong” to “this specific bug needs attention,” which is why they remain essential even in mature observability environments.
What features should developers prioritize when evaluating error tracking tools?
The most important features usually include real-time reporting, good grouping, stack traces, release tracking, alerting, and enough context to reproduce issues. Teams should also look at framework support, mobile or backend compatibility, and whether the tool fits their workflow. Some developers need session replay or device data, while others need performance context or open-source deployment options. The right choice depends on where failures usually happen and how the team investigates them.
Are error tracking tools only useful for large engineering teams?
No. Smaller teams often benefit even more because they have less time to investigate production issues manually. A good error tracking tool helps lean teams catch regressions quickly, prioritize high-impact bugs, and avoid spending hours searching through logs. Larger organizations use these tools for scale and consistency, but smaller teams use them for speed and focus. In both cases, the goal is the same: faster resolution and fewer unresolved production issues.
Which is the best error tracking tool for developers?
Hud is the best error-tracking tool on this list for developers because it goes beyond traditional exception monitoring, bringing function-level runtime visibility into the debugging workflow. While many tools help teams see that something failed, Hud is built to help developers understand how production code behaves, which makes issue detection and root-cause analysis more effective. For modern teams, especially those shipping AI-assisted code, that deeper runtime intelligence makes Hud the strongest overall choice.
Which teams benefit most from mobile-focused error tracking tools?
Mobile development teams benefit the most because app crashes are often tied to device type, operating system version, app release, network state, and user session behavior. Generic backend tools may not capture enough of that context. Mobile-focused platforms help teams understand crash trends, stability rates, and environment-specific failures more clearly. They are especially valuable for product teams where app quality, crash-free sessions, and user retention are directly tied to technical performance.
How often should developers review error tracking dashboards and alerts?
Developers should treat error tracking as an active workflow, not a passive archive. The most critical alerts need immediate attention, but teams also benefit from regular reviews after deployments, during sprint planning, and as part of ongoing stability work. A weekly review of unresolved issues is often helpful, while higher-velocity teams may check dashboards daily. The best rhythm depends on release frequency, product sensitivity, and how quickly production regressions typically affect users.
Can error tracking tools help teams using AI-assisted development?
Yes, and they are becoming more important in that environment. AI-assisted development can increase release velocity and reduce the time engineers spend inspecting every line of code manually. That makes production feedback more valuable. Error tracking tools help teams catch regressions, understand runtime failures, and connect issues back to code changes more quickly. For teams shipping AI-assisted software, they are a practical safeguard that helps speed and reliability improve together.
Macworld reports that dummy units for the iPhone 18 Pro, Pro Max, and new iPhone Ultra have surfaced, revealing design details for Apple’s late-2026 lineup.
The iPhone 18 Pro will be slightly larger than its predecessor, while the Ultra model is expected to be 11mm thick when closed and to lack MagSafe support.
All new models are expected to feature titanium construction and larger camera lenses, though these early details from leaker sources remain unconfirmed.
One of the traditional stages of an iPhone rumor cycle is the arrival of dummy units. These non-functional early mockups are made for design illustration and size comparison purposes, usually by manufacturing partners and accessory makers rather than Apple itself. They can’t be used to run apps or anything of that sort. But they can still tell us a lot about the design of an upcoming product.
This week, for example, the tech YouTuber Vadim Yuryev posted photos of three new dummy units. So far, so relatively standard. He’s got metal dummies of all three late-2026 iPhones: the 18 Pro, 18 Pro Max, and iPhone Fold (or iPhone Ultra, as I’ll call it for the rest of this article). They look largely the way earlier rumors have suggested they will, and have the expected design and external features: two rear-facing camera lenses on the Ultra, three on the Pro models, and no MagSafe on the Ultra. But Yuryev decided not to leave it there. He invited questions.
The ensuing AMA (or Ask Me Anything, from the Reddit ritual) is required reading for anyone interested in this year’s new phones. Here are some of the questions, and Yuryev’s answers.
Q: Will the new phones have larger camera lenses than the 17 Pro? A: Yes.
Q: What’s the thickness of the Ultra when closed? A: Exactly 11mm.
Q: So the Fold [Ultra] won’t be a unibody? More like the design of the iPhone Air with polished titanium, I assume? A: Yes. What you said.
Q: Do we know [from the dummies] how much they’ll weigh? A: No. These are much heavier.
Q: Is the 18 Pro the same size as the 17 Pro? Do old cases fit? A: 0.36mm taller. 0.39mm wider. Same thickness. Loose-fitting or rubber cases might still fit, who knows?
Q: It will be in titanium, right? A: Yes.
Q: If the Ultra really isn’t going to have MagSafe, that’s going to be the biggest fail. A: Yeah, I don’t think they have room. Will have to rely on MagSafe cases.
Q: Any base [iPhone 18] dummy? A: It’s identical to the iPhone 17 as far as I know. Probably just a smaller Dynamic Island and buttons moved around a bit to make sure you have to buy a new case.
It isn’t clear how Yuryev is so certain about his answers, which you’ll notice include some facts that cannot be deduced from the dummies alone: the design of the baseline iPhone 18, for example, or the materials used for the new phones. He doesn’t name a source for the dummies themselves, or for the other information. So it’s probably best to regard these claims as unproven for the moment.
In any case, these aren’t the first dummies we’ve seen for the late-2026 iPhone launches. As early as December, in fact, we got one for the iPhone Ultra, though we should emphasize that it was created by a 3D printing hobbyist based on leaked CAD files rather than by a company. So maybe that one doesn’t count.
Then in April, the prolific leaker Sonny Dickson posted photos of dummies of the iPhone Ultra, 18 Pro, and 18 Pro Max, insisting these illustrated the final sizes of those three products. But this latest leak is the first to engage with commenters’ questions in such depth. It therefore gives us our best insight yet into the design and features of the late-2026 iPhones, which we currently expect to launch in September.
For all the latest information and rumors leading up to the launch, bookmark our regularly updated guide hubs: iPhone 18 and iPhone Ultra. If you can’t wait that long, pick up a bargain on the current range with our roundup of the best iPhone deals.