Not all AI initiatives will be winners.
So CIOs should apply "fail fast" principles to their AI initiatives, deciding as quickly as possible when a promising idea simply isn't going to pan out.
That is easier said than done. The MIT report "State of AI in Business 2025" found that 95% of 153 senior leaders surveyed "are getting zero return."
To understand how CIOs decide when to stop an AI project, we asked two IT leaders: What's the specific red flag that tells you an AI pilot has become a sunk cost and should be killed? Each identified clear, telltale signs that a project is off track.
- Soo-Jin Behrstock, chief information technology officer at Great Day Improvements, a direct-to-consumer home remodeling company, said missed milestones are a warning sign to pivot, and that careful upfront planning has made killing an AI project almost unheard of for her.
- Ed Clark, CIO of California State University, which serves nearly 500,000 students, said stalled progress and weak adoption are clear signs that a project is foundering, and that it's important to watch for those signs so leaders can redeploy resources to more promising efforts.
Below are Behrstock and Clark's responses to our question, edited for clarity and length.
Soo-Jin Behrstock, chief information technology officer, Great Day Improvements
Behrstock: 'Start with: What does success look like?'
"When we take on AI initiatives, I always start with: What does success look like, and how are we going to measure it?
"For example, if we're using AI for sales or marketing predictions, we start with a small sample of data that we know really well. Based on that, we have a good sense of what the output should look like. If the output is not directionally right, that usually tells us something is off: it could be the data, the process or the model.
"From there, we set short milestones, usually every couple of weeks, to see if we're getting closer to the outcome we defined, with measurable results.
"If we're not [getting closer to the outcome], then we pivot or defer. I don't believe in pushing AI forward just for the sake of saying we're doing AI. If success is not clearly defined or we cannot measure progress against it, that is a red flag.
"I don't know about killing [an AI initiative] unless you determine it isn't aligned with the business."
‘Pivot to get to success’
"One thing I do find is that sometimes developers get into analysis paralysis over how the AI should work. That extends the timeline and the budget. But when you have incremental milestones that aren't being met, you need to ask: What needs to change to get to success?
"Let me give an example: Right now, we're working on AI predictive modeling. We're taking a small sample of data that we're really familiar with, and we're measuring the output, so we can say, 'Here's what good really looks like. This works.' Then we're adding more data into it, so we can measure whether our model is working correctly or whether we need to pivot.
"In such cases [where we need to pivot], it could be that we don't have the right resources or skills, so we may need to partner with consulting companies to help us."
The value of being 'very intentional'
"I haven't had to be in a position to say, 'Let's kill it.' But I could see doing that if something we thought would make sense for the business, we later determine doesn't. But I haven't been in that situation yet because everything's been very intentional. I'm really careful to set expectations upfront and define success. And so if we're not hitting milestones, it's usually [because of issues] around data and process, so we determine where we need to adjust and we just pivot."
Ed Clark, CIO, California State University System
Clark: A list of red flags
"In my mind, that red flag is when the pilot no longer has a clear path to create strategic value for your organization.
"Another red flag is when the team gets stuck in a loop, when they come back with the same status updates and you're seeing no progress, when you see the same slides, the same hurdles, when you hear, 'We're almost there' and nothing is happening, and there are no deliverables. Then this thing is stuck.
"Another thing to look for is weak adoption, when you've rolled out something that everyone said, 'Oh, that is going to be so cool,' but then no one uses it.
"Also, if the executive sponsorship disappears, that is another thing I look for.
"And another signal that is really important, and this happens all the time, is when vendors are building a core capability into their platform [that's similar to the AI project you're developing]. We're not in the business of competing with those vendors.
"And then the last thing, and this happens especially in artificial intelligence, is when the original use case that you're all excited about is just kind of obsolete because the technology moves so fast.
"Any of those could be red flags."
Finding the reasons behind the red flags
"You have to ask why some initiatives end up with red flags.
"It could be that what's being asked for is too far outside the range of what your team is able to accomplish. Then you have to figure out whether [the AI project] is an idea good enough to pursue, where I want to chase it down and maybe bring in outside resources to get it done; or whether it's a pilot where it's OK for your team to just observe and learn; or whether the executive who was excited about it but won't meet with us about it really doesn't care about it anymore, so you shouldn't be pursuing it.
"All the money and effort you're spending could be going toward something else that could achieve the objectives of the organization."
The pilot that didn't take hold
"I can tell you a specific example: One of the things we're constantly working on is affordability. And we thought we could make open textbooks — those free textbooks — more accessible to students by creating an AI overlay that [functions as] a tutor.
"So we tried to pilot this thing, but there was no adoption. It was frustrating because we saw a way to make open textbooks more useful for our students by adding this support system.
"It turns out that faculty generally don't like open textbooks, because they don't come with the teaching resources they want. And so even though it was a wonderful idea that would help serve our mission and advance our strategic goals, and that executives initially thought would be great, we had to kill the idea."
What the team learned from killing the project
"It did hurt to make that call, because I think the amount our students [collectively] spend on textbooks annually is in the hundreds of millions of dollars. But we learned a lot working on that project. Like, if we're really going to do this, we need to make sure it's multilingual and that it can handle mathematical symbols. We learned things that are going to be useful for our community and can be applied elsewhere."
