Tuesday, January 13, 2026

What CISOs want from AI in a brand new 12 months of cyberthreats


As we enter one other 12 months outlined by the adoption of AI, CISOs face extra cyberthreats and elevated demand to defend their organizations. InformationWeek spoke with 5 CISOs to get a way of what they anticipate from the know-how in 2026: how will probably be used within the palms of risk actors, its capabilities as a defensive instrument, and what safety leaders need and want from the know-how because it turns into more and more ingrained within the cloth of their tech stacks.

The risk panorama

In 2025, risk actors used AI to hone their campaigns and develop the scale of assaults. Phishing assaults acquired more durable to identify; AI simply removes the outdated inform of poor grammar. And AI makes it simpler to solid hyperpersonalized lures for extra victims.

“Proper now, we’re seeing about 90% of social engineering phishing kits have AI deepfake know-how obtainable in them,” stated Roger Grimes, CISO advisor at safety consciousness coaching firm KnowBe4.

To this point, AI has sharpened outdated techniques. That pattern is simply going to ramp up as time goes on. Wendi Whitmore, chief safety intelligence officer at cybersecurity firm Palo Alto Networks, described a lot of the assaults fueled by AI as “evolutionary and never revolutionary,” however that would change as risk actors transfer by means of their very own studying curves. 

Associated:Outsmart danger: A 5-point plan to outlive an information breach

The cyberattack executed by suspected Chinese language state actors who manipulated Anthropic presaged the way forward for cyberattacks: large-scale and largely autonomous.

“The way forward for cybercrime is dangerous guys’ AI bots towards good guys’ AI bots, and the most effective algorithms will win,” Grimes stated. “That is the way forward for all cybersecurity from right here on out.”

As that future approaches, hackers will search for methods to make use of AI to execute assaults and seek for vulnerabilities within the AI methods and instruments that enterprises use.

“The factor that’s most regarding to a CISO is that the LLMs are going to be the honeypots. That is going to be the place that any hacker’s going to need to assault as a result of that is the place all the information’s at,” stated Jill Knesek, CISO at BlackLine, a cloud-based monetary operations administration platform.

Grimes additionally anticipated a rise in hacks of the mannequin context protocol  (MCP), Anthropic’s open supply customary that permits AI methods to speak with exterior methods. Risk actors can leverage strategies resembling immediate injection to take advantage of MCP servers.

As extra widespread assaults perpetrated by AI happen, CISOs will grapple with questions round identification, in response to Whitmore. “I do not assume that the business collectively has a great understanding but of who’s accountable when it is really an artificial identification that has created an enormous, widespread assault,” she stated. That duty may relaxation on the enterprise unit that deployed it, the CISO who accepted using the instruments or the precise group that is leveraging it.

Associated:It is time to revamp IT safety to take care of AI

AI as a cyberdefense instrument

As risk actors beef up their AI capabilities, so should defenders. In 2025, CISOs came upon simply what AI can do for his or her cybersecurity methods. AI’s capacity to sift by means of mountains of information and discern patterns proved to be one in every of its foremost boons for cybersecurity groups.

“Inside for my group, that has been a sport changer as a result of we’re seeing that now my risk analysts can take 10 minutes to analysis one thing as an alternative of an hour going to separate instruments,” stated Don Pecha, CISO at FNTS, a managed cloud and mainframe providers firm for regulated industries. 

AI can discover the proverbial needle within the haystack of threats, each actual and false positives, and allow analysts to make sooner choices. It could possibly automate a lot of the digging and assessment that beforehand represented loads of tedious, handbook work for analysts.

Whereas cybersecurity groups seize these advantages, AI as a cyberdefense instrument has loads of room to develop. “We’re not seeing … actually purpose-built AI safety for probably the most half. What you are seeing is legacy safety with some AI functionality and performance,” Knesek stated.

Associated:Anthropic thwarts cyberattack on its Claude Code: This is why it issues to CIOs

As 2026 begins, extra AI safety options will emerge, significantly within the realm of agentic AI. Grimes stated he expects that patching bots might be one kind of AI agent granted extra autonomy inside organizations.

“You are not going to have the ability to battle AI that is attempting to compromise you with conventional software program. You are going to want a patching bot,” he stated.

The phrase “human within the loop” is held up by AI adopters because the gold customary of accountable use, however as agentic AI takes off, CISOs and their groups must grapple with questions on how a lot autonomy these brokers are granted. What occurs when there’s much less and fewer human involvement?

“I believe some individuals are going to say, ‘Oh, that is nice. I’ll imagine every little thing the seller stated. I’ll give it full autonomy,'” Grimes stated. That would result in operational interruption.

Moreover, as AI brokers turn into extra autonomous, they are going to be frequent targets of malicious actors. “So as to defend the human, you are going to have to guard the AI brokers that the human is utilizing,” Grimes stated.

The CISO’s AI want record 

For all of the fevered predictions round AI, uncertainties stay for the long run. CISOs should sustain with quickly altering know-how. As they press forward, what do they want and wish from the know-how?

  • Operational efficiencies. It is time for AI to ship. CISOs and CIOs need AI-driven instruments which have a measurable influence. “As we transfer ahead, they’ll anticipate to see increasingly more purpose-built capabilities [where] actually you may simply measure operational efficiencies,” Knesek stated.That might be true of AI throughout enterprise features.

  • Sooner safety opinions. Many enterprises could take a look at rolling out a number of new applied sciences in 2026, a frightening prospect from a safety perspective. A profitable answer for automating important safety opinions has but to emerge. “At a tactical stage, that is one thing that CISOs and CIOs actually need to determine,” Whitmore stated. “That is every little thing from the method piece of it to the precise know-how that is going to assist them speed up that.”

  • Belief. Clients will ask their distributors harder questions as a way to keep belief. Corporations are reluctant to drag the veil again on their AI fashions, lest they offer up aggressive benefit, however that usually leaves prospects with little greater than assurances fairly than concrete solutions.

    “I perceive there’s loads of IP concerned, the way in which that these fashions are skilled … nevertheless it’s very tough to onboard these and have a really fulsome understanding and assure that what has been introduced in our conversations, even privateness insurance policies, et cetera, is actually taking place behind the scenes,” stated Chris Henderson, CISO at cybersecurity firm Huntress.

  • Higher governance. AI governance goes to be prime of thoughts for CISOs within the new 12 months. “That is actually the place we’re struggling at the moment as a result of there’s solely actually two or three merchandise on the market that actually present governance in an enterprise to your AI,” Pecha stated.

    There are many frameworks obtainable for accountable AI use, however merely checking off gadgets on an inventory is not sufficient, he added.

    “You may’t simply go depend on a NIST framework that stated an AI needs to be doing these items. That guidelines is critical, however then how do you place in an operational instrument that validates what information the AI was skilled [on]?” Pecha requested. 

    CISOs will want methods to point out that AI instruments their groups constructed internally and instruments they bring about in from third events are safe and used responsibly, whether or not by means of ongoing audits or some form of exterior certification.

  • Extra risk modeling. Grimes stated he needs to see extra risk modeling of AI, significantly as using agentic AI ramps up. The place are the vulnerabilities? What has been finished to mitigate them? “The distributors that risk mannequin are going to have safer instruments and extra reliable instruments,” Grimes contended.

  • Extra nuance. AI methods excel at gathering info and simplifying choices for people, however they’ve but to achieve a spot the place they will stand in for human decision-making.

    “On the finish of the day, it is nonetheless not as correct as a human at figuring out ought to someone be woken up at 2 within the morning due to an occasion that has triggered?” Henderson stated. He added that he want to see AI instruments transfer past giving binary responses to providing extra nuanced solutions that embody how sure they’re of a call or suggestion.

  • Midmarket options. Pecha stated he hopes AI makes safety extra accessible for small- to medium-sized companies. These companies do not have the safety budgets that enormous enterprises do, however they continue to be a part of the availability chain. “The largest danger we’ve got at the moment is small, medium companies usually are not served by the safety neighborhood nicely. They do not have sources. They do not have data, however AI could possibly be the stopgap for that,” he stated.

Whereas the talk about AI and the way forward for jobs rages, there appears to be some expectation amongst CISOs that AI might be a instrument to enhance the capabilities of human cybersecurity groups fairly than a know-how to switch them fully.

“I believe they want extensions of their groups fairly than replacements of their groups,” stated Henderson. “When you take a look at AI as one thing that may allow your group to proceed to scale with out including further our bodies versus changing our bodies, it may be the trail to success.”



Related Articles

Latest Articles