Thursday, February 12, 2026

Building an AI Agent to Detect and Handle Anomalies in Time-Series Data


As a data scientist working on time-series forecasting, I have run into anomalies and outliers more times than I can count. Across demand forecasting, finance, traffic, and sales data, I keep encountering spikes and dips that are hard to interpret.

Anomaly handling is usually a grey area: cases are rarely black or white, but they often point to deeper issues. Some anomalies are real signals, driven by holidays, weather events, promotions, or viral moments; others are just data glitches, yet both look identical at first glance. The earlier we detect anomalies in the data, the sooner action can be taken to prevent poor performance and damage.

When we are dealing with critical time-series data, detecting anomalies is essential. If you remove a genuine event, a valuable signal data point is lost; if you keep a false alarm, the training data contains noise.

Most ML-based detectors flag spikes based on Z-scores, IQR thresholds, or other static methods without any context. With recent advancements in AI, we have a better option: design an anomaly-handling agent that reasons about each case. An agent that detects unusual behavior, checks context, and decides whether to fix the data, keep it as a real signal, or flag it for review.

In this article, we build such an agent step by step, combining simple statistical detection with an AI agent that acts as a first line of defense for time-series data, reducing manual intervention while preserving the signals that matter most. We will detect and handle anomalies in COVID-19 data through autonomous decision-making based on the severity of each anomaly, using:

  1. Live epidemiological data from the disease.sh API.
  2. Statistical anomaly detection.
  3. Severity classification.
  4. A GroqCloud-powered AI agent that autonomously decides whether to:
    • Fix the anomaly
    • Keep the anomaly
    • Flag the anomaly for human review

This is agentic decision intelligence, not merely anomaly detection.

Figure 1: AI Agent Implementation for Anomaly Detection
Image by author.

Why is traditional anomaly detection alone not enough?

There are traditional ML methods such as isolation forests designed for anomaly detection, but they lack end-to-end decision orchestration: they cannot act on anomalies quickly enough in production environments. We implement an AI agent to fill this gap by turning raw anomaly scores into autonomous, end-to-end decisions on live data.

Traditional Anomaly Detection

Traditional anomaly detection follows the pipeline approach drawn below:

Image by author

Limitations of Traditional Anomaly Detection

  • Works on static rules and manually set thresholds.
  • It is single-dimensional and handles simple data.
  • No contextual reasoning.
  • Human-driven decision making.
  • Manually driven action.

Anomaly Detection and Handling with an AI Agent

AI-agent-based anomaly detection follows the pipeline approach drawn below:

Image by author

Why does this work better in practice?

  • Works on real-time data.
  • It is multidimensional and can handle complex data.
  • Uses contextual reasoning.
  • Adaptive and self-learning decision making.
  • Takes autonomous action.

Choosing a realistic dataset for our example

We use real-world COVID-19 data to detect anomalies, as it is noisy, shows spikes, and the results can help improve public health.

What do we want the AI Agent to decide?

The goal is to continuously monitor COVID-19 data, find anomalies, determine their severity, and autonomously decide which action to take:

  • Flag the anomaly for human review
  • Fix the anomaly
  • Keep the anomaly

Data Source

For the data, we use the free, live disease.sh API. This API provides daily confirmed cases, deaths, and recoveries. For the AI Agent implementation, we focus on daily case counts, which are ideal for anomaly detection.

Data license: This tutorial uses COVID-19 historical case counts retrieved via the disease.sh API. The underlying dataset (JHU CSSE COVID-19 Data Repository) is licensed under CC BY 4.0, which permits commercial use with attribution. (Accessed on January 22, 2026)

How do the pieces fit together?

The high-level system architecture of anomaly detection on COVID-19 data using an AI Agent is as follows:

Figure 2: The AI agent sits between anomaly detection and downstream action, deciding whether to fix, keep, or escalate anomalies
Image by author

Building the AI Agent Step by Step

Let's go step by step to understand how to load data from disease.sh, detect anomalies, classify them, and implement an AI agent that reasons and takes appropriate action according to the severity of each anomaly.

Step 1: Install Required Libraries

The first step is to install the required libraries: phidata, groq, python-dotenv, tabulate, and streamlit.

pip install phidata
pip install groq
pip install python-dotenv  # library to load the .env file
pip install tabulate
pip install streamlit

Step 2: Environment File Setup

Open your IDE, create a project folder, and under that folder create an environment file ".env" to store GROQ_API_KEY.

GROQ_API_KEY="your_groq_api_key_here"
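
The key can then be loaded at runtime with python-dotenv. Below is a minimal sketch (the variable name matches the .env entry above; the error message is only illustrative):

# Load GROQ_API_KEY from the .env file so the Groq client can pick it up.
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the project folder
groq_api_key = os.getenv("GROQ_API_KEY")
if not groq_api_key:
    raise RuntimeError("GROQ_API_KEY is missing; check your .env file")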

Step 3: Data Ingestion

Before building any agent, we need a data source that is noisy enough to surface real anomalies, but structured enough to reason about. COVID-19 daily case counts are a good fit, as they contain reporting delays, sudden spikes, and regime changes. For simplicity, we deliberately restrict ourselves to a single univariate time series.

Load data from disease.sh using the request URL and extract the date and daily case count for the chosen country and the number of days you want to retrieve. The data is converted into a structured dataframe by parsing the JSON, formatting the date, and sorting chronologically.

# ---------------------------------------
# DATA INGESTION (disease.sh)
# ---------------------------------------
import requests
import pandas as pd


def load_live_covid_data(country: str, days: int):
    url = f"https://disease.sh/v3/covid-19/historical/{country}?lastdays={days}"
    response = requests.get(url)
    data = response.json()["timeline"]["cases"]

    df = (
        pd.DataFrame(list(data.items()), columns=["Date", "Cases"])
        .assign(Date=lambda d: pd.to_datetime(d["Date"], format="%m/%d/%y"))
        .sort_values("Date")
        .reset_index(drop=True)
    )
    return df
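
A quick sanity check of the loader might look like this (the country and window are placeholders; adjust them as needed):

# Fetch the last 90 days for India and inspect the resulting frame.
df = load_live_covid_data("India", 90)
print(df.shape)
print(df.head())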

Step 4: Anomaly Detection

We will now detect abnormal behavior in the COVID-19 time series by catching sudden spikes and rapid growth trends. Case counts are usually stable, and large deviations or sharp increases indicate meaningful anomalies. We use statistical methods and binary labeling for deterministic, reproducible anomaly detection. Two signals are computed to detect anomalies.

  1. Spike Detection
    • A sudden spike is detected using the Z-score; if a data point falls outside the Z-score range (more than 3 standard deviations from the mean), it is marked as an anomaly.
  2. Growth Rate Detection
    • The day-over-day growth rate is calculated; if it exceeds 40%, the point is flagged.
# ---------------------------------------
# ANOMALY DETECTION
# ---------------------------------------
import numpy as np


def detect_anomalies(df):
    values = df["Cases"].values
    mean, std = values.mean(), values.std()

    # Spike detection: points more than 3 standard deviations from the mean
    spike_idx = [
        i for i, v in enumerate(values)
        if abs(v - mean) > 3 * std
    ]

    # Growth-rate detection: day-over-day increase above 40%
    growth = np.diff(values) / np.maximum(values[:-1], 1)
    growth_idx = [i + 1 for i, g in enumerate(growth) if g > 0.4]

    anomalies = set(spike_idx + growth_idx)
    df["Anomaly"] = ["YES" if i in anomalies else "NO" for i in range(len(df))]

    return df

If a point is anomalous according to either the spike check, the growth check, or both, "Anomaly" is set to "YES"; otherwise it is set to "NO".
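
As a quick check, you can run the detector on the ingested frame and count how many points were flagged (illustrative only):

# Label every row and count flagged anomalies.
df = detect_anomalies(df)
print(df["Anomaly"].value_counts())
print(df[df["Anomaly"] == "YES"].head())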

Step 5: Severity Classification

Not all anomalies are equal; we classify them as 'CRITICAL', 'WARNING', or 'MINOR' to guide the AI Agent's decisions. Fixed rolling windows and rule-based thresholds are used to classify severity. Severity is assigned only when an anomaly exists; otherwise, the Severity, Agent Decision, and Action columns in the dataframe are left blank.

# ---------------------------------------
# CONFIG
# ---------------------------------------
ROLLING_WINDOW = 7
MIN_ABS_INCREASE = 500

# ---------------------------------------
# SEVERITY CLASSIFICATION
# ---------------------------------------
def compute_severity(df):
    df = df.sort_values("Date").reset_index(drop=True)
    df["Severity"] = ""
    df["Agent Decision"] = ""
    df["Action"] = ""
    for i in range(len(df)):
        if df.loc[i, "Anomaly"] == "YES":
            if i < ROLLING_WINDOW:
                df.loc[i, "Severity"] = ""
                continue  # not enough history to build a baseline

            curr = df.loc[i, "Cases"]
            baseline = df.loc[i - ROLLING_WINDOW:i - 1, "Cases"].mean()

            abs_inc = curr - baseline
            growth = abs_inc / max(baseline, 1)

            if abs_inc < MIN_ABS_INCREASE:
                df.loc[i, "Severity"] = ""  # negligible absolute change
            elif growth >= 1.0:
                df.loc[i, "Severity"] = "CRITICAL"
            elif growth >= 0.4:
                df.loc[i, "Severity"] = "WARNING"
            else:
                df.loc[i, "Severity"] = "MINOR"
    return df

In the code above, to classify anomaly severity, each anomaly is compared against the previous 7 days of data (ROLLING_WINDOW = 7), and absolute and relative growth are calculated.

  1. Absolute Growth

MIN_ABS_INCREASE = 500 is defined as a config parameter; changes below this value are considered negligible. If the absolute growth is less than MIN_ABS_INCREASE, the anomaly is ignored and the severity is left blank. The absolute-growth check captures meaningful real-world impact, does not react to noise or minor fluctuations, and prevents false alarms when the growth percentage is high but the absolute change is small.

  2. Relative Growth

Relative growth helps detect explosive trends. If growth is greater than or equal to a 100% increase over the baseline, it indicates a sudden outbreak and is classified as 'CRITICAL'; if growth is greater than 40%, it indicates sustained acceleration that needs monitoring and is classified as 'WARNING'; otherwise it is classified as 'MINOR'. A small worked example follows this list.
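
As a worked example with made-up numbers: suppose the 7-day baseline is 1,000 cases and the current value is 2,500.

# Illustrative numbers only, not real COVID data.
baseline = 1_000
curr = 2_500

abs_inc = curr - baseline            # 1_500 >= MIN_ABS_INCREASE, so not ignored
growth = abs_inc / max(baseline, 1)  # 1.5 >= 1.0, so the anomaly is CRITICAL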

After severity classification, the data is ready for the AI Agent to make an autonomous decision and take action.

Step 6: Build the Prompt for the AI Agent

Below is the prompt that defines how the AI agent reasons and makes decisions based on structured context and the predefined severity when an anomaly is detected. The agent is restricted to three explicit actions and must return a single, deterministic response for safe automation.

def build_agent_prompt(obs):
    return f"""
You are an AI monitoring agent for COVID-19 data.

Observed anomaly:
Date: {obs['date']}
Cases: {obs['cases']}
Severity: {obs['severity']}

Decision rules:
- FIX_ANOMALY: noise, reporting fluctuation
- KEEP_ANOMALY: real outbreak signal
- FLAG_FOR_REVIEW: severe or ambiguous anomaly

Reply with ONLY one of:
FIX_ANOMALY
KEEP_ANOMALY
FLAG_FOR_REVIEW
"""

Three data points, i.e., the date, the number of cases reported, and the severity, are provided to the prompt explicitly, which helps the AI Agent make an autonomous decision.
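
The prompt consumes a small observation dictionary built from the dataframe. The build_observation helper used in the next step is not shown in the original listing; the sketch below is one plausible implementation, assuming the columns created in the earlier steps:

def build_observation(df, idx):
    # Package the fields the prompt expects into a plain dictionary.
    return {
        "date": df.loc[idx, "Date"].strftime("%Y-%m-%d"),
        "cases": int(df.loc[idx, "Cases"]),
        "severity": df.loc[idx, "Severity"],
    }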

Step 7: Create your Agent with GroqCloud

We now create an autonomous AI agent using GroqCloud that makes intelligent, contextual decisions on the detected anomalies and their severities and takes appropriate actions. Three predefined actions for the AI Agent enforce validated outputs only.

# ---------------------------------------
# BUILDING AI AGENT
# ---------------------------------------
from phi.agent import Agent
from phi.model.groq import Groq

agent = Agent(
    name="CovidAnomalyAgent",
    model=Groq(id="openai/gpt-oss-120b"),
    instructions="""
You are an AI agent monitoring live COVID-19 time-series data.
Detect anomalies and decide for each anomaly:
"FIX_ANOMALY", "KEEP_ANOMALY", "FLAG_FOR_REVIEW"."""
)

for i in range(len(df)):
    if df.loc[i, "Anomaly"] == "YES":
        obs = build_observation(df, i)
        prompt = build_agent_prompt(obs)
        response = agent.run(prompt)

        decision = response.messages[-1].content.strip()
        decision = decision if decision in VALID_ACTIONS else "FLAG_FOR_REVIEW"
        df = agent_action(df, i, decision)

An AI agent named "CovidAnomalyAgent" is created, which uses an LLM hosted on GroqCloud for fast, low-latency reasoning. The agent runs a well-defined prompt, observes the data, applies contextual reasoning, makes an autonomous decision, and takes action within safe constraints.

The AI Agent is not just handling anomalies but making an intelligent decision for each detected anomaly. The agent's decision should accurately reflect the anomaly's severity and the required action.
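
The VALID_ACTIONS set referenced in the loop above is not shown in the original listing; a minimal definition could be as simple as the following, with any unexpected model output falling back to human review:

# Allowed agent outputs; anything else is treated as FLAG_FOR_REVIEW.
VALID_ACTIONS = {"FIX_ANOMALY", "KEEP_ANOMALY", "FLAG_FOR_REVIEW"}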

# ---------------------------------------
# AGENT ACTION DECIDER
# ---------------------------------------
def agent_action(df, idx, action):
    df.loc[idx, "Agent Decision"] = action

    if action == "FIX_ANOMALY":
        fix_anomaly(df, idx)

    elif action == "KEEP_ANOMALY":
        df.loc[idx, "Action"] = "Accepted as a real outbreak signal"

    elif action == "FLAG_FOR_REVIEW":
        df.loc[idx, "Action"] = "Flagged for human review"
    return df

The AI Agent ignores normal data points with no anomaly and considers only data points where "Anomaly" = "YES". The agent is constrained to return only three valid decisions: "FIX_ANOMALY", "KEEP_ANOMALY", and "FLAG_FOR_REVIEW", and the corresponding action is taken as defined in the table below:

Agent Decision | Action
FIX_ANOMALY | Auto-corrected by an AI agent
KEEP_ANOMALY | Accepted as a real outbreak signal
FLAG_FOR_REVIEW | Flagged for human review

For minor anomalies, the AI agent automatically fixes the data, preserves valid anomalies as-is, and flags critical cases for human review.

Step 8: Fix the Anomaly

Minor anomalies are typically caused by reporting noise and are corrected using local rolling-mean smoothing over recent historical values.

# ---------------------------------------
# FIX ANOMALY
# ---------------------------------------

def fix_anomaly(df, idx):
    # Replace the anomalous value with the mean of up to the previous 3 days.
    window = df.loc[max(0, idx - 3):idx - 1, "Cases"]
    if len(window) > 0:
        df.loc[idx, "Cases"] = int(window.mean())

    df.loc[idx, "Severity"] = ""
    df.loc[idx, "Action"] = "Auto-corrected by an AI agent"

It takes the immediately preceding 3 days of data, calculates their mean, and smooths the anomaly by replacing its value with this average. With this local rolling-mean smoothing approach, momentary spikes and data glitches can be handled.

Once an anomaly is fixed, the data point is no longer considered harmful, and the severity is deliberately cleared to avoid confusion. "Action" is updated to "Auto-corrected by an AI agent".
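
To see the smoothing in isolation, here is a toy example with made-up values (not real COVID data):

import pandas as pd

toy = pd.DataFrame({"Cases": [1000, 1100, 1050, 9000]})
toy["Severity"] = ""
toy["Action"] = ""

fix_anomaly(toy, 3)          # smooth the spike at index 3
print(toy.loc[3, "Cases"])   # 1050, the mean of the previous three days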

Full Code

Please go through the complete code for the statistical anomaly detection and the AI Agent implementation for anomaly handling.

https://github.com/rautmadhura4/anomaly_detection_agent/tree/main

Results

Let's review the results for the country "India," with different types of severity detected and how the AI Agent handles them.

Scenario 1: A Naive Implementation

The first attempt is a naive implementation where we detect minor anomalies and the AI Agent fixes them automatically. Below is a snapshot of the COVID data table for India with severity.

Image by author

We have also implemented a Streamlit dashboard to review the AI Agent's decisions and actions. In the result snapshot below, you can see that various minor anomalies are fixed by the AI Agent.

Image by author

This works best when anomalies are localized noise rather than regime changes.

Scenario 2: A Boundary Condition

Here, critical anomalies are detected, and the AI Agent flags them for review, as shown in the snapshot of the COVID data table for India with severity.

Image by author

The AI Agent's decisions and actions are shown in the Streamlit dashboard result snapshot. You can see that all the critical anomalies were flagged for human review by the AI Agent.

Image by author

Severity gating prevents dangerous auto-corrections for high-impact anomalies.

Scenario 3: A Limitation

For the limitation scenario, warning and critical anomalies are detected, as shown in the snapshot of the COVID data table for India with severity.

Image by author

The AI Agent's decisions and actions are shown below in the Streamlit dashboard result snapshot. You can see that the critical anomaly is flagged for human review by the AI Agent, but the WARNING anomaly is automatically fixed. In many real settings, a WARNING-level anomaly should be preserved and monitored rather than corrected.

Image by author

This failure highlights why WARNING thresholds should be tuned and why human review remains essential.

Use the complete code and test anomaly detection on the COVID-19 dataset with different parameters.
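
Putting the pieces together, an end-to-end run might look like the sketch below (country, window, and helper names follow the snippets above; treat it as a starting point rather than the exact repository code):

# End-to-end sketch: ingest, detect, classify, then let the agent act.
df = load_live_covid_data("India", 90)
df = detect_anomalies(df)
df = compute_severity(df)

for i in range(len(df)):
    if df.loc[i, "Anomaly"] == "YES":
        obs = build_observation(df, i)
        prompt = build_agent_prompt(obs)
        response = agent.run(prompt)

        decision = response.messages[-1].content.strip()
        decision = decision if decision in VALID_ACTIONS else "FLAG_FOR_REVIEW"
        df = agent_action(df, i, decision)

print(df[df["Anomaly"] == "YES"][["Date", "Cases", "Severity", "Agent Decision", "Action"]])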

Future Scope and Improvements

We used a very limited dataset and implemented rule-based anomaly detection, but several improvements can be made to the AI Agent implementation in the future:

  1. In our implementation, an anomaly is detected and a decision is made based on case counts only. In the future, the data can be enriched with features such as hospitalization records, vaccination data, and others.
  2. Anomaly detection is done here using statistical methods; it could be ML-driven in the future to identify more complex patterns.
  3. We have implemented a single-agent architecture; in the future, a multi-agent architecture can be implemented to improve scalability, clarity, and resilience.
  4. In the future, a human feedback loop should also be added to improve decisions over time.

Final Takeaways

Smarter AI agents enable operational AI that makes decisions using contextual reasoning, takes action to fix anomalies, and escalates to humans when needed. Here are some practical takeaways to keep in mind while building an AI Agent for anomaly detection:

  • To detect anomalies, use statistical methods and implement AI agents for contextual decision-making.
  • Minor anomalies are safe to auto-correct, as they are usually just reporting noise. Critical anomalies should never be auto-corrected; they should be flagged for review by domain experts so that real-world signals are not suppressed.
  • This AI agent should not be used in situations where anomalies directly trigger irreversible actions.

When statistical methods and an AI agent approach are combined properly, they transform anomaly detection from a simple alerting system into a managed, decision-driven system without compromising safety.
