Wednesday, January 14, 2026

A Coding Guide to Design and Orchestrate Advanced ReAct-Based Multi-Agent Workflows with AgentScope and OpenAI


In this tutorial, we build an advanced multi-agent incident response system using AgentScope. We orchestrate multiple ReAct agents, each with a clearly defined role such as routing, triage, analysis, writing, and review, and connect them through structured routing and a shared message hub. By integrating OpenAI models, lightweight tool calling, and a simple internal runbook, we demonstrate how complex, real-world agentic workflows can be composed in pure Python without heavy infrastructure or brittle glue code. Check out the FULL CODES here.

!pip -q install "agentscope>=0.1.5" pydantic nest_asyncio


import os, json, re
from getpass import getpass
from typing import Literal
from pydantic import BaseModel, Field
import nest_asyncio
nest_asyncio.apply()


from agentscope.agent import ReActAgent
from agentscope.message import Msg, TextBlock
from agentscope.model import OpenAIChatModel
from agentscope.formatter import OpenAIChatFormatter
from agentscope.memory import InMemoryMemory
from agentscope.tool import Toolkit, ToolResponse, execute_python_code
from agentscope.pipeline import MsgHub, sequential_pipeline


if not os.environ.get("OPENAI_API_KEY"):
   os.environ["OPENAI_API_KEY"] = getpass("Enter OPENAI_API_KEY (hidden): ")


OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")

We set up the execution environment and install all required dependencies so the tutorial runs reliably on Google Colab. We securely load the OpenAI API key and initialize the core AgentScope components that will be shared across all agents. Check out the FULL CODES here.
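As an optional sanity check (a minimal sketch, assuming the same environment variable names used above), we can confirm that the key is present and see which model the agents will use; OPENAI_MODEL can be overridden in the environment before this cell runs.

# Optional check: verify the key is set and show the selected model without printing the secret.
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
print(f"Using OpenAI model: {OPENAI_MODEL}")  # defaults to gpt-4o-mini unless overridden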

RUNBOOK = [
   {"id": "P0", "title": "Severity Policy", "text": "P0 critical outage, P1 major degradation, P2 minor issue"},
   {"id": "IR1", "title": "Incident Triage Checklist", "text": "Assess blast radius, timeline, deployments, errors, mitigation"},
   {"id": "SEC7", "title": "Phishing Escalation", "text": "Disable account, reset sessions, block sender, preserve evidence"},
]


def _score(q, d):
    # Crude relevance score: fraction of document tokens that also appear in the query.
    q = set(re.findall(r"[a-z0-9]+", q.lower()))
    d = re.findall(r"[a-z0-9]+", d.lower())
    return sum(1 for w in d if w in q) / max(1, len(d))


async def search_runbook(query: str, top_k: int = 2) -> ToolResponse:
    """Return the top_k runbook entries most relevant to the query."""
    ranked = sorted(RUNBOOK, key=lambda r: _score(query, r["title"] + r["text"]), reverse=True)[: max(1, int(top_k))]
    text = "\n\n".join(f"[{r['id']}] {r['title']}\n{r['text']}" for r in ranked)
    return ToolResponse(content=[TextBlock(type="text", text=text)])


toolkit = Toolkit()
toolkit.register_tool_function(search_runbook)
toolkit.register_tool_function(execute_python_code)

We define a lightweight internal runbook and implement a simple relevance-based search tool over it. We register this function along with a Python execution tool, enabling agents to retrieve policy information or compute results dynamically. This demonstrates how we augment agents with external capabilities beyond pure language reasoning. Check out the FULL CODES here.
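As an optional, throwaway check (not part of the pipeline itself), we can call the registered tool directly in the notebook to see exactly the text an agent would receive; the query string below is just an illustrative example.

resp = await search_runbook("phishing account escalation", top_k=1)
for block in resp.content:
    # Each block is a TextBlock, which behaves like a dict with "type" and "text" keys.
    print(block["text"] if isinstance(block, dict) else block)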

def make_model():
    return OpenAIChatModel(
        model_name=OPENAI_MODEL,
        api_key=os.environ["OPENAI_API_KEY"],
        generate_kwargs={"temperature": 0.2},
    )


class Route(BaseModel):
    lane: Literal["triage", "analysis", "report", "unknown"] = Field(...)
    goal: str = Field(...)


router = ReActAgent(
    name="Router",
    sys_prompt="Route the request to triage, analysis, or report and output structured JSON only.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
)


triager = ReActAgent(
    name="Triager",
    sys_prompt="Classify severity and immediate actions using runbook search when helpful.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
    toolkit=toolkit,
)


analyst = ReActAgent(
    name="Analyst",
    sys_prompt="Analyze logs and compute summaries using the python tool when useful.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
    toolkit=toolkit,
)


writer = ReActAgent(
    name="Writer",
    sys_prompt="Write a concise incident report with clear structure.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
)


reviewer = ReActAgent(
    name="Reviewer",
    sys_prompt="Critique and improve the report with concrete fixes.",
    model=make_model(),
    formatter=OpenAIChatFormatter(),
    memory=InMemoryMemory(),
)

We construct multiple specialized ReAct agents and a structured router that decides how each user request should be handled. We assign clear responsibilities to the triage, analysis, writing, and review agents, ensuring separation of concerns. Check out the FULL CODES here.
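Before wiring the full pipeline, we can optionally probe the Router on its own with a sample request and inspect the structured decision it returns. This is a minimal sketch; note that the probe adds a turn to the Router's in-memory history, so treat it as a throwaway check.

probe = await router(
    Msg("user", "Customers report intermittent 502 errors on checkout.", "user"),
    structured_model=Route,
)
print(probe.metadata)  # expected shape: {"lane": "...", "goal": "..."}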

LOGS = """timestamp,service,standing,latency_ms,error
2025-12-18T12:00:00Z,checkout,200,180,false
2025-12-18T12:00:05Z,checkout,500,900,true
2025-12-18T12:00:10Z,auth,200,120,false
2025-12-18T12:00:12Z,checkout,502,1100,true
2025-12-18T12:00:20Z,search,200,140,false
2025-12-18T12:00:25Z,checkout,500,950,true
"""


def msg_text(m: Msg) -> str:
    """Normalize a Msg into plain text, regardless of how its content is stored."""
    blocks = m.get_content_blocks("text")
    if blocks is None:
        return ""
    if isinstance(blocks, str):
        return blocks
    if isinstance(blocks, list):
        # Blocks may be TextBlock dicts; prefer their "text" field when present.
        return "\n".join(x.get("text", str(x)) if isinstance(x, dict) else str(x) for x in blocks)
    return str(blocks)

We introduce sample log data and a utility function that normalizes agent outputs into clean text. We ensure that downstream agents can safely consume and refine previous responses without format issues. This focuses on making inter-agent communication robust and predictable. Check out the FULL CODES here.
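A minimal illustration of the helper on a hand-built message (the content below is invented purely for demonstration):

sample = Msg("Analyst", [TextBlock(type="text", text="checkout 5xx rate: 3 of 6 requests")], "assistant")
print(msg_text(sample))  # -> checkout 5xx rate: 3 of 6 requests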

async def run_demo(user_request: str):
    # Ask the router for a structured lane decision, then dispatch accordingly.
    route_msg = await router(Msg("user", user_request, "user"), structured_model=Route)
    lane = (route_msg.metadata or {}).get("lane", "unknown")

    if lane == "triage":
        first = await triager(Msg("user", user_request, "user"))
    elif lane == "analysis":
        first = await analyst(Msg("user", user_request + "\n\nLogs:\n" + LOGS, "user"))
    elif lane == "report":
        draft = await writer(Msg("user", user_request, "user"))
        first = await reviewer(Msg("user", "Review and improve:\n\n" + msg_text(draft), "user"))
    else:
        first = Msg("system", "Could not route request.", "system")

    # Let all agents refine the answer collaboratively through a shared message hub.
    async with MsgHub(
        participants=[triager, analyst, writer, reviewer],
        announcement=Msg("Host", "Refine the final answer collaboratively.", "assistant"),
    ):
        await sequential_pipeline([triager, analyst, writer, reviewer])

    return {"route": route_msg.metadata, "initial_output": msg_text(first)}


result = await run_demo(
    "We see repeated 5xx errors in checkout. Classify severity, analyze logs, and produce an incident report."
)
print(json.dumps(result, indent=2))

We orchestrate the complete workflow by routing the request, executing the appropriate agent, and running a collaborative refinement loop using a message hub. We coordinate multiple agents in sequence to improve the final output before returning it to the user. This brings together all previous components into a cohesive, end-to-end agentic pipeline.
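The final cell relies on notebook-style top-level await. To run the same demo as a plain Python script, a minimal sketch (assuming no event loop is already running, so nest_asyncio is unnecessary) is to wrap the call with asyncio.run:

import asyncio

if __name__ == "__main__":
    # Reuses run_demo defined above; the request string is illustrative.
    result = asyncio.run(
        run_demo("Checkout is returning 5xx errors. Triage, analyze logs, and report.")
    )
    print(json.dumps(result, indent=2))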

In conclusion, we showed how AgentScope enables us to design robust, modular, and collaborative agent systems that go beyond single-prompt interactions. We routed tasks dynamically, invoked tools only when needed, and refined outputs through multi-agent coordination, all within a clean and reproducible Colab setup. This pattern illustrates how we can scale from simple agent experiments to production-style reasoning pipelines while maintaining clarity, control, and extensibility in our agentic AI applications.




Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
