AI Agents are being widely adopted across industries, but how many agents does an Agentic AI system need? The answer can be one or more. What really matters is that we pick the right number of agents for the task at hand. Here, we will look at the cases where we can deploy Single-Agent systems and Multi-Agent systems, and weigh the positives and negatives. This blog assumes you already have a basic understanding of AI agents and are familiar with the LangGraph agentic framework. Without any further ado, let's dive in.
Single-Agent vs Multi-Agent
If we are using a good LLM under the hood for the agent, then a Single-Agent system is good enough for many tasks, provided a detailed step-by-step prompt and all the necessary tools are present.
Note: A Single-Agent system has one agent, but it can have any number of tools. Also, having a single agent does not mean there will be only one LLM call; there can be multiple calls.
And we use a Multi-Agent system when we have a complex task at hand, for instance, cases where a few steps can confuse the system and result in hallucinated answers. The idea here is to have multiple agents where each agent performs only a single task. We orchestrate the agents in a sequential or hierarchical manner and use the responses of each agent to produce the final output.
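The sequential pattern can be sketched in plain Python before we get to any framework: each agent is a function that reads a shared state and writes its contribution back. The agent functions here are stand-in stubs for illustration, not real LLM calls:

```python
# A minimal sketch of sequential orchestration over a shared state.
# The "agents" here are plain stubs standing in for LLM-backed agents.

def research_agent(state: dict) -> dict:
    state["notes"] = f"facts about {state['topic']}"
    return state

def writer_agent(state: dict) -> dict:
    state["draft"] = f"Article based on: {state['notes']}"
    return state

def run_pipeline(topic: str) -> dict:
    state = {"topic": topic}
    # Sequential orchestration: each agent consumes the previous agent's output
    for agent in (research_agent, writer_agent):
        state = agent(state)
    return state

print(run_pipeline("AI agents")["draft"])
# Article based on: facts about AI agents
```

A hierarchical version would replace the fixed loop with a coordinator that decides which function to call next, which is exactly what the supervisor pattern later in this post does.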
One might ask, why not use Multi-Agent systems for all use cases? The answer is cost: it is important to keep costs in check by picking only the required number of agents and using the right model. Now let's take a look at use cases and examples of both Single-Agent and Multi-Agent agentic systems in the following sections.
Overview of Single-Agent vs Multi-Agent Systems
| Aspect | Single-Agent System | Multi-Agent System |
|---|---|---|
| Number of Agents | One agent | Multiple specialized agents |
| Architecture Complexity | Simple and easy to manage | Complex, requires orchestration |
| Task Suitability | Simple to moderately complex tasks | Complex, multi-step tasks |
| Prompt Design | Highly detailed prompts required | Simpler prompts per agent |
| Tool Usage | Single agent uses multiple tools | Each agent can have dedicated tools |
| Latency | Low | Higher due to coordination |
| Cost | Lower | Higher |
| Error Handling | Limited for complex reasoning | Better via agent specialization |
| Scalability | Limited | Highly scalable and modular |
| Best Use Cases | Code generation, chatbots, summarization | Content pipelines, enterprise automation |
Single-Agent Agentic System
Single-Agent systems rely on only a single AI agent to carry out tasks, often by invoking tools or APIs in a sequence. This simpler architecture is faster and also easier to manage. Let's look at a few applications of Single-Agent workflows:
- Code Generation: An AI coding assistant can generate or refactor code using a single agent. For example, given a detailed description, a single agent (an LLM together with a code execution tool) can write the code and also run tests. However, one-shot generation can miss edge cases, which can be fixed by using few-shot prompting.
- Customer Support Chatbots: Support chatbots can use a single agent that retrieves information from a knowledge base and answers user queries. A customer Q&A bot can use one LLM that calls a tool to fetch relevant information, then formulates the response. It is simpler than orchestrating multiple agents, and often good enough for direct FAQs or tasks like summarizing a document or composing an email reply based on provided data. Also, the latency will be much better compared to a Multi-Agent system.
- Research Assistants: Single-Agent systems can excel at guided research or writing tasks, provided the prompts are good. Let's take the example of an AI researcher agent. It can use tools (web search, etc.) to gather information and then summarize the findings for the final answer. So, I recommend a Single-Agent system for tasks like research automation, where one agent with dynamic tool use can compile information into a report.
Now, let's walk through a code-generation agent implemented using LangGraph. Here, we will implement a single agent that uses GPT-5-mini and give it a code execution tool as well.
Prerequisites
If you want to run it as well, make sure you have your OpenAI key, and you can use Google Colab or a Jupyter Notebook. Just make sure you are passing the API key in the code.
Python Code
Installations
!pip install langchain langchain_openai langchain_experimental
Imports
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langchain_core.messages import HumanMessage
from langchain_experimental.tools.python.tool import PythonREPLTool
Defining the tool, model, and agent
# Define the tool
repl = PythonREPLTool()

@tool
def run_code(code: str) -> str:
    '''Execute python code and return output or error'''
    return repl.invoke(code)

# Create model and agent
model = ChatOpenAI(model="gpt-5-mini")
agent = create_agent(
    model=model,
    tools=[run_code],
    system_prompt="You are a helpful coding assistant that uses the run_code tool. If it fails, fix it and try again (max 3 attempts)."
)
Running the agent
# Invoking the agent
result = agent.invoke({
    "messages": [
        HumanMessage(
            content="""Write python code to calculate fibonacci of 10.
            - Return ONLY the final working code
            """
        )
    ]
})

# Displaying the output
print(result["messages"][-1].content)
Output:

We got the response. The agent's reflection helps check if there is an error and tries fixing it on its own. Also, the prompt can be customized for naming conventions in the code and the level of detail in the comments. We can pass test cases along with our prompt as well.
Note: create_agent is the recommended approach in the current LangChain version. Also worth mentioning is that it uses the LangGraph runtime and runs a ReAct-style loop by default.
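To make that ReAct-style loop concrete, here is a stripped-down sketch of what happens on each turn. The model either requests a tool call or emits a final answer, and tool results are appended back into the history. The stubs below are illustrative stand-ins, not LangChain internals:

```python
# Minimal ReAct-style loop: the model either requests a tool call
# or returns a final answer; tool results are appended to the history.

def fake_model(messages):
    # Stub model: request the tool once, then answer using its result.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "tool_call": "run_code", "args": "fib(10)"}
    return {"role": "assistant", "content": "The result is 55"}

def fake_run_code(args):
    return "55"  # stub for the code-execution tool

def react_loop(user_input, max_turns=3):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        reply = fake_model(messages)
        messages.append(reply)
        if "tool_call" not in reply:   # final answer: stop looping
            break
        messages.append({"role": "tool", "content": fake_run_code(reply["args"])})
    return messages

print(react_loop("fibonacci of 10")[-1]["content"])
# The result is 55
```

The `max_turns` cap plays the same role as the "max 3 attempts" instruction in our system prompt: it bounds how long the loop can keep retrying.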
Multi-Agent Agentic System
In contrast to Single-Agent systems, Multi-Agent systems, as discussed, have multiple independent AI agents, each with its own role, prompt, and possibly a different model, working together in a coordinated manner. In a multi-agent workflow, each agent specializes in a subtask; for example, one agent might focus on writing while another does fact-checking. These agents pass information via a shared state. Here are some cases where we can use Multi-Agent systems:
- Content Creation: We can build a Multi-Agent system for this purpose. For instance, a system that crafts news articles could have a Search Agent to fetch the latest information from the web, a Curator Agent that filters the findings by relevance, and a Writer Agent to draft the articles. Then, a Feedback Agent reviews each draft and provides feedback, and the writer can revise until the article passes quality checks. Agents can be added or removed according to the needs of content creation.
- Customer Support and Service Automation: Multi-Agent architectures can be used to build more robust support bots. For example, say we are building an insurance support system. If a user asks about billing, the query is automatically passed to the "Billing Agent"; if it is about claims, it is routed to the "Claims Agent". Many more agents can be added to this workflow. The workflow can also involve passing prompts to multiple agents at once when quicker responses are needed.
- Software Development: Multi-Agent systems can assist with complex programming workflows that go beyond a single code generation or refactoring task. Take an example where we have to build a complete pipeline from creating test cases to writing code and running those tests. We can have three agents for this: a 'Test Case Generation Agent', a 'Code Generation Agent', and a 'Tester Agent'. The Tester Agent can delegate the task back to the 'Code Generation Agent' if the tests fail.
- Enterprise Workflows & Automation: Multi-Agent systems can be used in enterprise workflows that involve multiple steps and decision points. One example is security incident response, where we might need a Search Agent that scans logs and threat intel, an Analyzer Agent that reviews the evidence and forms hypotheses about the incident, and a Reflection Agent that evaluates the draft report for quality or gaps. They work in harmony to generate the final response for this use case.
Now let's walk through the code of a News Article Creator built with multiple agents, to get a better idea of agent orchestration and workflow creation. Here too we will be using LangGraph, and I will be taking the help of the Tavily API for web search.

Prerequisites
- You will need an OpenAI API Key
- Sign up and create a new Tavily API Key if you don't already have one: https://app.tavily.com/home
- If you are using Google Colab, I would recommend adding the keys to the secrets as 'OPENAI_API_KEY' and 'TAVILY_API_KEY' and giving the notebook access, or you can pass the API keys directly in the code.

Python Code
Installations
!pip install -U langgraph langchain langchain-openai langchain-community tavily-python
Imports
from typing import TypedDict, List
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import HumanMessage
from google.colab import userdata
import os
Loading the API keys into the environment
os.environ["OPENAI_API_KEY"] = userdata.get('OPENAI_API_KEY')
os.environ["TAVILY_API_KEY"] = userdata.get('TAVILY_API_KEY')
Initialize the tool and the model
llm = ChatOpenAI(
    model="gpt-4.1-mini"
)
search_tool = TavilySearchResults(max_results=5)
Define the state
class ArticleState(TypedDict):
    topic: str
    search_results: List[str]
    curated_notes: str
    article: str
    feedback: str
    approved: bool
This is an important step and helps store the intermediate results of the agents, which can later be accessed and modified by other agents.
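Note that each node only returns the keys it produces, and LangGraph merges that partial update into the full state. For simple keys, the merge behavior can be sketched in plain Python (a simplification; LangGraph actually applies per-key reducers):

```python
# Each "node" returns a partial update; the runtime merges it into state.

state = {"topic": "AI news", "search_results": [], "article": ""}

def curator_node(state):
    # Reads search_results, returns ONLY the key it produces.
    return {"curated_notes": f"summary of {len(state['search_results'])} results"}

update = curator_node({**state, "search_results": ["r1", "r2"]})
state = {**state, **update}   # default merge: last write wins per key

print(state["curated_notes"])
# summary of 2 results
```

This is why the agent functions below can stay small: each one reads what it needs from the state and contributes a single field.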
Agent Nodes
Search Agent (Has access to the search tool):
def search_agent(state: ArticleState):
    query = f"Latest news about {state['topic']}"
    results = search_tool.run(query)
    return {
        "search_results": results
    }
Curator Agent (Processes the information obtained from the search agent):
def curator_agent(state: ArticleState):
    prompt = f"""
    You are a curator.
    Filter and summarize the most relevant information
    from the following search results:
    {state['search_results']}
    """
    response = llm.invoke([HumanMessage(content=prompt)])
    return {
        "curated_notes": response.content
    }
Writer Agent (Drafts a version of the news article):
def writer_agent(state: ArticleState):
    prompt = f"""
    Write a clear, engaging news article based on the notes below.
    Notes:
    {state['curated_notes']}
    Previous draft (if any):
    {state.get('article', '')}
    """
    response = llm.invoke([HumanMessage(content=prompt)])
    return {
        "article": response.content
    }
Feedback Agent (Writes feedback for the initial version of the article):
def feedback_agent(state: ArticleState):
    prompt = f"""
    Review the article below.
    Check for:
    - factual clarity
    - coherence
    - readability
    - journalistic tone
    If the article is good, reply with:
    APPROVED
    Otherwise, provide concise feedback.
    Article:
    {state['article']}
    """
    response = llm.invoke([HumanMessage(content=prompt)])
    approved = "APPROVED" in response.content.upper()
    return {
        "feedback": response.content,
        "approved": approved
    }
Defining the Routing Function
def feedback_router(state: ArticleState):
    return "end" if state["approved"] else "revise"
This helps us loop back to the Writer Agent if the article is not good enough; otherwise, it will be accepted as the final article.
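One caveat: as written, the loop can bounce between the Writer and Feedback agents indefinitely if the article is never approved. A common safeguard (my addition, not part of the original workflow) is to track a revision counter in the state and force an exit after a few rounds:

```python
# Router variant with a revision cap. 'revisions' would be an extra int
# field added to ArticleState and incremented by the writer node.

MAX_REVISIONS = 3

def feedback_router(state: dict) -> str:
    # End either on approval or once the revision budget is spent.
    if state["approved"] or state.get("revisions", 0) >= MAX_REVISIONS:
        return "end"
    return "revise"

print(feedback_router({"approved": False, "revisions": 3}))
# end
```

This keeps worst-case cost and latency bounded, which matters once the loop contains multiple LLM calls per iteration.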
LangGraph Workflow
graph = StateGraph(ArticleState)
graph.add_node("search", search_agent)
graph.add_node("curator", curator_agent)
graph.add_node("writer", writer_agent)
graph.add_node("feedback", feedback_agent)
graph.set_entry_point("search")
graph.add_edge("search", "curator")
graph.add_edge("curator", "writer")
graph.add_edge("writer", "feedback")
graph.add_conditional_edges(
    "feedback",
    feedback_router,
    {
        "revise": "writer",
        "end": END
    }
)
content_creation_graph = graph.compile()

We defined the nodes and the edges, used a conditional edge at the feedback node, and successfully built our Multi-Agent workflow.
Running the Agent
result = content_creation_graph.invoke({
    "topic": "AI regulation in India"
})
from IPython.display import display, Markdown
display(Markdown(result["article"]))

Yes! We have the output from our agentic system here, and it looks good to me. You can add or remove agents from the workflow according to your needs. For instance, you could add an agent for image generation as well to make the article look more appealing.
Advanced Multi-Agent Agentic System
Previously, we looked at a simple sequential Multi-Agent system, but workflows can get really complex. Advanced Multi-Agent systems can be dynamic, with intent-driven architectures where the workflow runs autonomously with the help of an agent.
In LangGraph, you implement this using the Supervisor pattern, where a lead node can dynamically route the state between specialized sub-agents or standard Python functions based on their outputs. Similarly, AutoGen achieves dynamic orchestration through the GroupChatManager, and CrewAI leverages Process.hierarchical, requiring a manager_agent to oversee delegation and validation.
Let's create a workflow to understand supervisor agents and dynamic flows better. Here, we will create Writer and Researcher agents and a Supervisor agent that delegates tasks to them and completes the process.

Python Code
Installations
!pip install -U langgraph langchain langchain-openai langchain-community tavily-python
Imports
import os
from typing import Literal
from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.types import Command
from langchain.agents import create_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from google.colab import userdata
Loading the API Keys into the Environment
os.environ["OPENAI_API_KEY"] = userdata.get('OPENAI_API_KEY')
os.environ["TAVILY_API_KEY"] = userdata.get('TAVILY_API_KEY')
Initializing the model and tools
manager_llm = ChatOpenAI(model="gpt-5-mini")
llm = ChatOpenAI(model="gpt-4.1-mini")
tavily_search = TavilySearchResults(max_results=5)
Note: We will be using one model for the supervisor and a different model for the other agents.
Defining the tool and agent functions
# 1. Define the search tool (the sub-agents below use tavily_search directly)
def search_tool(query: str):
    """Fetches market news."""
    query = f"Fetch market news on {query}"
    results = tavily_search.invoke(query)
    return results

# 2. Define Sub-Agents (Workers)
research_agent = create_agent(
    llm,
    tools=[tavily_search],
    system_prompt="You are a research agent that finds up-to-date, factual information."
)
writer_agent = create_agent(
    llm,
    tools=[],
    system_prompt="You are a professional news writer."
)

# 3. Supervisor Logic (Dynamic Routing)
def supervisor_node(state: MessagesState) -> Command[Literal["researcher", "writer", "__end__"]]:
    system_prompt = (
        "You are a supervisor. Decide if we need 'researcher' (for information), "
        "'writer' (to format), or 'FINISH' to stop. Reply ONLY with the node name."
    )
    # The supervisor analyzes the history and returns a Command to route
    response = manager_llm.invoke([{"role": "system", "content": system_prompt}] + state["messages"])
    decision = response.content.strip().upper()
    if "FINISH" in decision:
        return Command(goto=END)
    goto_node = "researcher" if "RESEARCHER" in decision else "writer"
    return Command(goto=goto_node)
Worker Nodes (Wrapping the agents to return control to the supervisor)
def researcher_node(state: MessagesState) -> Command[Literal["supervisor"]]:
    result = research_agent.invoke(state)
    return Command(update={"messages": result["messages"]}, goto="supervisor")

def writer_node(state: MessagesState) -> Command[Literal["supervisor"]]:
    result = writer_agent.invoke(state)
    return Command(update={"messages": result["messages"]}, goto="supervisor")
Defining the workflow
builder = StateGraph(MessagesState)
builder.add_node("supervisor", supervisor_node)
builder.add_node("researcher", researcher_node)
builder.add_node("writer", writer_node)
builder.add_edge(START, "supervisor")
graph = builder.compile()
As you can see, we have only added the edge into the "supervisor" node; the other edges are created dynamically during execution.
Running the system
inputs = {"messages": [("user", "Summarize the market trend for AAPL.")]}
for chunk in graph.stream(inputs):
    print(chunk)

As you can see, the supervisor node executed first, then the researcher, then the supervisor again, and finally the graph completed execution.
Note: The supervisor agent doesn't return anything explicitly; it uses Command() to decide whether to route the prompt to another agent or end the execution.
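Under the hood, the graph keeps executing whichever node the last Command points to until it reaches END. A toy scheduler makes the mechanism visible (plain Python, with string sentinels standing in for LangGraph's Command and END):

```python
# Toy version of Command-based routing: each node returns the name of
# the next node (or "END"); the runner loops until it hits "END".

def supervisor(state):
    # Route to the researcher once, then finish.
    return ("END", state) if state.get("done") else ("researcher", state)

def researcher(state):
    state["done"] = True
    return ("supervisor", state)

def run(start, state):
    nodes = {"supervisor": supervisor, "researcher": researcher}
    trace, current = [], start
    while current != "END":
        trace.append(current)
        current, state = nodes[current](state)
    return trace

print(run("supervisor", {}))
# ['supervisor', 'researcher', 'supervisor']
```

The trace mirrors the execution order we just observed: supervisor, researcher, supervisor, then the run ends.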
Printing the final response
inputs = {"messages": [("user", "Summarize the market trend for AAPL.")]}
result = graph.invoke(inputs)

# Print the final response
print(result["messages"][-1].content)
Great! We have an output for our prompt, and we have successfully created a Multi-Agent Agentic System using a dynamic workflow.
Note: The output can be improved by using a stock market tool instead of a search tool.
Conclusion
Finally, we can say that there is no universal system for all tasks. The choice between Single-Agent and Multi-Agent agentic systems depends on the use case and other factors. The key is to choose a system according to the task complexity, the required accuracy, and the cost constraints. Make sure to orchestrate your agents well if you are using a Multi-Agent system. Also, remember that it is equally important to pick the right LLMs for your agents.
Frequently Asked Questions
Yes. Alternatives include CrewAI, AutoGen, and many more.
Yes. You can build custom orchestration using plain Python, but it requires more engineering effort.
Stronger models can reduce the need for multiple agents, while lighter models can be used as specialized agents.
They can be, but latency increases with more agents and LLM calls, so real-time use cases require careful optimization and lightweight orchestration.
