Orchestration topologies
Sequential pipelines
Agent A feeds agent B.
Agents on a conveyor belt
The simplest multi-agent topology is a sequential pipeline: agent A does its job, hands off to agent B, who hands off to agent C. Output of one is input to the next. No branching, no fan-out, no orchestrator deciding who to call. The sequence is fixed at design time.
This is the "least multi-agent" of the multi-agent topologies. It is easy to build, easy to debug, and the right answer surprisingly often.
What a pipeline looks like
```python
def pipeline(user_input):
    research_findings = research_agent(user_input)
    draft = writer_agent(research_findings)
    edited = editor_agent(draft)
    return edited
```

Three agents, three handoffs, one output. The structure is so flat that you barely notice it is a multi-agent system. Each agent has its own system prompt and its own tools, but the orchestration is just function composition.
A more realistic version uses the structured handoff schema from the previous module:
```python
def pipeline(task):
    h1 = research_agent.run(task)
    h2 = writer_agent.run(Handoff(
        from_agent="research-agent",
        to_agent="writer-agent",
        intent="Write a 400-word article based on these findings",
        context=h1.summary,
        artifacts={"sources": h1.sources},
    ))
    return editor_agent.run(Handoff(
        from_agent="writer-agent",
        to_agent="editor-agent",
        intent="Polish for tone and remove clichés",
        context=h2.draft,
        artifacts={},
    ))
```

Same shape, with explicit handoffs.
When sequential is the right call
Three patterns fit pipelines well:
1. The work is genuinely sequential
If step B cannot start without the output of step A, a pipeline is a literal description of the work. Forcing branching or a supervisor on top adds machinery without adding value.
2. Each step has a clear specialist
If the steps map cleanly to specialized agents (a researcher, a writer, an editor), pipelines let each agent see only the prior agent's output rather than its full reasoning trace. This is the structured-handoff benefit from the last module, applied at the system level.
3. The output of each step is small
Pipelines work best when each handoff is a paragraph or a structured payload, not a transcript. If your pipeline's first agent produces a 5000-token research dump that all needs to flow into the second agent, you may want a different topology, or a separate summarization step before the handoff.
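A minimal sketch of that compression step, assuming a hypothetical summarizer_agent that condenses the dump into a brief the next agent can actually use:

```python
def pipeline(task):
    research_dump = research_agent(task)     # large, unscoped output
    brief = summarizer_agent(research_dump)  # hypothetical compression step
    draft = writer_agent(brief)              # sees the brief, not the dump
    return editor_agent(draft)
```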
Where pipelines break
Three failure modes:
Errors propagate forward
If the research agent gets the topic slightly wrong, the writer's draft is also wrong, and the editor polishes a wrong draft. There is no path back. Pipelines have no recovery mechanism unless you build one explicitly.
The fix is to add an evaluation step (or a judge agent) between hops, but that is the road toward supervisor/worker (next lesson). At some point your pipeline has so many checks that it has implicitly become a different topology.
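For concreteness, a minimal sketch of one judged hop. The judge_agent here is an assumption, as are its verdict fields; nothing in the lesson's Handoff schema prescribes them:

```python
def checked_pipeline(task, max_retries=1):
    findings = research_agent(task)
    for _ in range(max_retries):
        # Hypothetical judge: verdict.ok and verdict.feedback are
        # assumed fields, not part of the lesson's schema.
        verdict = judge_agent(task=task, output=findings)
        if verdict.ok:
            break
        # Retry the hop with the judge's notes folded into the request.
        findings = research_agent(f"{task}\nReviewer notes: {verdict.feedback}")
    draft = writer_agent(findings)
    return editor_agent(draft)
```

One judged hop like this is cheap. A judge after every hop is a supervisor wearing a pipeline costume.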
No agent can ask follow-ups
In a pipeline, agent B cannot say "can you re-research with a tighter scope?" to agent A. It only sees A's output. If B realizes it needs more, it can either guess or fail. Both are bad.
The structure is rigid
Adding a new step in the middle of a pipeline requires changing the wiring. Pipelines are not great at evolution. If your workflow is changing weekly, a more flexible topology will save you maintenance time.
Pipelines vs DAGs
A pure pipeline is a single linear chain. Some workflows are almost linear but have small branches (one step that needs two parallel inputs). Those are technically DAGs (directed acyclic graphs) and you can model them as a small extension of the pipeline pattern:
```python
research = research_agent(task)
fact_check = fact_check_agent(research)  # parallel
sentiment = sentiment_agent(research)    # parallel
draft = writer_agent({
    "research": research,
    "fact_check": fact_check,
    "sentiment": sentiment,
})
```

Two parallel agents read from the same input, then their outputs converge. This is still essentially the pipeline mindset: fixed topology, no orchestrator deciding what to run. Just a slightly richer graph.
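One caveat: as written, the two "parallel" branches execute one after the other; the comments describe data dependencies, not concurrency. If the branch agents are slow, you can run them genuinely in parallel with the standard library. A sketch, assuming the same hypothetical agent functions as above:

```python
from concurrent.futures import ThreadPoolExecutor

def dag_pipeline(task):
    research = research_agent(task)
    # Both branches depend only on research, so they can run concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fact_check_future = pool.submit(fact_check_agent, research)
        sentiment_future = pool.submit(sentiment_agent, research)
        fact_check = fact_check_future.result()  # blocks until the branch finishes
        sentiment = sentiment_future.result()
    return writer_agent({
        "research": research,
        "fact_check": fact_check,
        "sentiment": sentiment,
    })
```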
If your DAG starts to look like spaghetti or your branching depends on runtime decisions, you have outgrown the pipeline mindset. That is when supervisor/worker becomes the right shape.
Pipelines vs other topologies
| Property | Sequential | Supervisor/worker | Hierarchical | Swarm |
|---|---|---|---|---|
| Topology decided at | Design time | Runtime | Runtime | Runtime |
| Recovery from bad output | None unless you add it | Built in | Built in | Emergent |
| Easy to debug | Very | Medium | Hard | Very hard |
| Adapts to varied requests | Poorly | Well | Well | Variably |
| Right for | Fixed workflows with clear stages | Most general agent work | Big systems with sub-domains | Special cases |
Pipelines are the boring answer. For workflows that look like content production, structured ETL, or any "step 1, step 2, step 3" process where the steps do not change, they are also the right answer.
Pipelines as a stepping stone
A useful design exercise: if your problem fits a pipeline, build the pipeline first, even if you suspect you will need supervisor/worker eventually. The pipeline forces you to make each step's input/output contract explicit. Those contracts are exactly what you need when you migrate to a more dynamic topology later. The wasted work is small; the clarity gain is large.
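One way to make those contracts concrete is to type each step's payload, for example with dataclasses. The field names here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchFindings:
    summary: str
    sources: list[str] = field(default_factory=list)

@dataclass
class Draft:
    text: str
    word_count: int

def pipeline(task: str) -> str:
    findings: ResearchFindings = research_agent(task)  # step 1 contract
    draft: Draft = writer_agent(findings)              # step 2 contract
    return editor_agent(draft)                         # returns polished text
```

If you later migrate to supervisor/worker, these schemas move with you: a supervisor routing work at runtime still hands each worker the same typed payload.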
A common anti-pattern: the "smart" pipeline
The temptation when a pipeline is not quite working is to make each step "smarter" by giving it more context: the writer gets the original task plus the research findings plus the user's history plus the brand guidelines plus the editor's previous comments. Now each step is a monolith again, and you have a long pipeline of monoliths.
Resist this. The whole point of the pipeline is that each step has a tight, scoped responsibility. If a step needs more information, ask whether the prior step should produce it as a structured artifact, not whether the current step should be expanded.
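As a contrast, here is the scoped version of the brand-guidelines case, reusing the Handoff shape from earlier. The guidelines artifact and its helper are illustrative assumptions:

```python
# Bloated: the writer's prompt absorbs everything and becomes a monolith.
draft = writer_agent(task + user_history + brand_guidelines + editor_comments)

# Scoped: the writer gets one handoff with named artifacts it can cite.
draft = writer_agent.run(Handoff(
    from_agent="research-agent",
    to_agent="writer-agent",
    intent="Write a 400-word article based on these findings",
    context=h1.summary,
    artifacts={
        "sources": h1.sources,
        "style_notes": brand_guidelines.as_bullets(),  # hypothetical helper
    },
))
```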
Key takeaway
Sequential pipelines are the simplest multi-agent topology: A then B then C, fixed at design time. They fit fixed workflows, fail at varied requests, and have no recovery without explicit machinery. Pipelines also make great training wheels: building one forces you to define handoff contracts that pay off in any later topology. The next lesson moves to supervisor/worker, the topology that handles varied requests by deciding at runtime who runs.