The Open Source Agent Wave: What 24,000 GitHub Stars in 30 Days Mean for Workflow Builders
Published: April 17, 2026
April 2026 delivered the clearest signal yet that agentic AI is moving from research experiments to infrastructure: four major open source agent frameworks accumulated over 24,000 GitHub stars in 30 days.
- google/adk-python (Agent Development Kit) — 8,200+ stars, multi-agent orchestration for Python
- meta-llama/llama-stack — 6,400+ stars, standardized Llama 4 deployment and inference
- openai/codex-cli — 5,800+ stars, desktop-integrated background agent streaming
- block/goose — 4,900+ stars, local MCP agent with extensible tool catalog
This isn't hype. These are engineers solving real problems with real code, publicly. The star velocity reflects developer demand for agent infrastructure — not marketing.
The question for teams building on top of agents isn't whether to use them. It's what the abstraction layer above the framework looks like.
What Each Framework Actually Solves
google/adk-python
Google's Agent Development Kit is an opinionated multi-agent orchestration layer built for Python. The key design decisions:
- DAG-based execution: agents are composed as directed acyclic graphs, with typed inputs and outputs at each node
- Built-in evaluation: the framework ships with evaluation tooling to compare agent outputs across prompt variations and model versions
- Multi-agent coordination primitives: sequential, parallel, and hierarchical agent patterns with explicit handoff contracts
- Gemini-native but model-agnostic: first-class Gemini support with connector interface for OpenAI, Anthropic, and others
ADK is Google's answer to the question: what should a production Python agent runtime look like? The answer is structured, typed, and observable by default.
meta-llama/llama-stack
Llama Stack is not an agent framework in the traditional sense — it's a deployment and inference standardization layer. The goal: make Llama 4 models behave consistently across cloud, on-premises, and edge deployment targets.
The relevant piece for workflow builders: Llama Stack's inference layer is model-agnostic at the API level. A workflow that calls a Llama Stack endpoint works whether the underlying model is Llama 4, a fine-tuned derivative, or a future open-weight model. This decouples workflow logic from model choice in a way that API-first cloud providers don't offer.
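The decoupling can be made concrete with a small sketch. This is not the actual Llama Stack client API; the endpoint URL, model identifiers, and request shape below are illustrative assumptions. The point is that workflow code targets a stable inference contract, and the model becomes pure configuration:

```python
# Illustrative sketch only: the URL, model names, and request shape are
# hypothetical, not the real Llama Stack API. Workflow logic depends on a
# stable contract; swapping the model touches configuration alone.
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceEndpoint:
    url: str        # hypothetical inference endpoint
    model: str      # model identifier; changing this changes nothing else

def build_chat_request(endpoint: InferenceEndpoint, prompt: str) -> dict:
    """Workflow code calls this contract, never a model-specific API."""
    return {
        "url": endpoint.url,
        "body": {
            "model": endpoint.model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

llama = InferenceEndpoint("http://localhost:8321/v1/chat", "llama-4")
tuned = InferenceEndpoint("http://localhost:8321/v1/chat", "llama-4-finetuned")

# The same workflow call works against either model:
req_a = build_chat_request(llama, "Summarize this report.")
req_b = build_chat_request(tuned, "Summarize this report.")
```

Swapping `llama` for `tuned` (or a future open-weight model) leaves `build_chat_request` and every caller untouched, which is the decoupling the article describes.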
openai/codex-cli
The Codex CLI is the agent runtime most focused on developer workflow integration. The April 2026 update shipped background agent streaming — long-running agent tasks that stream incremental results to a terminal while the developer continues working.
The architectural insight: background streaming separates task dispatch from task results. You launch an agent task, get a task ID, and poll or subscribe for results. For coding agents, this means "fix this bug class" and "analyze this codebase" become async operations, not blocking prompts.
block/goose
Goose is a local-first MCP agent. It runs on your machine, connects to any MCP-compliant server, and executes tool-using agent workflows without a cloud intermediary.
The differentiator is MCP-native tool discovery: point goose at an MCP server URL and it reads the tool catalog automatically. With the MCP ecosystem now crossing 10,000 public servers, this means a local agent can access database connectors, code executors, file systems, and external APIs by URL without any custom integration code.
The Pattern Emerging From All Four
Each framework makes different tradeoffs on runtime (Python vs. CLI vs. local), target user (developers vs. enterprises vs. researchers), and model affinity (Google vs. Meta vs. OpenAI vs. model-agnostic).
But all four converge on the same architectural primitives:
1. Graph-based or DAG-based workflow composition
Every framework above models multi-step agent logic as a graph, not a flat prompt chain. Nodes have typed inputs and outputs. Edges define execution flow. This makes workflows debuggable — you can inspect the state at any node.
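The shared execution model can be shown in a few lines. This is a framework-agnostic sketch, not any one framework's node API; the node functions here are trivial stand-ins for agent steps:

```python
# A minimal, framework-agnostic sketch of the shared model: nodes with named
# outputs, edges defining execution order, and state inspectable after every
# node. Each framework's real node API differs from this.
from graphlib import TopologicalSorter

def run_dag(nodes: dict, edges: dict, inputs: dict) -> dict:
    """nodes: name -> fn(state) -> value; edges: name -> set of dependencies."""
    state = dict(inputs)
    for name in TopologicalSorter(edges).static_order():
        state[name] = nodes[name](state)  # state is inspectable at every node
    return state

nodes = {
    "fetch":     lambda s: s["url"].upper(),            # stand-in agent steps
    "summarize": lambda s: f"summary({s['fetch']})",
    "review":    lambda s: f"review({s['summarize']})",
}
edges = {"fetch": set(), "summarize": {"fetch"}, "review": {"summarize"}}

state = run_dag(nodes, edges, {"url": "doc.txt"})
print(state["review"])  # -> review(summary(DOC.TXT))
```

Because every node writes into the same inspectable state, debugging means reading `state` at any point in the run, which is precisely the property a flat prompt chain lacks.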
2. Background async execution
Sequential prompt-response cycles are replaced by async task dispatch. Agents run to completion, stream results, and return structured outputs. This is how you scale from toy demos to real production use.
3. Dynamic tool discovery via MCP or similar
Hard-coded tool lists are being replaced by dynamic discovery from registries and servers. This makes tool capabilities a runtime property, not a compile-time constraint.
4. Built-in observability hooks
Every framework ships middleware, callbacks, or tracing hooks for instrumenting execution. This is table stakes — teams building production systems need to see what agents are doing at each step.
The Abstraction Gap These Frameworks Leave Open
Here's what none of these frameworks provide: a visual interface for composing, testing, and iterating on workflows.
Every framework above is code-first. To use ADK, you write Python. To use codex-cli, you write shell commands. To use goose, you configure YAML and run a CLI.
This is the right foundation layer. But it creates a gap: the people who design workflows and the people who implement them are often not the same people. A product manager can't open a DAG definition file and understand the execution flow. An analyst can't modify a multi-agent pipeline by editing Python.
The history of every infrastructure wave follows the same pattern: low-level primitives first, visual tooling next. Kubernetes before Helm. Git before GitHub. SQL before every visual database tool ever built.
The agent framework wave is at the low-level primitives stage. The visual execution layer is the next step.
Why This Wave Matters for AgenticNode
AgenticNode is a visual agentic workflow editor built on the execution model these frameworks have now standardized.
The convergence on graph-based workflows, async execution, MCP tool discovery, and middleware observability isn't coincidental — it reflects what production agent systems actually need. Every framework reinvented these patterns independently and arrived at the same conclusions.
AgenticNode surfaces these patterns as a visual canvas:
- Workflow nodes represent agent steps with typed inputs and outputs — the same DAG model ADK and the other frameworks converged on, rendered as draggable nodes you can wire together without writing Python
- Real-time execution traces show token costs, reasoning steps, tool invocations, and output at each node as the workflow runs — the observability layer that framework hooks expose but don't surface
- 42 real tools available directly in the canvas, from HTTP requests to code execution to CSV parsing to database queries — the tool catalog that goose and ADK build dynamically from MCP, available without server configuration
- BYOK model routing lets you point any node at any LLM provider — the same provider-agnostic model selection that all four frameworks above now support
The frameworks define the contract. AgenticNode implements the interface on top of that contract that makes it accessible to people who aren't writing DAG definitions in Python.
The Velocity Signal
The 24,000 stars in 30 days aren't just a GitHub metric. They represent:
- Engineering teams evaluating and committing to these as their agent runtime
- Open source contributors building ecosystem tooling around each framework
- Enterprise evaluations that start with GitHub exploration
- Developer communities forming around each tool's paradigm
The wave validates that agent workflows are a solved problem at the infrastructure layer. What's not solved: making that infrastructure accessible to the people who need to build on top of it without becoming agent framework experts.
That's the workflow builder's moment.
What Teams Should Do Now
If you're an engineer building agent systems:
Evaluate google/adk-python and block/goose before committing to a framework. ADK is opinionated and structured, with strong built-in evaluation tooling — good for teams that want convention. Goose is flexible and MCP-native — good for teams that want to compose tools dynamically.
If you're building products on top of agents:
The framework convergence means your model integration layer is now a commodity. The differentiation is in the workflow composition experience, the observability, and the time from "I have an idea" to "this is running in production." Build there.
If you want to run agent workflows without writing framework code:
AgenticNode's canvas gives you the DAG-based workflow model, real-time observability, and 42 production tools available now, without configuring a Python environment or reading framework documentation.
Summary
April 2026's open source agent framework surge — google/adk-python, llama-stack, codex-cli, goose — signals infrastructure-level maturation for agentic AI:
- Graph-based workflows are the standard model — every serious framework converged independently on DAG-based execution
- Background async is replacing prompt-response — agent tasks are now fire-and-forget with streaming results
- MCP tool discovery is replacing hard-coded tool registration — tool catalogs are runtime, not compile-time
- Observability hooks ship by default — but raw hooks don't equal user-facing visibility
- The visual abstraction layer is the next wave — low-level primitives are here; the tool that makes them accessible to non-framework-experts is the next market
The developer demand is real. The infrastructure is ready. The question is whether the tools built on top of this infrastructure match the sophistication of the underlying layer.