Building an AI agent is not a single tool decision — it is a stack decision. You need a framework to define agent behavior, an LLM provider for reasoning, a vector database for memory, orchestration to coordinate multi-step workflows, and monitoring to catch failures before your users do. The wrong choice at any layer compounds downstream.
We organized the AI agent ecosystem into 13 categories that map to the actual decisions teams face when shipping agents to production. Each category page includes every tool we track, with pricing, integration details, and the specific use cases where each tool fits best. No pay-to-rank, no sponsored placements — just the infrastructure landscape as it exists today.
Most production agent stacks converge on a similar architecture: an LLM provider handles reasoning, a framework manages agent logic and tool calling, a vector database stores context and memory, and an orchestration layer coordinates multi-step workflows. The choices that vary most between teams are at the edges — which gateway to use for provider failover, whether to run code execution in sandboxed containers or serverless functions, and how much observability to build in from day one.
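The layered stack above can be sketched as a composition of interfaces, one per layer. This is a minimal illustration, not any particular framework's API: the interface names, the stub `EchoLLM` and `ListStore` classes, and the `Agent.run` loop are all hypothetical, standing in for a real LLM SDK, vector database client, and orchestration layer.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical interfaces for two stack layers; in production, a real
# LLM SDK and a vector DB client would implement these shapes.
class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class VectorStore(Protocol):
    def search(self, query: str, k: int) -> list[str]: ...

@dataclass
class Agent:
    llm: LLMProvider
    memory: VectorStore

    def run(self, task: str) -> str:
        # Retrieve context from memory, then hand reasoning to the LLM.
        context = self.memory.search(task, k=3)
        prompt = f"Context: {context}\nTask: {task}"
        return self.llm.complete(prompt)

# Stub implementations so the sketch runs end to end.
class EchoLLM:
    def complete(self, prompt: str) -> str:
        return f"answered: {prompt.splitlines()[-1]}"

class ListStore:
    def __init__(self, docs: list[str]):
        self.docs = docs

    def search(self, query: str, k: int) -> list[str]:
        return self.docs[:k]

agent = Agent(llm=EchoLLM(), memory=ListStore(["doc-a", "doc-b"]))
print(agent.run("summarize ticket #42"))
```

The point of the interfaces is the swap cost: because each layer is behind a small contract, replacing the vector database or the LLM provider later touches one adapter, not the agent logic.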
The ecosystem is consolidating. Anthropic's Model Context Protocol (MCP), now managed by the Linux Foundation, is becoming the standard for tool integration. Framework adoption is concentrating around a handful of mature options. And the distinction between "platform" and "framework" is blurring as platforms add code-first interfaces and frameworks add visual builders.
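To make the MCP consolidation concrete: MCP messages are JSON-RPC 2.0, and a client invokes a server-side tool with a `tools/call` request. The sketch below shows roughly that wire shape; the tool name `search_docs` and its arguments are invented for illustration, and the MCP specification is the authoritative source for the full schema.

```python
import json

# Rough sketch of an MCP-style tool invocation over JSON-RPC 2.0.
# The "tools/call" method name follows the MCP spec; the tool name
# and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",  # hypothetical tool exposed by a server
        "arguments": {"query": "refund policy"},
    },
}

# Serialized, this is what travels over the transport (stdio or HTTP).
print(json.dumps(request, indent=2))
```

Because every MCP server speaks this same envelope, a framework that implements the client side once can integrate any compliant tool server, which is why the protocol reduces the pairwise integration work the paragraph above describes.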
If you are choosing your stack for the first time, start from the use case, not the technology. A customer service agent and a code review agent need different infrastructure even if both use the same LLM. Each category page links to the use cases it serves best, so you can work backwards from the outcome you need.