AI Development Tools Are Growing Up
Last week, JetBrains announced that Cursor joined the Agent Client Protocol (ACP) registry and is now available as an agent inside JetBrains IDEs. Cursor, Anthropic’s Claude, OpenAI, Google: all plugging into the same protocol, all available from the same agent picker. I’ve watched technologies go through a similar adoption arc enough times to recognize what’s happening when an ecosystem starts to grow up.
Gartner’s Hype Cycle is the well-known model for tracking where a technology sits, and by that measure AI development tools are probably somewhere between the Peak of Inflated Expectations and the Trough of Disillusionment. But Gartner measures hype. What I care about is what practitioners actually experience when new technology arrives, and over 25 years I’ve watched that experience follow a pattern different enough from Gartner’s model to be worth describing on its own.
A Practitioner’s Adoption Lifecycle
- Evaluation — “Is this worth further investment?” You’re kicking the tires. The technology is impressive in demos, but you’re trying to determine if it actually changes anything about how you work or whether it’s another tool that’ll collect dust. Most new technology doesn’t make it past this stage. (I don’t miss Google Wave, but boy was I excited about it!)
- Isolated Adoption — “I’m in, but nobody has the playbook yet.” You’ve decided the technology has legs. You’re using it for real work, but adoption is murky. Going all-in risks vendor lock-in or workflow disruption, best practices don’t exist, and details are hard to come by. Everyone claims to be doing it; what they’re actually doing is personal discovery.
- Convergence — “The tools start playing nice.” The ecosystem matures past winner-take-all. Interoperability is one way this shows up (e.g. tools talking through shared protocols like ACP), but convergence also includes tools absorbing each other’s ideas, practices aligning across communities, and the technology becoming compatible with the business structures or technical architectures it has to live inside.
- Common Practice — “Now there are playbooks.” Shared usage patterns emerge that teams can start from and evolve. They’re not perfect and they aren’t permanent, but they give you a well-known starting point to adapt to your context rather than inventing from scratch.
- Consolidation — “The field settles.” The strongest or most differentiated options survive; the rest fade or get absorbed. The landscape becomes navigable. You stop tracking every new entrant because the viable options are clear.
These aren’t discrete steps. Technologies blur between stages, and different parts of an ecosystem can be in different stages simultaneously. This is a mental model for knowing what questions to ask and how to navigate uncertainty, not a checklist to mark off.
Where AI Development Tools Sit
AI development tools are firmly in Isolated Adoption, with early signals of Convergence.
Copilot and ChatGPT were Evaluation. Genuinely impressive technology, but I still designed, built, and debugged the same way. Copilot gave me tab-completion (which I eventually turned off) that was more enthusiastic than helpful. I remember picturing a happy dog yapping at my heels while I was trying to read a technical book: yes, you’re cute, but you’re also noisy and not helping me think. And ChatGPT gave me a faster way to search the internet, pasting code into a browser tab or asking a question in plain English instead of crafting the perfect search query. Technologically impressive, sometimes useful, but mostly a side activity.
Cursor and Claude Code moved us into Isolated Adoption. AI stopped being a tool I consulted and became a collaborator embedded in my workflow. For me, that’s primarily Claude Code in my terminal; for many of my colleagues it’s directly in their editor. Either way, it’s changing the shape of how we work. But the practices for using these tools well? I’m largely figuring those out on my own. So is everyone else. We’re all doing it; nobody has the playbook yet.
The ACP announcement is a signal that we’re crossing into Convergence. An agent built by one company can run inside an editor built by another, through an open protocol. That level of interoperability is one of the clearest signs that an ecosystem is starting to converge.
On every team I’ve led, I’ve treated editor choice as an individual engineering decision, not a team standard: .gitignore the editor configuration files, and rely on build tools, githooks, linters, and portable documentation to create consistency. The tools don’t need to be the same; the interfaces between them do. That principle only works when the ecosystem converges on shared protocols, and ACP extends it to the AI layer.
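As a concrete sketch of that setup (the specific entries are illustrative, not a prescription for every stack), the personal layer gets ignored while the shared layer stays in the repo:

```
# .gitignore — personal layer: editor and agent configuration
# stays out of version control.
.idea/
.vscode/
*.sublime-workspace
.cursor/

# The shared layer is deliberately NOT ignored: build scripts,
# githook scripts, linter configs, and docs live in the repo and
# carry the team's consistency.
```

Consistency then comes from the checked-in tooling (a linter config, a hooks directory wired up via `git config core.hooksPath`) rather than from everyone running the same editor.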
I’ve Seen This Before
Git is an example I keep coming back to. When Git arrived, it was almost immediately obvious that it was the future of version control. But knowing Git was the right tool and knowing how to use it were two different problems. Teams spent years mimicking Vincent Driessen’s gitflow model because it was the first widely shared pattern, then gradually adapting it, then eventually moving past the recipe as they understood Git’s design well enough to create workflows that fit their own context. Today, most teams can have an informed conversation about branching strategies because the practices matured alongside the tool. You can map Git’s journey cleanly: rapid Evaluation, long Isolated Adoption, gradual Convergence around shared patterns, and eventually Common Practice and Consolidation. That maturation took over a decade.
Git is also worth noting because it’s both a tool and a protocol. It’s the shared protocol that allows teams to choose GitHub, GitLab, or Bitbucket as their hosting platform without changing how they work. Git as a standard facilitated vendor flexibility even as the industry converged on it as the underlying technology. That’s the pattern ACP and MCP are following.
I remember what editor-based language support looked like before the Language Server Protocol. Every editor needed its own implementation — VS Code, Vim, IntelliJ, each building their own TypeScript plugin independently. LSP was a Convergence moment: implement a language server once, and it works everywhere. ACP is doing for AI agents what LSP did for language intelligence. MCP is doing for tool and context integration what LSP did for code analysis. Protocols turn a fragmented landscape into a composable one. That pattern holds: when tools start talking through shared protocols, the ecosystem is maturing, and what matters shifts from which product you chose to what practices you’re building.
The Cost of Convergence
I’ve been through enough protocol wars to know they don’t always resolve cleanly: SOAP versus REST, CORBA versus basically everything. ACP, MCP, and whatever comes next aren’t guaranteed to converge, and a genuine breakthrough in AI reasoning could reset parts of the cycle back to Evaluation overnight. You can already see the friction: tools have converged on markdown files in the repo for providing project context, but every tool has its own name for them (CLAUDE.md, .github/copilot-instructions.md, .cursorrules). Publications and tools like SpecKit are emerging to help teams navigate usage patterns, which is itself a signal of movement toward Common Practice. But we’re partway through Convergence, not at the end.
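One workaround teams use while the names are still fragmented is to keep a single canonical context file and symlink each tool’s expected name to it so they can never drift apart. A minimal sketch (CONTEXT.md as the canonical name is my assumption, not a standard, and the symlinked names are examples):

```shell
# Keep one canonical context file; point tool-specific names at it.
mkdir -p /tmp/context-demo && cd /tmp/context-demo
printf '# Project context for AI agents\n' > CONTEXT.md

ln -sf CONTEXT.md CLAUDE.md      # name Claude Code looks for
ln -sf CONTEXT.md .cursorrules   # name older Cursor versions look for

# Every tool now reads the same content; edit CONTEXT.md once.
cat CLAUDE.md
```

Until a shared name wins Convergence, the symlinks are the interoperability layer you build yourself.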
And if my experience with Agile taught me anything, it’s that Convergence can be lossy. I was an early adopter of Agile practices, reading XP, Fowler, Beck, and Schwaber around 2000 while a grad school professor told me it was “a bunch of hippies who think you light incense and have software come out the other side.” I learned through books, blog posts, and meetups (pure Isolated Adoption). When Scrum arrived, it provided a powerful Convergence point. But Scrum won that convergence not because it was the most complete synthesis of what existed, but because it was the most communicable one. XP’s technical rigor and Kanban’s flow management got left behind for the majority. For many, “Agile” became synonymous with “Scrum,” and the nuances that experienced practitioners understood were lost in that packaging.
The risk with AI development tools is the same. “Vibe coding” has already become the industry’s casual shorthand, and while many of us are racing to replace it with terms that signal rigor (agentic engineering, spec-driven development, context engineering), vibe coding is still the term du jour. That’s the Convergence battle happening right now, and if Agile is any guide, the winner will be the most communicable framework, not the most complete.
There’s a useful distinction buried in Agile’s history: the difference between agile and Agile. Lowercase agile was a set of principles practitioners discovered through doing the work. Uppercase Agile became a proper noun (a packaged methodology, a training certification, a consulting industry). I’ve spent over 25 years in various teams and organizations trying to clear the murk between being agile vs. being Agile. I’m already having the same conversation about AI-assisted development, and I anticipate repeating it for the next decade. Whatever proper-noun framework wins that convergence, keeping the lowercase version (the first principles) in clear view is what matters.
Practices Over Products
I’ve watched enough technology cycles to know that the products that seem permanent during Evaluation rarely are. What lasts are the protocols, the practices, and the practitioners who learned to ask the right questions for the phase the ecosystem is actually in.
The tools are growing up. Whether the practices you’re building today will survive the next tool change matters more than which tool you picked.