Software Trades: AI Won't Replace Engineers; It Might Reorganize Them
The loudest narrative about AI and software development right now is consolidation. Non-engineers describe what they want, AI builds it, and the industry collapses into a handful of platforms. In this vision, the engineer is optional, a relic of a more primitive era, like the switchboard operator or the typesetter.
I think this narrative is largely marketing hype, and at the very least incomplete enough to be misleading.
Some of today’s AI companies will be tomorrow’s Google. Others will be tomorrow’s AltaVista. The same is true for the visions of how software gets built. The “talk into a microphone and software appears” future might happen, but it’s at least as likely a marketing pitch that doesn’t survive contact with reality. There’s an alternative future that gets far less attention: one where AI doesn’t replace engineers but restructures how engineering work is organized. Not fewer engineers, but differently organized ones. An explosion of small, independent software shops operating more like tradespeople than permanent employees.
To understand why, it helps to think about houses.
Why Software Teams Are Permanent
If you need a home built, you engage a team: architects, structural engineers, designers, and tradespeople. They build it. When the first version is done, you can release them. You might go years without needing an architect or an electrician again. When you eventually want a renovation, you re-engage, and there’s no requirement to hire the same people. Any licensed electrician can work on any house, because building codes create standardization and blueprints provide context.
Software doesn’t work this way. If your company builds a piece of software, you’re effectively required to keep a team employed, contracted, or on retainer indefinitely. This isn’t because software people figured out a better racket. It’s structural, driven by two forces that have no meaningful equivalent in physical construction.
Software demands continuous change. A home sits on a foundation governed by physics, which doesn’t change (natural disasters notwithstanding). Software sits on operating systems, browsers, libraries, APIs, and cloud infrastructure, all of which change constantly and without your permission. You can build a perfect piece of software today and, without touching a line of code, it can break six months later because a dependency released a breaking change or a TLS version was deprecated. The physical world has its own breaking changes (earthquakes, hurricanes, floods) but geological and meteorological cycle times are measured in decades or centuries, not in a corporation’s quarterly decision to end-of-life a product.
Beyond forced change, the low marginal cost of deploying software creates a demand cycle that physical construction never faces. Nobody calls their general contractor asking them to hotfix a breakfast nook into the kitchen while the crew is in the middle of applying the last coat of paint. But product managers routinely ask for equivalent-scope changes to software because deployment cost approaches zero. The business can’t stop asking for more, precisely because “more” feels cheap to deliver.
Bespoke codebases create enormous switching costs. Construction is governed by building codes: standardized, universally taught, inspectable by any licensed professional. A qualified electrician can walk into any home and understand the wiring because it follows known standards.
Software has no equivalent. Every codebase is an artisanally crafted combination of languages, libraries, and frameworks that has been hastily and inconsistently slapped together. Each carries its own architecture, conventions, and undocumented decisions. Replacing a software team isn’t like hiring a new plumber. It’s like hiring a new architect who has to reverse-engineer the structural logic of an already-built house from the inside. No blueprints, and the only indication that a wall is load-bearing is a scrawled note behind the drywall that reads “DO NOT TOUCH — Brian 2019.”
The construction analogy isn’t perfect. Commercial buildings do require ongoing facilities teams, and as homes get smarter with IoT, they’re starting to look more like software. But the core observation holds: these two forces, continuous change and high switching costs, lock companies into retaining software teams indefinitely.
I’ve spent the majority of my career doing project-based work for clients ranging from self-funded startups to Fortune 500 enterprises, and I’ve lived the switching cost problem repeatedly. On one engagement, my team joined a project mid-flight and found a sprawl of cloud-native services across Kubernetes and Lambda with no clear rationale for what ran where or why. When we asked “where do I go to understand what this system is supposed to do?”, the answer was “JIRA.” And by JIRA, they meant piecing the system together from ticket descriptions, linked documentation that was outdated the moment it was written, and comment threads where the real decisions were buried. The ramp-up wasn’t days or weeks; it was months of architectural spelunking. That cost is real, and it’s why companies default to keeping teams around.
AI Is Eroding the Switching Cost
AI doesn’t eliminate the first force. Software will still demand continuous change for the foreseeable future. But AI is beginning to erode the second force (the switching cost) in two ways that together could restructure the industry.
AI collapses ramp-up time. AI coding tools can ingest a codebase and provide meaningful context nearly instantly. What used to take weeks of knowledge transfer (understanding the architecture, learning the conventions, figuring out why things are built a certain way) can compress to hours when an AI assistant can answer those questions from the codebase itself. The tribal knowledge problem diminishes when the “tribe” includes an AI that has read every file. Recently, I cloned multiple repositories with zero documentation on how they fit together. With only AI assistance, I had a working local installation by lunch.
AI makes documentation economically viable. This is the deeper mechanism. Software has always needed the equivalent of construction blueprints and building codes: comprehensive documents that capture why a system exists, how it’s designed, and how the team works. Think Software Solution Documents that describe a system from business context through technical architecture, Architecture Decision Records that capture the reasoning behind technical choices, and SDLC guides that define how the team operates.
The industry has known documents like these are valuable for decades. Every methodology prescribes them in one form or another. But authoring these documents was painful. Engineers context-switched between prose, diagrams, and code across disconnected tools, producing artifacts that were stale the moment they were finished. This pain spawned decades of fads: generate code from diagrams, generate diagrams from code, or (the most popular) invoke “Working Software over Comprehensive Documentation” as permission to skip documentation entirely. And the engineer writing the document rarely benefited from it. Documentation was almost always authored for someone else: the next team, the auditor, the onboarding hire who might show up in six months. The person doing the work got nothing back for the effort. So the documents joined the Confluence graveyard: last modified in 2019, three lifetime views, two of which were the author.
AI collapses that cost. Producing these artifacts with AI assistance is considerably cheaper, and maintaining them is a near-joyful activity rather than the dreaded chore it’s been for decades. The economics flip from “luxury we can’t afford” to “investment that compounds.”
There’s renewed interest in the old “generate diagrams from code” techniques under the premise that AI will get it right this time. Maybe. But what I find more compelling is AI as the engineer’s extended brain, not replacing the documentation effort, but easing the pain of keeping documentation accurate, ensuring the codebase adheres to what was documented, and updating both when they deliberately diverge.
AI helps you create the document. The document then becomes context that makes AI more effective. On a current project, I maintain a Software Solution Document in markdown in the code repository specifically to give context to AI coding tools. The AI assistant uses that context to make informed design decisions, predicting what a useful dashboard should contain based on understanding the system’s purpose, speaking in business domain terminology without additional prompting, producing richer specifications because it understands the business problem the system solves. None of that happens without the document; and unlike its predecessors in the Confluence graveyard, this one has a consumer that reads it every time.
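As an illustration, a repo-resident Solution Document might be sketched like this. The headings, the service name, and the ADR reference are hypothetical assumptions, not a prescribed standard; the point is the shape, not the specifics:

```markdown
# Solution Document: Order Service   <!-- hypothetical example system -->

## Business Context
Why the system exists, who uses it, and the domain vocabulary an
AI assistant should adopt when discussing it.

## Architecture
Major components, how they communicate, and where each one runs.

## Key Decisions
Pointers to ADRs, e.g. "ADR-0007: Event sourcing for order history,"
including the alternatives that were considered and rejected.

## Conventions
Naming, testing, review, and deployment practices the team follows.
```

Checked in next to the code, a document like this becomes context an AI tool can read on every session, which is exactly what keeps it out of the Confluence graveyard.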
The second audience, AI tooling that reads the same documents humans do, is new. It changes the ROI calculation for documentation entirely.
The Quality Argument
Software engineering has always had a quality problem. We use the title “engineer,” but software engineers aren’t engineers the way electrical engineers, civil engineers, or mechanical engineers are. There’s no apprenticeship under a licensed professional. No certification requirement. No personal liability when incompetence leads to production failures. The industry’s hunger for developers has consistently outpaced its interest in professional standards. If our electric grid, airports, or even low-cost housing were engineered with the same rigor as most software systems, there would be a national emergency.
Most of the AI productivity conversation focuses on speed: build faster, ship more, do it with fewer people. There’s an underappreciated alternative: build better.
An experienced engineer with AI assistance can now write thorough tests because the tedium barrier is gone. Can produce documentation that actually reflects the system. Can catch edge cases they’d have skipped under time pressure. Can refactor confidently because the AI helps validate behavioral equivalence. Previously, I rarely felt I had the time to thoroughly document technical decisions. Now, when I realize I’ve made a significant architectural decision that deserves an ADR, creating it after the fact and pulling in the alternatives we’d considered from the session history is simple and natural. That kind of quality practice used to be the first thing cut under time pressure. The engineer doesn’t become unnecessary; they become more valuable, because AI amplifies the quality practices that compound over time.
The flip side matters too. Take the experienced engineer out of the seat and AI accelerates the production of wrong things at high speed. Without judgment about what to build, how to structure it, where the risks are, and which corners not to cut, you get ever-larger stacks of slop. The quality thesis depends on the human staying in the loop as the decision-maker.
This shifts the bottleneck. It moves from “can you write the code” to “can you make the right decisions about what to build and how.” Systems thinking becomes more valuable than knowing a specific language or framework. The ability to evaluate tradeoffs, spot risks, and design for evolution matters more than the ability to produce syntax.
The Trades Model
If switching costs collapse, the economic structure of software development can shift. Instead of permanent in-house teams, you get project-based shops, the same way construction, automotive repair, and healthcare already operate.
What might that look like?
Maintenance shops handling security patching, dependency updates, monitoring, and incident response. Renovation firms doing feature work, modernization, and re-platforming. Specialists in performance optimization, accessibility, compliance, or data engineering. General contractors coordinating multiple shops on larger efforts.
This isn’t hypothetical pattern-matching. The project-based consulting model I described earlier is the proto-version, limited by the very ramp-up costs this article is about. Every new engagement started with weeks or months of knowledge transfer, which meant the model only penciled out for engagements large enough to absorb that overhead. Collapse that cost, and the model becomes viable at much smaller scopes. A version of consulting that works for a two-week maintenance engagement, not just a two-year platform build.
What stays in-house is the product and technical leadership layer: the people who understand the business, set technical direction, and decide what to build. The homeowner, or the general contractor, who coordinates the trades. That role becomes more important, not less.
What enables it is the self-describing codebase. Repos that carry their own context, not just clean code and good tests, but Software Solution Documents, Architecture Decision Records, and SDLC guides that tell any practitioner the why and the how, not just the what. The documentation that AI makes viable to produce is the same documentation that makes the trades model viable to operate.
What I Don’t Know, and What I Believe
The construction analogy is imperfect; business domain knowledge is harder to codify than building codes, trust and verification are harder in software than in plumbing, and the kind of standardization that makes the trades model work in construction may need to emerge in software before this future fully arrives. Maybe AI drives that standardization by creating economic pressure toward convention-following, AI-readable codebases. Maybe it doesn’t.
Estimating when a product will launch is imperfect. Predicting the evolution of an AI-driven world, even more so. Maybe the ideas in this article will turn out to be the AltaVista of the future. But I’m betting on distribution over consolidation. I’m betting on the value of keeping humans in the driver’s seat over relinquishing control. I’m betting that higher quality is how we’ll effectively guide our agentic teammates.