For a long time, startups didn't really have to answer the question of how many engineers they needed.
In theory they did. There was always a spreadsheet somewhere. There was always some plan about hiring, roadmap capacity, and what the team would look like in twelve months.
But in practice, the answer was usually determined by simpler things.
How much money had you raised? How many good people could you recruit? How quickly could you onboard them? How much chaos could the team absorb without breaking?
Those constraints did most of the thinking for you.
If you were a startup with ten engineers, you usually did not have ten because someone had carefully concluded that ten was the equilibrium point between market opportunity, product complexity, and organizational capacity. You had ten because that was what you could afford, recruit, and manage.
That was close enough.
AI changes this.
It does not remove the need for engineers. But it does change what one engineer can do. A developer used to be a line worker in a software factory. Now each developer is starting to look more like the manager of one. That changes the economics of capacity.
If one engineer can now produce much more software, then the old question of how many engineers you need becomes harder, not easier.
The first reaction is to say that this is great. Instead of hiring another developer, you can spend that money on tools. If four developers with AI can produce what five or six used to produce, then that looks like an obvious trade.
And often it is.
But only for a while.
At some point there are diminishing returns. Giving a developer more tokens is not the same as giving them more judgment. It is not the same as giving them better taste.
So the question becomes: what is the right capacity for a team when producing software is no longer the main bottleneck?
This is a different question from the one startups used to ask.
The old question was mostly financial. How many engineers can we afford?
The new question is strategic. How much change should this company produce?
That sounds abstract, but it is actually more practical than the old one.
When software was expensive to produce, the answer was usually “as much as possible.” The bottleneck was implementation. If you could afford more capacity, you wanted it.
When software becomes cheap, implementation stops being the main constraint. Other constraints become visible.
Judgment is one of them. Customers will always tell you what they want. But what a customer asks for is not always the problem worth solving. Getting to the root of what they actually need — the real job to be done — was always hard work, and it still is. When building was expensive, the cost of building wrong enforced that discipline. Now it doesn't. A cheap solution to the wrong problem is not progress.
Keeping the system coherent is another. The faster the codebase changes, the more important it becomes that the parts still fit together.
So is change management. Customers can only absorb so much change at once. Shipping faster is not always better if it outruns the user's ability to understand the product.
But the deepest constraint, and the one that matters most, is complexity.
Every feature makes future features more expensive.
Not because writing the next feature takes more lines of code. AI may make that part cheap. But because every new feature adds another workflow, another interaction, another way to break something else, another piece of the product that someone has to understand.
Software is unusual because it creates future cost out of present abundance.
The easier it becomes to build, the easier it becomes to overbuild.
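The compounding is easy to see in a toy model. The numbers below are invented purely for illustration: assume each new feature carries a small overhead for every feature that already exists, because it has to coexist with all of them.

```python
# Toy model of compounding feature cost. All parameters are
# hypothetical, chosen only to illustrate the shape of the curve.
base_cost = 1.0         # cost of a feature in an empty product
interaction_tax = 0.05  # assumed overhead per existing feature

def feature_cost(n_existing):
    """Cost of the next feature, given how many already exist."""
    return base_cost * (1 + interaction_tax * n_existing)

# Total cost of shipping 100 features, one after another.
total = sum(feature_cost(n) for n in range(100))

# With zero interaction tax this would be 100.0; with the tax,
# the total is far higher, and the 100th feature alone costs
# several times what the first one did.
```

The exact numbers do not matter. What matters is the shape: even a tiny per-feature tax makes the total grow faster than the feature count, which is the sense in which present abundance creates future cost.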
That means the bottleneck shifts again. It is no longer just about whether a team can produce more. It is about whether the company can absorb what the team produces without making the product worse.
This is why many of the old capacity models feel less useful now.
They were often built around the cost of engineering labor. But if a small team can now produce the output of a much larger one, then headcount by itself stops being a very good proxy for capacity.
A company may not need more engineers. It may need better product judgment.
Not in the sense of building less. The underlying problems customers have do not shrink. But a solution is just one way to address a problem, and many solutions are wrong. When software was expensive, that mattered: you could not afford to build the wrong thing, so you worked harder to understand the right one. Now you can skip that work. You can build quickly enough that bad diagnosis looks like productivity, at least for a while.
That is a different kind of management problem. The bottleneck is no longer the engineers. It is whoever decides what they should build.
It also means that a lot of startup hiring in the past was less principled than it looked. There were models, but the real logic was often simpler: hire against the budget, push as hard as possible, and hope speed solves enough things before money runs out.
That worked because the constraints were external.
Now the constraints are becoming internal.
You may be able to produce software much faster than before. But that does not mean you should.
There is some rate of change at which a product improves. Beyond that rate, the product starts to lose coherence. The team starts spending more time navigating what it already built. The customer starts seeing motion instead of progress.
That is the new equilibrium.
The interesting thing about AI is not just that it makes software cheaper. It is that it forces companies to finally develop a real theory of product value: not how much software to produce, but how to keep adding utility without letting complexity outpace it.
Before, they could let external constraints answer that for them.
Now they have to answer it themselves.