When a startup is young, the founder ships fast. Part of this is that the product is still small. But there is another reason, one that gets less attention: the whole product fits inside the founder's head. Call it their mental context window.
They built the thing. They made the tradeoffs and often wrote the code. When a new idea comes up, they can immediately see where it belongs, what it touches, what it might break. They are fast not because they are exceptional. They are fast because they can reason from a nearly complete model.
As a product grows, the model becomes impossible to maintain. Parts were built years ago. Parts were built by other people. Parts still serve real use cases that no one remembers until they break. The product stops fitting inside any one person's mental context window.
And that is where a lot of hesitation comes from. Not indecision. Not lack of taste. Lack of loaded context.
Say you are building software that helps customers manage inventory. A customer asks for lot tracking, the ability to tie stock to specific batches, track expiration dates, trace which orders consumed which units. You know you want to build it. The customers need it, competitors have it, the decision is not hard. The question is not what. The question is how. Because lot tracking is not really one feature. It changes how stock gets received, consumed, corrected, audited, and reported. It touches every assumption the system makes about what a unit of inventory is, including assumptions buried in code nobody has thought about in two years, serving a use case that still matters even if nobody remembers why. That is the real work. Not writing the code. Understanding the current shape of the system well enough to change it without quietly breaking it.
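To make the shape of that change concrete, here is a minimal sketch, with entirely hypothetical names, of how the "unit of inventory" assumption shifts. Before lot tracking, stock is one fungible number; after, the total becomes a derived value and every operation has to say which batch it touched:

```python
from dataclasses import dataclass, field
from datetime import date

# Before lot tracking: the system's core assumption is that a SKU's
# stock is one fungible number. (Hypothetical model for illustration.)
@dataclass
class StockItem:
    sku: str
    quantity: int

# After lot tracking: the same stock is split across lots, each with
# a batch id and an expiration date.
@dataclass
class Lot:
    batch_id: str
    quantity: int
    expires: date

@dataclass
class LotTrackedItem:
    sku: str
    lots: list[Lot] = field(default_factory=list)

    @property
    def quantity(self) -> int:
        # Total stock is now derived, not stored. Any old code that
        # wrote to `quantity` directly carries a buried assumption
        # that no longer holds.
        return sum(lot.quantity for lot in self.lots)

    def consume(self, n: int) -> list[tuple[str, int]]:
        """Consume n units, earliest expiry first, and report which
        batches were drawn down -- the traceability the feature is
        really about."""
        drawn = []
        for lot in sorted(self.lots, key=lambda l: l.expires):
            if n == 0:
                break
            take = min(lot.quantity, n)
            lot.quantity -= take
            n -= take
            drawn.append((lot.batch_id, take))
        if n:
            raise ValueError("insufficient stock")
        return drawn
```

Even this toy version shows why the feature ripples: receiving, corrections, and reporting all have to stop treating `quantity` as a writable field and start deciding which lot they mean.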
This problem has always existed. But for most of software history, it had somewhere to hide. Building was slow enough that the time spent reconstructing context before each decision was a rounding error. The real constraint was implementation: more engineers, clearer specs, faster execution. As long as that was true, teams could mostly ignore the cost of having to relearn their own product before changing it.
AI removes that cover.
When code gets cheap to produce, the context problem does not go away. It just stops having anywhere to hide. The slow part is no longer generating the solution. It is arriving at a clear enough understanding to build from. Before you can add something, you have to know what you are adding it to.
Any product manager knows this feeling. An item on the roadmap is clearly right: the customer is asking for it, nobody is opposed, the decision is not close. And it still does not move. Not because anyone is blocking it. Because before you can move it forward, you have to answer questions the system itself does not surface. What does the current behavior assume? Which workflows depend on it? What would have to change to make room? That understanding is not written down anywhere. And the cost of skipping it is a feature that works perfectly in isolation and silently degrades something nobody thought to check.
This is also where AI gets more interesting.
Most of the excitement has focused on using AI to write code. That makes sense; it is the most visible change. But the right question is not just what AI made cheaper. It is what that revealed. Once coding is fast, the constraint that remains is understanding. Which is exactly what AI should help with next.
Not by making product decisions. By restoring the conditions under which good product decisions get made.
If a product manager wants to revisit inventory logic built three years ago, the ideal tool would reconstruct the relevant model: what the system currently does, which assumptions are embedded in it, what has changed since the original decision, and what a change would touch. Not the answer. The room where a good answer becomes possible.
Product choices still require taste. There are cleaner and uglier ways to solve the same problem, and a model is not going to resolve that. But taste depends on information. A good instinct operating on a partial mental model will still miss things that a weaker instinct, fully loaded, would catch.
For a long time, teams assumed slowness was a production problem. Add engineers, clear the backlog, ship faster. But a lot of what looked like a production problem was actually a comprehension problem, and the production bottleneck was just large enough to keep that hidden.
Now the part that was always hard is the only hard part left.