Who Is Developer Experience For?

March 2, 2026

In 2006, choosing a tech stack was mostly about how it felt to write code.

If a framework made developers happy, it won. If it felt heavy, verbose, or awkward, it lost.

That's part of why Ruby on Rails took off. It wasn't just powerful. It felt good. You could move fast. The syntax was expressive. It reduced friction. The developer experience was the product.

Now the person writing most of the code may not be a person.

Since late last year, agentic development has accelerated. Tools like Cursor and increasingly capable models have shifted the center of gravity. Developers who once typed everything now describe systems in English. The model fills in the rest.

If that trend continues—and it probably will—then the question changes.

If agents will write 80–90% of the code, what does "developer experience" even mean?

It used to mean how it feels for a human to type. Now it might mean how easy it is for a model to reason.

That's a different optimization target.

When humans wrote everything, ergonomic languages won. Expressiveness mattered. Boilerplate was painful. Concise DSLs were delightful. Teams split into backend and frontend partly because the skills and tooling were different, and partly because cognitive load demanded separation.

But when an agent writes the code, the cost of boilerplate collapses. Lines of code become cheap. Tokens replace keystrokes.

What becomes expensive instead?

Ambiguity.

Agents thrive on clarity. They perform better when structure is explicit. Strong types help. Clear boundaries help. Popular ecosystems help because the model has seen more examples.

This is one reason TypeScript keeps gaining ground. It gives structure without sacrificing ecosystem. It's not just pleasant for humans; it's legible for machines.
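A minimal sketch of what "legible for machines" looks like. The names here (`Invoice`, `outstandingTotal`) are invented for illustration; the property is what matters: the function's full contract lives in its signature, so a model (or a new teammate) needs zero surrounding context to use it correctly.

```typescript
interface Invoice {
  id: string;
  amountCents: number;
  paidAt: Date | null; // null means the invoice is still outstanding
}

// The contract is local: what goes in and what comes out is explicit,
// so nothing about this function has to be inferred from other files.
function outstandingTotal(invoices: Invoice[]): number {
  return invoices
    .filter((inv) => inv.paidAt === null)
    .reduce((sum, inv) => sum + inv.amountCents, 0);
}

const invoices: Invoice[] = [
  { id: "a", amountCents: 5000, paidAt: new Date("2026-01-15") },
  { id: "b", amountCents: 2500, paidAt: null },
  { id: "c", amountCents: 1200, paidAt: null },
];

console.log(outstandingTotal(invoices)); // 3700
```

An untyped `process(data)` forces the reader to reconstruct the shape of `data` from call sites scattered across the codebase. The typed version makes that reconstruction unnecessary.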

For agents, a large ecosystem means something concrete: training data. The more code that exists in a language, the more examples the model has seen. That's not a side effect of popularity. It's the mechanism. GitHub Copilot generates TypeScript more reliably than it does Haskell—not because Haskell is worse, but because there's an order of magnitude more TypeScript in the training data.

The same logic affects architecture.

For years, the industry oscillated between monoliths and microservices. Monoliths are simple but grow messy. Microservices scale but introduce coordination overhead.

Now there's a new variable: context windows.

A monolith gives an agent holistic visibility. Everything lives in one place. But that also means the codebase can exceed what a model can load at once. Microservices reduce surface area. Smaller services mean smaller context. That can be easier for agents to reason about.

The result is not a clear winner. Instead, something pragmatic is emerging: modular systems with clean boundaries. Call it a modular monolith. Call it disciplined services. The label matters less than the property: local clarity. Each service should be understandable without loading the whole system. If an agent has to read five files to understand one function, that's a structure problem.
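The "local clarity" property can be sketched in TypeScript. The module name and `charge` API below are hypothetical, but the shape is the point: the module exposes a small, fully typed surface, so an agent can reason about every caller without loading the internals behind the boundary.

```typescript
// billing.ts: the only surface other modules are allowed to import.
// Internals (gateway calls, retries, logging) can change freely
// behind this boundary without touching any caller.

export interface ChargeRequest {
  customerId: string;
  amountCents: number;
}

export interface ChargeResult {
  ok: boolean;
  transactionId?: string;
  error?: string;
}

export function charge(req: ChargeRequest): ChargeResult {
  if (req.amountCents <= 0) {
    return { ok: false, error: "amount must be positive" };
  }
  // Placeholder for the real payment flow; callers only depend on
  // the typed result, never on how it was produced.
  return { ok: true, transactionId: `txn_${req.customerId}` };
}
```

The test of the boundary is the one the essay states: can this module be understood without reading five other files? If yes, it is a unit an agent can hold in context whole.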

Now imagine you're sitting on an old stack.

You're running Ruby on Rails 4.2. You have legacy Ember.js on the frontend. You've accumulated technical debt. You were already planning to modernize.

Before agents, the obvious move was incremental. Upgrade Rails step by step. Reduce risk. Stay close to what the team knows.

But agents change the calculus.

If migrations become cheaper—because models can rewrite large portions of code safely—then foundational decisions become less constrained by labor cost. You can ask a bigger question.

Not "What's easiest for us today?"

But "What stack will age best in an agent-dominated world?"

That doesn't automatically mean rewriting everything in Rust. Performance and safety are attractive, but ecosystem maturity and model familiarity matter more than theoretical advantages.

Agents are strongest in widely used stacks. TypeScript. Python. Modern Rails. React. Next.js. They have dense documentation. Large training footprints. Predictable patterns.

Exotic stacks are not impossible. They're just less reinforced. The teams that will struggle are the ones locked into frameworks that made sense in 2015 but have thin model support today. Not because their code is bad. Because the model has seen almost none of it.

There's also a second constraint that doesn't disappear: humans still own the system.

Agents can generate code, but they can't be accountable for it. Your team will still debug it, reason about its architecture, and make the tradeoffs. A stack that is optimal for models but alien to your engineers just creates a new bottleneck.

So the decision space shifts, but it doesn't invert.

Before, developer happiness dominated. Now, agent legibility and ecosystem density rise in importance. But human maintainability remains.

The most robust strategy is not radical reinvention. It's strategic alignment. Move toward stacks that are strongly typed, widely documented, modular with clean boundaries, and compatible with the tooling ecosystem emerging around agents.

That's why moving from Ember to React with TypeScript makes sense. Not because Ember is bad, but because the gravitational pull of the ecosystem matters. Agents perform better where examples are abundant.

Upgrading Rails to a modern version likely makes more sense than abandoning it entirely—unless there is a non-negotiable architectural constraint pushing you elsewhere.

The key insight is this:

In the next five years, the cost of writing code will approach zero.

The cost of choosing the wrong abstraction will not.

So optimize for clarity. For two decades, we optimized codebases for human ergonomics. That was the right target. It might not be anymore.

Developer experience is no longer just about how it feels to type.

It's about how well your system can be understood—by both kinds of programmers.