You're Not a Coder Anymore. You're a Solver. Welcome to 2026.
How domain knowledge, business fluency, and curiosity became your most valuable technical skills
Spotify’s co-CEO dropped something during an earnings call a few weeks ago that should have stopped every developer mid-scroll. His most senior engineers, the best people they have, haven’t written a single line of code since December. Not burnout. Not sabbatical. They just don’t need to.
They’re shipping production features to the iOS app from Slack, on their morning commute, before reaching the office.
There is no longer a reasonable debate about whether this changes how we work. The question worth asking is: if typing code is increasingly not the job, what is? And the even harder question underneath that: who are you, if not someone who writes code?
Wait, does any of this actually work, though?
Before the existential crisis fully kicks in, let’s be honest about something.
The same week Spotify was celebrating its AI-powered commute deployment, engineering teams were quietly patching production issues introduced by their AI agent three PRs ago. There are developers who let Claude generate an entire module, merged it because the tests passed, and found out two weeks later that the tests were also generated and were testing the wrong thing entirely.
This is real. Confident, wrong AI output is arguably harder to catch than uncertain, wrong human output, because at least junior developers occasionally write “// not sure about this” in their commits. The AI never writes that. It ships with full conviction and impeccable formatting.
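To make that failure mode concrete, here's a hypothetical sketch (both the function and its test are invented for illustration): a generated discount calculator with a wrong boundary condition, and a generated test that encodes the same wrong assumption, so everything passes.

```typescript
// Hypothetical AI-generated code. The spec said "10% off for orders
// of 10 or more items", but the generated code uses a strict
// greater-than.
function orderTotal(unitPrice: number, quantity: number): number {
  const subtotal = unitPrice * quantity;
  return quantity > 10 ? subtotal * 0.9 : subtotal; // bug: should be >= 10
}

// Hypothetical AI-generated test. It encodes the same wrong boundary,
// so it passes with full conviction. Per the spec, an order of exactly
// 10 should cost 90, not 100.
console.assert(orderTotal(10, 11) === 99, "discount applies above 10");
console.assert(orderTotal(10, 10) === 100, "no discount at exactly 10"); // wrong expectation
```

The PR is green, the formatting is impeccable, and the only thing that would catch it is someone who read the spec and then read the code.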
So is the problem the tools, or the people using them?
Honestly, probably both. But I’d lean harder on the second one. The tools have gotten dramatically better. What hasn’t kept up is the practice of using them well. Most developers picked up AI coding the same way they picked up Stack Overflow in 2012: copy, paste, pray, move on. That worked fine when the output was a 12-line function. It works less well when the output is a complete authentication flow.
The irony is that AI rewards exactly the engineering discipline people tend to skip when they're moving fast. Clear specs. Real tests. Actually reading the code before merging. When you skip those things while coding manually, you pay a small tax. When you skip them while directing an AI agent, you pay compound interest.
The part I keep watching happen and cringing
Senior developers, experienced people, using AI and merging generated code without really reading it. The PR is green. The review is a formality. And nobody catches the subtle wrong assumption buried in the middle of 200 lines of perfectly formatted TypeScript.
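Here's an invented example of what that subtle wrong assumption can look like: tidy, plausible, green, and quietly losing money. The bill-splitting scenario is hypothetical.

```typescript
// Looks fine in review: splits a bill evenly. The hidden assumption is
// that rounding each share independently preserves the total. It
// doesn't: 1000 cents across 3 people yields [333, 333, 333], and a
// cent vanishes.
function splitBill(totalCents: number, people: number): number[] {
  const share = Math.round(totalCents / people);
  return Array(people).fill(share);
}

// A correct split keeps the remainder explicit: the first `remainder`
// people pay one extra cent, so the shares always sum to the total.
function splitBillExact(totalCents: number, people: number): number[] {
  const base = Math.floor(totalCents / people);
  const remainder = totalCents % people;
  return Array.from({ length: people }, (_, i) => base + (i < remainder ? 1 : 0));
}
```

Nothing about the first version looks wrong at a glance, which is exactly the point: catching it requires thinking about the invariant, not scanning the diff.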
That’s not an AI problem. That’s a thinking problem in an AI costume.
Nate Jones made a sharp observation about this: everyone is so busy debating whether AI replaces or augments engineers that nobody is asking what engineering actually is. And when you ask that question seriously, you realize the stuff AI can’t do is exactly what engineers are supposed to be for in the first place.
From coder to solver. What does that actually mean?
Here’s the reframe that helps me think about this clearly.
A coder’s value is in the output: working code, fast, correct. An AI agent is a better coder than most of us, in most situations, right now. That’s just true, and pretending otherwise is wasted energy.
A solver’s value is in the input: understanding what problem actually needs to be solved, for whom, under what constraints, with what tradeoffs. That’s something completely different. And it’s something AI is genuinely bad at, because it requires context that doesn’t live in a codebase.
Nate’s broader point is that the barrier to building something has shifted. It’s no longer syntax or knowing the right APIs. It’s curiosity about the actual problem, comfort with ambiguity, and the willingness to iterate toward something that makes sense. Those are more like liberal arts skills than computer science skills. Which is uncomfortable to say, but also kind of interesting.
What does that look like in practice for a senior developer?
Domain knowledge over framework knowledge
Knowing how your industry actually works, how your users think, and what constraints your business operates under is increasingly what makes your AI sessions produce something useful rather than something generic. The agent can generate a booking system. Only you know that “booking” in your context means three different things depending on which team is reading the code.
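One way that domain knowledge shows up in code, sketched here with invented type and field names: instead of one generic "Booking", each team's meaning gets its own type, so the compiler catches cross-team confusion that a generated, generic model would happily allow.

```typescript
// Hypothetical domain model: three teams mean three different things
// by "booking", and the type system is made to say so.
type HoldBooking    = { kind: "hold";   expiresAt: string };  // inventory team
type PaidBooking    = { kind: "paid";   paymentRef: string }; // payments team
type SeatAssignment = { kind: "seated"; seat: string };       // operations team

type Booking = HoldBooking | PaidBooking | SeatAssignment;

// Exhaustive switch: adding a fourth meaning later forces every caller
// to decide what it means for them, instead of silently falling through.
function describe(b: Booking): string {
  switch (b.kind) {
    case "hold":   return `hold until ${b.expiresAt}`;
    case "paid":   return `paid (ref ${b.paymentRef})`;
    case "seated": return `seat ${b.seat}`;
  }
}
```

The agent can generate the switch statement in seconds. Knowing that the union needs three cases, and which three, is the part that doesn't live in the codebase.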
Business fluency is a technical skill
The developer who can sit in a product conversation, understand the actual goal behind a feature request, and push back with “that’s solving the wrong problem” is worth more in 2026 than the developer who can implement whatever spec lands on their plate faster than anyone. One of those developers is a coder. The other is a solver.
Curiosity about the problem, not just the implementation
This is Nate’s point, and it’s the hardest one to operationalize. It means spending more time before the first prompt asking, “What are we actually trying to do here?” It means being the person who notices the feature request doesn’t match the user’s actual behavior. It means investing in understanding the domain more than the tooling, because the tooling will change again in six months anyway.
Soft skills that are actually hard
Stakeholder conversations. Knowing when to push back on scope. Reading a room when the technical direction is wrong and people don't want to hear it. These have always mattered and have been consistently undervalued because you couldn't put them in a commit. In 2026, they're what differentiates the people AI amplifies from the people AI replaces.
Finding yourself as a senior in 2026
Many senior developers built their confidence on being good at writing code. On knowing the framework deeply. On being the person in the room who understood what was actually happening under the hood.
That identity still matters. But it’s getting separated from the day-to-day of shipping things, which is a genuinely weird feeling that doesn’t have a clean resolution. Nobody really talks about it.
What seems to work is shifting the identity from “person who writes good code” to “person who produces correct outcomes.” The tool changed. The standard didn’t. You’re still the one responsible for what goes to production, still the one who knows whether a feature is solving the right problem, still the one who catches the architectural decision that will make everyone miserable in eighteen months.
If anything, that responsibility got heavier, not lighter. There’s more code moving faster, with less friction to slow you down when the direction is wrong.
The developers pulling ahead right now aren’t the purists refusing to touch AI. They’re not the ones merging without reading, either. They’re the ones who figured out that the interface between their thinking and the machine’s output is where the actual work lives, and invested accordingly.
Not in the tools. In the thinking.
Which, when you put it that way, has always been the job.

