In the AI Era, How Will Work Be Divided Again?#
2026-04-12

The boundaries between roles inside companies are blurring quickly. In the past, planners planned, designers designed screens, and developers wrote code. The split was not perfect, but it was relatively clear. Now AI tools and agent-based workflows are making it common for one person to take on work that used to be divided across several roles.
This becomes even more obvious when a company has a good internal harness environment, meaning the tools and rules that let AI work with internal systems in a controlled way. Once that exists, even non-developers can try things like querying data, editing screens, or adding small features much faster than before. Work that once had to pass through several people can now be done by one person working together with AI.
This clearly boosts productivity. At the same time, it also unsettles many people. Developers start asking, “What exactly is my role now?” Designers worry that their work may shrink into merely polishing AI-generated output. Planners face a new question too: “How much of this should I directly do myself?”
I think the heart of this change is not that jobs are simply disappearing. It is that the way roles and responsibilities are divided inside a company is changing.
The first big change is the cost of trying#
In the past, focus was critical for small teams. There were not enough people and building things took real time, so teams had to decide early what to bet on.
Now the situation is different. With AI, it is much easier to build a small feature or MVP quickly, test it, throw it away if it does not work, and try something else. The cost of building has dropped, but more importantly, the cost of trying has dropped too.
That means it can be more rational to test several ideas in parallel instead of committing to only one at the beginning. It is a bit like putting several fishing rods in the water at once, then focusing on the spot where you actually get a bite.
This does not mean focus is gone. It means the timing of focus has moved later.
- In the past, you had to choose before building.
- Now you can build several things quickly and focus after you see a real signal.
This changes how organizations should operate. Early on, they should allow more experiments. In return, they need to become much better at deciding what to continue and what to stop.
Ownership matters more than job titles#
If this trend keeps going, companies may struggle to operate mainly through fixed job categories like planner, designer, and developer. Over time, what may matter more is who is responsible for the goals and outcomes of a clearly bounded area.
When I say owner, I do not mean someone with a fancy title. I mean someone who is responsible for things like:
- what the goal is
- what constraints apply
- how far automation is allowed to go
- what counts as success
- who makes the final call when something goes wrong
So this is not a world where everyone touches everything. It is closer to a world of clear area-based ownership.
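The responsibility list above can be made concrete as a simple record. This is only an illustrative sketch, not an existing system; every field name and value here is a hypothetical example of what area-based ownership might capture.

```python
from dataclasses import dataclass, field

@dataclass
class AreaOwnership:
    """One clearly bounded area with a single accountable owner (illustrative)."""
    area: str                                  # the bounded area, e.g. "checkout flow"
    owner: str                                 # who is responsible for outcomes
    goal: str                                  # what the area is trying to achieve
    constraints: list[str] = field(default_factory=list)  # what must not be violated
    automation_limit: str = "draft-only"       # how far automation is allowed to go
    success_metric: str = ""                   # what counts as success
    final_call: str = ""                       # who decides when something goes wrong

# A hypothetical area definition:
checkout = AreaOwnership(
    area="checkout flow",
    owner="j.park",
    goal="reduce drop-off during payment",
    constraints=["no changes to pricing logic", "payment compliance rules apply"],
    automation_limit="auto-apply copy changes only",
    success_metric="checkout completion rate",
    final_call="j.park",
)
```

The point of writing it down this way is that every field forces an explicit answer; an area where any of these fields is blank is an area where responsibility is still ambiguous.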
Seen this way, the roles of humans and AI also look different.
- Humans move toward defining goals, priorities, constraints, and responsibility.
- AI moves toward doing the actual work.
To put it more bluntly, humans move closer to being decision-makers, while AI moves closer to being workers. For that structure to work well, the boundary of each owner’s responsibility and authority has to be very clear.
Then what will developers and designers do?#
This is the most sensitive question, because in many companies today, non-developers are already using AI to create design drafts and implement features very quickly. And if the harness environment gets strong enough, the output may not be just a draft. It may be close to something you could ship almost as-is.
So there is one uncomfortable fact we probably need to accept. In the future, companies may not need the same number of developers and designers in the same way they did before. Repetitive screen building, familiar feature additions, and simple integration work may shrink.
But that does not automatically mean developers and designers disappear. What is more likely is that the center of their role shifts.
During the transition, a natural pattern looks like this:
- planners or operators directly try design and development work
- designers and developers take responsibility for the quality of what gets produced
- at the same time, they design the standards and systems that reduce the need for manual review later
The key here is not to define review too narrowly. In the future, review may be less about looking at a screen, reading code, and clicking approve.
More important work may include:
- deciding which changes can be auto-applied
- deciding which changes still require human confirmation
- defining how failures are detected quickly
- standardizing design systems and coding rules
- creating guardrails so AI can work safely
In other words, developers and designers may move from “people who directly make a lot of things” to “people who design the environment in which AI can make things safely and consistently.”
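The review decisions listed above can be sketched as a small policy function: which change types auto-apply, which always require a human, and a conservative fallback for everything else. The categories and the size threshold are illustrative assumptions, not a real system's rules.

```python
# Hypothetical guardrail policy: change categories and the size limit
# are illustrative assumptions.
AUTO_APPLY = {"copy-change", "style-tweak"}        # safe to land without review
ALWAYS_HUMAN = {"new-feature", "schema-change"}    # always needs confirmation

def review_decision(change_type: str, lines_changed: int) -> str:
    """Decide how a proposed change should be handled under these guardrails."""
    if change_type in ALWAYS_HUMAN:
        return "human-review"
    # even "safe" categories stop auto-applying once the change gets large,
    # because large diffs are hard for humans or AI to review properly
    if change_type in AUTO_APPLY and lines_changed <= 50:
        return "auto-apply"
    # unknown categories and oversized changes fall back to a human
    return "human-review"
```

Designing and maintaining a policy like this, rather than reviewing each diff by hand, is exactly the kind of "environment design" work the role shift points toward.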
The most dangerous mistake is mixing experiment and production#
One point I think matters especially is the separation between experiment and production.
It is a very good idea to give planners or other non-developers an environment where they can test ideas quickly. But problems grow when the results from that environment are pushed into production code all at once. This is especially true on the frontend, where screens, states, and flows are tightly connected. Once the amount of change gets too large, it becomes hard for either humans or AI to review properly.
In that situation, three typical reactions emerge:
- merge everything now and fix issues later
- split the existing changes and review them in smaller pieces
- throw the result away and rebuild it in smaller units, submitting each as its own change
My view is fairly clear. “Merge it now and deal with issues later” should not become the default rule.
The reason is simple. That approach does not really increase speed. It mostly pushes review cost and failure cost into the production environment. When too many changes land at once, it also becomes much harder to figure out what caused a problem.
So I think the better principle is:
- if the work can be split, split it and review it in smaller pieces
- if splitting is too expensive, rebuild it in smaller units
- treat experiment output not as “production-ready code by default” but as “a source of validated ideas”
In the AI era, the cost of cleaning up something that is hard to review properly can become more expensive than the cost of implementation itself. That is why experiments should stay free and fast, while moves into production should stay small and deliberate.
What systems should companies build?#
The most dangerous mistake a company can make here is to think, “Now that everyone can build things, we probably do not need role boundaries anymore.”
In reality, the opposite is true. The authority to produce may become broader, but the systems for responsibility and verification need to become more precise.
At minimum, companies need to define:
- who owns which area
- which kinds of work can be auto-applied
- which changes must be approved by a human
- who decides on rollback when something fails
- what actually counts as performance
Performance evaluation also needs to change. In the past, it was often about who built the most. Going forward, these may matter more:
- who made better decisions
- who ran more experiments at lower cost
- who built safer automation systems
- who increased the organization’s speed without sacrificing quality
So companies will need to value not only production itself, but the systems that let production scale safely.
Human work does not disappear. It moves upward.#
In the long run, I think the relationship between humans and AI may become relatively simple. Humans move upward toward deciding what to do, what to allow, and what to continue. AI moves downward toward doing the operational work.
That does not mean human work disappears. It means it becomes more abstract and more fundamental.
- deciding what should be built
- deciding which experiments deserve continued investment
- deciding where responsibility should live
- defining the limits of automation
- judging what level of risk the organization can accept
Seen this way, the roles of developers, designers, and planners are not vanishing so much as being reorganized. The share of direct manual production may shrink, but defining goals, making standards, and allocating responsibility become more important.
Closing#
I do not see this change as merely “AI taking people’s jobs.” A more accurate way to say it is that the location of human work is moving upward.
In the past, what mattered most was who could build with their own hands. Now what matters more is who can ask better questions, define better standards, and design better responsibility structures.
That is why future organizations may become harder to describe only through traditional job categories. Instead, each person may become the owner of a clearly bounded area and use AI to produce results inside that area.
This also makes the company’s task clearer. It should allow more experiments, but move changes to production in smaller units and design a more explicit responsibility structure.
In the end, the question becomes this:
Who can build more in the AI era?
That question still matters.
But the more important question may soon be this:
In the AI era, who is responsible for which outcomes?