Revisiting Ted Factory’s Direction in the Agent Era#

2026-02-04

As a software engineer and an AI engineer, I often think about what I should care about in the long run, what goals I should set, and what mindset I should live by. In times like these, when technology changes quickly, it’s natural for plans to waver. But just because it’s natural doesn’t mean I can brush it off lightly. When my direction wavers, my actions change, and when my actions change, the results change.

Not long ago, a personal-agent concept called OpenClaw appeared, and far more people reacted to it, explosively, than I expected. I did have the “wow, this is huge” moment, but the thought that came first was different: “New concepts are arriving at an unreal speed.” That speed immediately turned into a question: am I walking in the right direction right now?

The essence of the shift that OpenClaw highlights#

What I took away wasn’t “models got better,” but “multiple elements are converging into a single user experience.” When an always-on execution entity (a runtime), real tool use (browser, files, automation), and persistent memory (kept context) are combined, the boundary of products changes. It feels less like an era of building one feature at a time, and more like an era of designing an executing entity.

In blunter terms, the center of gravity shifts from “the user presses a button and a feature runs” to “the user speaks, and an execution entity keeps carrying the next steps forward.” And if that entity begins connecting naturally to the OS, the browser, and everyday work tools, many features won’t live as individual apps or extensions; they’ll be absorbed into a general-purpose agent layer. So to me, OpenClaw didn’t feel like “one more new technology,” but like a signal that the playing field (the layer) is changing.

That’s where my anxiety started. I had thought fairly early on that “as AI advances, there will be more one-person businesses.” The next thought was that it would be important to “automate content production, and build a group of users who consume that content.” That’s why I opened this blog with the Ted Factory concept, and why I’ve been trying to build a production-and-distribution system while writing and making apps.

Then a question hit me. What if the kind of content I’m thinking about becomes meaningless, and the automation system I want to build for Ted Factory also becomes outdated and meaningless? For example, I built an AI-agent Chrome extension called I am your AI. But AI browsers are emerging, and Chrome itself could evolve into an AI-browser form. If that happens, the “extension” form factor could be absorbed—or at least the competitive landscape could shift overnight. Would I am your AI still be chosen by people? That worry felt real.

In a way, this question is the fear of “what I build might disappear,” but more precisely, it’s a check: “Is what I’m accumulating truly an asset?” Content and products are visible outputs, so it’s easy to cling to them. But when the technology layer shifts, outputs can be the first thing to wobble. In the end, I felt that what I should hold onto isn’t the output itself, but the capability to produce it—and the criteria to judge value.

“Should I go deeper into tech, or focus on building assets?”#

This anxiety made it look like a choice. One path is to absorb AI-related technologies as deeply as possible and aim to become a top-tier AI engineer on the technical side. The other is to focus on accumulating valuable assets, whether digital content or something else. I wondered which one deserves more weight.

But the longer I thought about it, the more it felt like the two should be designed to complement each other. Pursuing technology should help asset accumulation, and accumulating assets should speed up technology absorption. If I treat it as an either-or choice, I’ll end up shaken every time the tech landscape shifts.

I want to summarize this as a loop: “monitoring → judgment → application → turning it into assets → feedback.” The key is not understanding the loop in my head, but embedding it into my calendar and workflow. When the loop runs, anxiety goes down. When anxiety goes down, execution goes up. When execution goes up, data accumulates. When data accumulates, judgment gets faster.

Conclusion 1: what matters is not knowledge volume, but “absorb → convert”#

In an era when technologies pour in like a flood, the question “How much do I know?” isn’t very productive. What matters is the ability to quickly incorporate new technologies into my system and convert them into value.

So instead of trying to understand every technology perfectly and deeply, I want to more actively practice the following process.

  • Identify the main feature: What does this technology newly make possible?
  • Judge the benefit: Can it create leverage for my system (Ted Factory, product, career)?
  • Apply: Try a small integration, expand if it works, discard quickly if it doesn’t.
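
If I sketch this triage in code, it might look something like the block below. Every name and the eight-hour threshold are placeholders I’m inventing for illustration; the point is that the decision becomes data I can review, not a feeling.

```typescript
// A minimal sketch of the "identify → judge → apply" triage as data.
// All names and thresholds here are hypothetical, invented for illustration.

type Verdict = "integrate" | "hold" | "discard";

interface TechCandidate {
  name: string;
  newCapability: string;        // what does this newly make possible?
  leverageFor: string[];        // which of my systems could it plug into?
  integrationCostHours: number; // rough cost of a minimum integration
  trialFailed?: boolean;        // did a minimum version already fail?
}

function triage(c: TechCandidate): Verdict {
  if (c.trialFailed) return "discard";           // discard quickly if it doesn't work
  if (c.leverageFor.length === 0) return "hold"; // nothing to plug into yet
  if (c.integrationCostHours > 8) return "hold"; // too big for a small trial
  return "integrate";                            // attach a minimum version
}

console.log(
  triage({
    name: "some-agent-runtime",
    newCapability: "always-on execution with persistent memory",
    leverageFor: ["Ted Factory distribution line"],
    integrationCostHours: 4,
  }),
); // → "integrate"
```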

The key here isn’t “study harder,” but “apply faster.” I want to turn that into a habit.

To do that, I want to aim for “small application” rather than “complete understanding.” For example:

  • When I see a new concept, I first look for where it plugs into my current workflow.
  • If there’s no place to apply it immediately, I don’t just save it “for someday.” I push it into a hold list and move on.
  • If there is a place, I attach a minimum version and measure how much time and cost it actually saves.
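
For that last step, here is a minimal sketch of what “measure” could mean, with hypothetical names and numbers. The one rule is to record real minutes instead of trusting a vague sense of speed.

```typescript
// Sketch: measuring whether a "minimum version" actually saves time.
// TrialRun and minutesSaved are hypothetical names for illustration.

interface TrialRun {
  task: string;
  baselineMinutes: number; // how long the task took by hand
  withToolMinutes: number; // how long with the minimum integration
}

function minutesSaved(runs: TrialRun[]): number {
  return runs.reduce(
    (sum, r) => sum + (r.baselineMinutes - r.withToolMinutes),
    0,
  );
}

const runs: TrialRun[] = [
  { task: "draft weekly post", baselineMinutes: 90, withToolMinutes: 40 },
  { task: "repackage for newsletter", baselineMinutes: 30, withToolMinutes: 35 },
];
console.log(minutesSaved(runs)); // → 45 (a negative total would mean: discard)
```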

Technology is ultimately a tool. A tool creates value not by being “known,” but by being “used.” I keep forgetting this obvious sentence, so I’m writing it down intentionally.

I want Ted Factory to be a “playbook factory,” not just a “content factory”#

I keep holding onto Ted Factory not because I simply want to produce a lot of content, but because I want to accumulate reusable standard operating procedures (SOPs) and templates. Posts, apps, and automation scripts are outputs. The real asset is the “process that can reproduce those outputs.”

So I want to define Ted Factory as a bundle of the following:

  • Topic sensor: experimental channels (blog, newsletter, app releases, etc.) to detect what has real Needs
  • Production line: templates, checklists, and a minimum-viable way to turn ideas into posts or products
  • Distribution line: channels, summary formats, cadence, and rules for repackaging and reusing
  • Learning line: a way to collect feedback and connect it to the next experiment
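
To keep this from staying abstract, here’s a minimal sketch of that bundle as four swappable interfaces. The type names are hypothetical, a thought model rather than a real codebase.

```typescript
// Sketch: Ted Factory as four swappable process steps rather than fixed
// outputs. All interfaces are hypothetical; the point is that each line
// can be replaced without rebuilding the whole factory.

interface TopicSensor { detectNeeds(): string[]; }
interface ProductionLine { produce(topic: string): string; }
interface DistributionLine { publish(artifact: string): void; }
interface LearningLine { collectFeedback(artifact: string): number; }

class Factory {
  constructor(
    private sensor: TopicSensor,
    private production: ProductionLine,
    private distribution: DistributionLine,
    private learning: LearningLine,
  ) {}

  runOnce(): void {
    for (const topic of this.sensor.detectNeeds()) {
      const artifact = this.production.produce(topic);
      this.distribution.publish(artifact);
      this.learning.collectFeedback(artifact); // feeds the next experiment
    }
  }
}
```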

Seen this way, even if one specific post doesn’t get much response, the system remains. And if the system remains, then when technology changes, I don’t have to rebuild the whole factory—I just swap out a specific process step.

Conclusion 2: judge value by whether there are consumers (Needs)#

The criterion for judging whether I—or what I made—has value is ultimately simple. Are there people who consume it? In other words, I should move in the direction where Needs exist.

If I want to grow my value as an AI engineer, I should continuously monitor what kinds of AI engineers the labor market is actually looking for, identify the concrete capabilities behind that, and build them. For content, I should build the ability to produce quickly, set hypotheses in different directions, create content to test them, and move toward finding Needs.

From this perspective, questions like “Will this content have historical value?” drop in priority. Instead, I focus more on whether it helps someone right now, gets consumed repeatedly, or leads to action.

What I especially want to care about is not “reaction,” but “action.” Likes and views can be signals, but they can also be too light. Actions like saving, subscribing, returning, asking questions, making requests, and converting to paid are much stronger. Going forward, I want to judge content and products more by these action signals than by emotional satisfaction.
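
To make “action over reaction” operational, here is a toy scoring sketch. The weights are illustrative and uncalibrated; what matters is that cheap reactions can’t outweigh real actions.

```typescript
// Sketch: scoring content by action signals, not reactions.
// Signal names and weights are illustrative, not calibrated.

const ACTION_WEIGHTS: Record<string, number> = {
  view: 0.1,     // reaction: the cheapest signal
  like: 1,       // reaction: still cheap
  save: 5,       // action: someone plans to come back
  return: 8,     // action: someone actually came back
  subscribe: 10, // action: someone wants more
  question: 12,  // action: engaged enough to ask
  paid: 50,      // action: the strongest signal of all
};

function actionScore(counts: Record<string, number>): number {
  return Object.entries(counts).reduce(
    (sum, [signal, n]) => sum + n * (ACTION_WEIGHTS[signal] ?? 0),
    0,
  );
}

console.log(actionScore({ view: 1000, like: 100 }));           // → 200
console.log(actionScore({ save: 10, subscribe: 8, paid: 2 })); // → 230
// The quieter post wins, which matches how I want to judge value.
```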

I am your AI needs to become a “workflow asset,” not just an extension#

If AI browsers arrive, it can feel like extensions are at a disadvantage—because extensions can be absorbed into browser-native features. If that’s true, the answer is simple. I should define I am your AI not as “a bundle of extension features,” but as “workflows that standardize a user’s repeated tasks.”

That means the core competitiveness is not button placement or UI, but things like:

  • Domain-specific playbooks: step-by-step rules that safely automate what users do often
  • Trust and safety mechanisms: checkpoints and rollback routines that reduce mistakes
  • Logging and learning: accumulating user context while ensuring the user can control it
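
As a sketch of what the first two bullets could mean in code, here is one hypothetical shape for a playbook step with a checkpoint and a rollback. None of this is tied to any extension API; that’s the point, since the asset should survive a change of runtime.

```typescript
// Sketch: a user workflow as a playbook of steps with checkpoints and
// rollback routines. The Step shape is hypothetical; the asset is the
// playbook, not the extension UI that happens to run it today.

interface Step {
  name: string;
  run: () => Promise<void>;
  verify: () => Promise<boolean>; // checkpoint: did the step really succeed?
  rollback: () => Promise<void>;  // undo routine that reduces mistakes
}

async function runPlaybook(steps: Step[]): Promise<boolean> {
  const completed: Step[] = [];
  for (const step of steps) {
    await step.run();
    if (await step.verify()) {
      completed.push(step);
      continue;
    }
    // A checkpoint failed: roll back everything done so far, newest first.
    for (const done of completed.reverse()) {
      await done.rollback();
    }
    return false;
  }
  return true;
}
```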

If the browser evolves into an AI-native form, I should ride that change. Rather than clinging to the “Chrome extension” form factor, it feels better to split what I’ve built into smaller modules and templates—assets that can survive across layers.

Conclusion 3: anxiety is not important; a monitoring routine is#

I felt anxious because I can’t be sure how the world and the market will change, but that anxiety doesn’t really help—and it’s not as important as it feels. The more anxious I get, the less I act. The less I act, the more anxious I get. In the end, I’m the one who loses.

So I reached a simple conclusion. It matters to continuously monitor the world and the market, make an effort to identify valuable directions, and turn that effort into a habit. Rather than trying to predict well, it’s more realistic to maintain a state where I can judge and respond quickly whenever a change shows up.

More concretely, this means “put monitoring on a schedule.” I want to keep the following as a minimum routine:

  • Tech monitoring: once a week, scan changes in agents, browsers, and tooling, and judge whether they plug into my work
  • Market monitoring: look at job postings and project cases to see which skills actually connect to money and real work
  • Decision logging: write down, even briefly, what I’ll adopt and what I’ll drop this month
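
The decision log in particular stays useful only if it’s trivially easy to write and read. A minimal sketch, with field names I’m inventing:

```typescript
// Sketch: one decision-log entry per month, kept as plain data so it
// stays easy to review. All field names are my own invention.

interface DecisionLogEntry {
  month: string;     // e.g. "2026-02"
  adopted: string[]; // what I will integrate this month
  dropped: string[]; // what I am deliberately not pursuing
  reasoning: string; // one or two sentences, written at decision time
}

const log: DecisionLogEntry[] = [
  {
    month: "2026-02",
    adopted: ["minimum agent-runtime experiment in the distribution line"],
    dropped: ["rewriting the extension UI"],
    reasoning: "The agent-layer shift matters more than UI polish right now.",
  },
];
```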

With this routine, when anxiety rises, I can shift from “I should think more” to “I should look at the monitoring results and take the next action.” I believe what beats anxiety is not more thinking, but more doing.

Final conclusion: even if I can’t know the “right answer,” I can focus on capability#

I can’t know whether the direction I’m taking will be proven right in the end. Cases like OpenClaw and AI browsers will keep appearing, and even bigger changes could become real. Still, what I should focus on is not excessively judging whether the digital content I’m producing “has value,” but strengthening two capabilities:

  • Production / distribution capability: the ability to build content and products quickly, distribute them quickly, and collect feedback
  • Value-judgment capability: the ability to sense Needs, interpret signals, and decide the next direction

If these two accumulate, even if individual pieces of content fail, the system becomes stronger. And when the system becomes stronger, even if technology changes, it’s not “start over from scratch”—it’s “absorb and convert faster.” I want to live as someone who strengthens this loop.

In the end, the mindset I want is simple. Instead of over-judging “Am I on the right path?”, I want to check “Can I build fast, distribute fast, and judge fast?” The right answer will only be known in the future. But the habit of running the loop can be built starting now. I want to move, little by little, toward a direction where I’m less shaken—even in the agent era.