Harness Engineering — A Practical Guide to Safe AI Agent Operations#

2026-04-04


Why I Wrote This#

I actively use AI agents (Cursor, Claude Code, etc.) across multiple projects. At first, having an agent write code was impressive enough on its own. But as I integrated them more deeply into real projects, I kept running into recurring problems.

  • Every time I open a new session, the agent forgets the project conventions
  • It repeats the same mistakes today that we already solved yesterday
  • The quality of agent-generated code fluctuates wildly between sessions
  • When managing multiple projects, I have to repeat the same setup for each one

The root cause of these problems wasn’t a lack of agent intelligence — it was that the environment surrounding the agent was not properly set up. As 2026 arrived, this concern spread across the industry and began to be systematized under the name “harness engineering.”

After first applying harness engineering to a company project, I experienced its effectiveness firsthand and decided to apply the same structure to my personal projects. During this process, I felt the need for a “reference document that can quickly turn any project into a harness structure,” which led me to write this guide.

This guide goes beyond explaining the concept of harness engineering — it covers what to apply based on your project type. From personal blogs to multi-agent automation systems, it’s structured to help you design a harness that matches your project’s scale and complexity.


1. What Is Harness Engineering?#

Definition#

Harness engineering is an infrastructure design discipline for operating AI agents safely and reliably. The term “harness” derives from horse tack that controls a horse’s power, referring to a system that guides the powerful but unpredictable force of AI agents in the right direction.

“The model is a commodity. The harness is the moat.” — harness-engineering.ai

Analogy#

| Element | Analogy | Description |
| --- | --- | --- |
| Horse | AI Model | Powerful and fast, but doesn’t know where to go on its own |
| Harness | Infrastructure | Constraints, guardrails, and feedback loops that let the model’s power be used productively |
| Rider | Human Engineer | Doesn’t run directly — provides direction |

Why Harness Engineering Now?#

  • 2025 was the year that proved AI agents can write code
  • 2026 is the year we realized the key isn’t the agent itself, but the harness

LangChain improved their Terminal Bench 2.0 score from 52.8% to 66.5% by changing only the harness, without modifying the model at all. The OpenAI Codex team built production applications exceeding 1 million lines without manually writing a single line of code.

| Concept | Scope | Focus | Reliability Improvement |
| --- | --- | --- | --- |
| Prompt Engineering | Single interaction | Crafting effective prompts | 5-15% |
| Context Engineering | Model context window | Optimizing information the model sees | 15-30% |
| Harness Engineering | Entire agent system | Environment, constraints, feedback, lifecycle | 50-80% |

2. Core Components of Harness Engineering#

2-1. Context Engineering#

Ensuring the agent has the right information at the right time.

Static Context:

  • Repository-local documents (architecture specs, API specs, style guides)
  • CLAUDE.md files encoding project-specific rules (or AGENTS.md)
  • Interlinked design documents (linters automatically verify link validity and document freshness)

Dynamic Context:

  • Observational data accessible to the agent (logs, metrics, traces)
  • Directory structure mapping at agent startup
  • CI/CD pipeline status and test results

Core rule: From the agent’s perspective, anything not accessible within the context doesn’t exist. Knowledge in Google Docs, Slack threads, or people’s heads is invisible to the system. The repository must be the single source of truth. — OpenAI
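The link-validity linting mentioned for static context can be sketched in a few lines. This is a minimal illustration, not any specific tool's implementation; the regex and path handling are simplified assumptions that only cover relative markdown links.

```python
import re
from pathlib import Path

# Captures the target of a markdown link, stopping at ")" or an anchor "#"
MD_LINK = re.compile(r"\[[^\]]*\]\(([^)#]+)")

def find_broken_links(docs_root: str) -> list[tuple[str, str]]:
    """Return (file, target) pairs for repository-local links that point nowhere."""
    broken = []
    for md in Path(docs_root).rglob("*.md"):
        for match in MD_LINK.finditer(md.read_text(encoding="utf-8")):
            target = match.group(1)
            if target.startswith(("http://", "https://", "mailto:")):
                continue  # only verify repository-local links
            if not (md.parent / target).exists():
                broken.append((str(md), target))
    return broken
```

Run as a CI step or pre-commit hook, a check like this makes stale cross-references a blocking error instead of silent context rot.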

2-2. Architectural Constraints#

Instead of telling the agent “write good code,” mechanically enforce what good code looks like.

Dependency Layering:

Types → Config → Repo → Service → Runtime → UI

Each layer can only import from layers to its left, enforced through structural tests and CI verification.

Constraint Enforcement Tools:

  • Deterministic linters — custom rules that automatically flag violations
  • LLM-based auditors — agents that review other agents’ code
  • Structural tests — architecture tests for AI-generated code
  • Pre-commit hooks — automatic checks before code is committed

Paradoxically, constraining the solution space improves agent productivity. When agents can generate anything, they waste tokens exploring dead ends. When the harness defines clear boundaries, agents reach correct solutions faster. — NxCode

2-3. Guardrails#

Technically controlling both inputs and outputs of AI agents to preemptively block behavior outside the designed purpose scope.

  • Input stage: Detect / block prompt injection or confidential information leakage
  • Output stage: Automatically filter harmful content or hallucinations

2-4. Verification Loops#

A structure that verifies the agent’s work at each step before allowing progression. This is the component with the highest ROI in a harness.

# Verification loop pattern (pseudocode)
def run_agent_with_verification(task, tools, cost_ceiling):
    context = assemble_context(task)
    total_cost = 0

    while not task.is_complete():
        action = agent.plan(context, tools)
        result = execute_tool(action)

        verification = verify_output(result, action.expected_schema)
        if not verification.passed:
            if verification.retry_recommended:
                # Retry, then re-verify — a retried result is not trusted blindly
                result = retry_with_backoff(action, max_retries=3)
                verification = verify_output(result, action.expected_schema)
            if not verification.passed:
                return TaskResult(status="failed", reason=verification.reason)

        total_cost += result.tokens_used
        if total_cost > cost_ceiling:
            return TaskResult(status="budget_exceeded", partial=context)

        context = update_context(context, result)

    return TaskResult(status="complete", output=context.final_output)

“Highest ROI” means the greatest effect relative to effort invested. Among all harness components, verification loops deliver the largest quality improvement with the least effort.

  • Implementation cost: 50-150ms additional latency per step, a few dozen lines of code
  • Effect: Task completion rate 83% → 96% (without changing model or prompts)

If you’re building a harness for the first time and wondering “what should I build first?” — start with verification loops for the best cost-effectiveness.

2-5. Cost Envelope Management#

Setting per-task budget ceilings that the harness enforces regardless of the agent’s intent.

  • Cost envelopes are not just financial controls — they’re reliability signals
  • A task hitting its budget ceiling means it’s operating abnormally (bad upstream response, context drift, tool integration error, etc.)

2-6. Entropy Management#

Regularly cleaning up entropy that accumulates in AI-generated codebases over time.

  • Document consistency agent — verifies documents match current code
  • Constraint violation scanner — re-scans code that passed previous checks
  • Pattern enforcement agent — identifies / fixes deviations from established patterns
  • Dependency auditor — tracks / resolves circular or unnecessary dependencies

2-7. Agent Memory#

A structure for persistently accumulating knowledge, discovered patterns, and ongoing decisions that agents learn across sessions within the repository. Since agents are fundamentally stateless, all context disappears when a session ends. The harness compensates for this “amnesia.”

Three time axes of context:

| Time Axis | Nature | Examples | Change Frequency |
| --- | --- | --- | --- |
| Static Context | Pre-written documents | ARCHITECTURE.md, API specs | Rarely changes |
| Accumulated Context | Knowledge discovered / learned during work | Design decision records, failure pattern notes, ongoing interests | Grows per session |
| Dynamic Context | Data generated at runtime | Logs, metrics, CI status | Changes every time |

Static context is written at project inception, and dynamic context is automatically generated at runtime. Accumulated context sits between them — a knowledge layer that grows incrementally as agent work accumulates.

Implementation pattern for accumulated context:

<project root>/
├── docs/
│   └── decisions/                    # Architecture Decision Records (ADR)
│       ├── 001-static-site-generator-choice.md
│       └── 002-deployment-strategy.md
├── .memory/                          # Agent memory
│   ├── learnings.md                  # Patterns, failure causes, know-how
│   ├── current-focus.md              # Current interests, priorities
│   └── session-notes/                # Per-session summaries (optional)
│       └── 2026-04-04.md

Native memory support by tool:

| Tool | Memory Mechanism | Storage Location | Auto / Manual |
| --- | --- | --- | --- |
| Claude Code | Auto Memory | ~/.claude/projects/<project>/ | Auto-accumulate |
| Claude Code | CLAUDE.md | Project root | Manual |
| Cursor | .cursor/rules/ | .cursor/rules/*.mdc | Manual |
| Cursor | Notepads | Cursor sidebar | Manual (inject via @notepad) |

Claude Code has an Auto Memory feature that automatically saves patterns discovered during sessions, but Cursor requires manual knowledge management through .cursor/rules/ and Notepads. Repository-based memory (the .memory/ pattern above) depends on no specific tool, so any tool can read and write it.

Core principles:

  • Record failure causes and solutions so the agent “doesn’t repeat the same mistakes”
  • Build a habit of committing “what I learned this session” to the repository before ending the session
  • Accumulated context is also subject to entropy management — regularly clean up outdated or invalidated notes
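The "commit what I learned this session" habit can be supported by a small helper. A minimal sketch: the .memory/learnings.md path follows the pattern above, while the dated-entry format is an assumption.

```python
from datetime import date
from pathlib import Path

def record_learning(repo_root: str, note: str) -> Path:
    """Append a dated entry to .memory/learnings.md so the next session can load it."""
    memory = Path(repo_root) / ".memory"
    memory.mkdir(exist_ok=True)
    learnings = memory / "learnings.md"
    with learnings.open("a", encoding="utf-8") as f:
        f.write(f"\n## {date.today().isoformat()}\n{note}\n")
    return learnings
```

Because the file lives in the repository, the entry survives the session and is searchable by whichever tool opens the project next.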

2-8. Observability & Evaluation#

  • Observability: Structurally tracking what the agent did, why it did it, and what occurred at each step
  • Evaluation: An automated pipeline that continuously measures agent performance against defined criteria

3. Harness Engineering Application Levels#

Harness engineering application depth varies based on project scale and complexity.

Level 1: Basic Harness (Individual Developer)#

  • CLAUDE.md file containing project conventions (Cursor references via .cursor/rules/)
  • Pre-commit hooks for linting and formatting
  • Test suite the agent can run for self-verification
  • Clear directory structure with consistent naming conventions
  • Agent memory (.memory/) for cross-session knowledge accumulation

Suitable for personal blogs, documentation sites, personal tools, and side projects where a single agent is directly instructed by the user.

Level 2: Team Harness (Small Team)#

Level 1 plus:

  • Architectural constraints enforced by CI
  • Shared prompt templates for common tasks
  • Documentation-as-code verified by linters
  • Code review checklists specifically for agent-generated PRs
  • Agent role boundaries and change scope limits

Suitable for environments where multiple developers use agents on the same repository. The focus is on preventing inter-agent conflicts and minimizing code quality variance.

Level 3: Production Harness (Engineering Organization)#

Level 2 plus:

  • Custom middleware layers (loop detection, reasoning optimization)
  • Observability integration (agents read logs and metrics)
  • Entropy management agents running on schedule
  • Harness version control and A/B testing
  • Agent performance monitoring dashboards
  • Escalation policies for agent deadlocks

Suitable for systems where agents autonomously execute pipelines, integrate with external APIs at scale, and where agent judgment errors directly cause financial / operational losses.


4. Applying Harness Engineering to Your Project#

4-1. Service Characteristics Analysis#

Before designing a harness, first identify the target project’s core characteristics through these questions:

| Analysis Item | Key Question |
| --- | --- |
| Agent Structure | Single agent or multi-agent pipeline? |
| Data Processing | What data sources are involved? Is parallel processing needed? |
| Human Involvement | Is Human-in-the-Loop needed? At which stages is approval required? |
| Automation Level | Is there autonomous scheduling / automatic execution? |
| External Integration | What external APIs or services are integrated? |
| Risk Level | What impact does an agent’s wrong judgment have? (financial, operational, security) |

Based on these answers, classify your project into one of these types to determine which Phases to apply:

| Project Type | Agents | Automation | External Integration | Risk | Recommended Phases | Examples |
| --- | --- | --- | --- | --- | --- | --- |
| A. Personal / Static | Single (IDE-based) | None | None or minimal | Low | 1~2 | Blogs, doc sites, personal tools |
| B. Team / Web App | Single or few | CI/CD | Some APIs | Medium | 1~3 | Web apps, SaaS backends |
| C. Multi-Agent / Automation | Multiple, pipeline | Schedule / trigger | Many APIs | High | 1~5 | Ad automation, data pipelines |
| D. Production / Enterprise | Multiple, large-scale | Fully automated | Large-scale | Very high | 1~7 | Financial systems, infra automation |

4-2. Risk Identification#

Identify risks that can occur when operating without a harness based on project characteristics:

| # | Risk Type | Description | Applicable Types |
| --- | --- | --- | --- |
| 1 | Financial malfunction | Agent’s wrong judgment directly leads to monetary losses | C, D |
| 2 | Hallucination-based decisions | False analysis results lead to wrong follow-up actions | All types |
| 3 | Cost runaway | Repeated external API calls, LLM token accumulation cause cost control failure | B, C, D |
| 4 | Data contamination | Data from different contexts mixes during parallel execution | C, D |
| 5 | Agent doom loops | Same task repeatedly detected / processed in multi-agent pipelines | C, D |
| 6 | Duplicate execution | Periodic triggers cause the same task to execute redundantly | C, D |
| 7 | Quality drift | Agent-generated content / code quality fluctuates between sessions | All types |
| 8 | Context loss | Knowledge learned between sessions isn’t carried over, repeating the same mistakes | All types |

For Type A (personal projects), risks 2, 7, and 8 are most relevant, and Phase 1~2 harnesses (rule files, pre-commit hooks, agent memory) are sufficient. As you move toward Types C~D, risks 1 and 3~6 become more critical, requiring Phase 3+ harnesses.

4-3. Harness Engineering Design#

A. Context Engineering Application#

Configure the project repository as the agent’s single source of truth. Recommended directory structure:

<project root>/
├── CLAUDE.md                          # Entry point for agent behavior rules
├── ARCHITECTURE.md                    # Top-level system architecture map
├── .cursor/
│   └── rules/
│       └── general.mdc                # Connects Cursor to reference CLAUDE.md
├── docs/
│   ├── design-docs/                   # Feature design documents
│   ├── api-specs/                     # API endpoint specifications
│   ├── references/                    # SDK/framework references
│   └── quality/                       # Quality standards

Core principles:

  • Maintain CLAUDE.md as a table of contents, not an encyclopedia (about 100 lines)
  • Record all design decisions as documents within the repository (no Slack / Google Docs)
  • From the agent’s perspective, information it can’t search for doesn’t exist

B. Agent Behavior Constraints#

Applicable to: Project Type B and above (Team / Web App). For single-agent personal projects, glob-based rule files and pre-commit hooks are sufficient.

Clearly define agent behavior boundaries based on project characteristics.

1) Role Boundaries per Agent

Explicitly define what each agent can and cannot do:

<Agent Name> (<Role>)
  - Allowed: <list of tasks this agent can perform>
  - Forbidden: <list of tasks this agent must never perform>

Core principles for designing role boundaries:

  • Least privilege: Each agent has only the minimum permissions needed for its role
  • Separation: Separate data query / analysis agents from execution / modification agents
  • Approval required: High-risk actions (financial impact, data changes, external system calls) must go through an approval stage

2) Input / Output Guardrails

  • Input: External data source response validation, context isolation verification
  • Output: Change scope limits, hallucination filtering
  • Pipeline: Inter-stage data schema validation, identifier consistency checks

3) Task Cost Envelopes

Cost ceilings by task type (define per project):
  - Simple query/analysis: baseline × 1
  - Complex analysis/diagnosis: baseline × 2
  - Full pipeline execution: baseline × 10
  - Report generation: baseline × 5
  - Tasks involving external API calls: baseline × 3

  → On ceiling breach: halt task + alert + escalate
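One minimal way to enforce these ceilings, assuming a hypothetical token baseline and the multipliers from the list above (the names and numbers are illustrative, not a prescribed schema):

```python
BASELINE_TOKENS = 10_000  # hypothetical per-project baseline

# Multipliers mirror the task-type list above
CEILING_MULTIPLIER = {
    "simple_query": 1,
    "complex_analysis": 2,
    "full_pipeline": 10,
    "report": 5,
    "external_api": 3,
}

class BudgetExceeded(Exception):
    """Raised so the harness can halt the task, alert, and escalate."""

class CostEnvelope:
    def __init__(self, task_type: str, baseline: int = BASELINE_TOKENS):
        self.ceiling = baseline * CEILING_MULTIPLIER[task_type]
        self.spent = 0

    def charge(self, tokens: int) -> None:
        """Record spend; breaching the ceiling halts the task, not the agent's intent."""
        self.spent += tokens
        if self.spent > self.ceiling:
            raise BudgetExceeded(f"{self.spent} tokens > ceiling {self.ceiling}")
```

The key design choice is that the envelope lives outside the agent: the harness charges it on every step, so no prompt can talk the system past its budget.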

C. Verification Loop Application#

Application level: Available for all project types, but implementation depth varies.

  • Type A (Personal): Pre-commit hooks and build validation serve as self-verification.
  • Type B (Team): Integrate automated tests, lints, and PR reviews into CI/CD.
  • Type C~D (Automation / Enterprise): Dual / triple structure of agent self-verification + Human-in-the-Loop is needed.

A dual structure is recommended for verification loops:

1) Agent Self-Verification

Pipeline step execution
  → Agent generates result
  → Self-verify result (data integrity, schema compliance)
  → Auto-retry on failure (max 3 attempts)
  → Pass to next step on success

2) Human Verification (Human-in-the-Loop)

  • High-risk actions request approval via operations tools
  • Operator approves / rejects
  • On rejection, feedback is passed to the agent to suggest alternatives

3) Output Verification Loop

Apply verification loops to agent-generated outputs as well:

  • Agent creates draft → verification (automated or human) → auto-rewrite on rejection

D. Multi-Agent Harness Middleware#

Applicable to: Project Type C and above (Multi-Agent / Automation). Unnecessary for projects where a user manually directs a single agent.

For multi-agent systems, apply middleware to the pipeline:

Pipeline execution request
  → ContextIsolationMiddleware   (context isolation, prevent data contamination)
  → CostEnvelopeMiddleware       (cost envelope check, block on overrun)
  → LoopDetectionMiddleware      (prevent repeated task processing)
  → InputValidationMiddleware    (input data integrity validation)
  → [Agent Execution]
  → OutputValidationMiddleware   (result schema validation, change scope limits)
  → ApprovalGateMiddleware       (risk assessment, route to approval on high risk)
  → ExecutionAuditMiddleware     (execution history recording)
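The chain above can be sketched as nested handlers, each wrapping the next. The two middlewares and the stand-in agent below are simplified illustrations of the pattern, not a specific framework's API:

```python
from typing import Callable

Handler = Callable[[dict], dict]

def with_cost_envelope(next_handler: Handler, ceiling: int) -> Handler:
    """Block execution when the request's token budget is already spent."""
    def handler(request: dict) -> dict:
        if request.get("tokens_spent", 0) > ceiling:
            return {"status": "blocked", "reason": "cost envelope exceeded"}
        return next_handler(request)
    return handler

def with_output_validation(next_handler: Handler, required_keys: set) -> Handler:
    """Reject agent output missing required schema fields."""
    def handler(request: dict) -> dict:
        result = next_handler(request)
        missing = required_keys - result.keys()
        if missing:
            return {"status": "invalid", "reason": f"missing fields: {sorted(missing)}"}
        return result
    return handler

def agent_execute(request: dict) -> dict:
    # Stand-in for the actual agent step
    return {"status": "ok", "output": request["task"].upper()}

# Composition order mirrors the chain: cost check before execution,
# schema validation after it.
pipeline = with_cost_envelope(
    with_output_validation(agent_execute, required_keys={"status", "output"}),
    ceiling=1_000,
)
```

Each concern stays independently testable and removable, which matters for the "rippable structure" principle discussed later: dropping a middleware is deleting one wrapper.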

E. Observability Integration#

Applicable to: Project Type C and above. For manual-use projects, agent memory and Git history provide sufficient observability.

| Observable Data | Application |
| --- | --- |
| Agent execution history | Execution history dashboard, success / failure tracking |
| Pipeline stage status | Bottleneck identification, health monitoring |
| Approval / rejection ratio | Agent judgment accuracy evaluation |
| External API call volume | API quota management, call failure rate tracking |
| Cost tracking | LLM token, external service cost optimization |

F. Entropy Management — Maintaining Agent Quality#

Scope: Available for all project types, but scope varies.

  • Type A~B: Periodic cleanup of .memory/learnings.md, link verification is sufficient.
  • Type C~D: Dedicated agents perform regular quality audits.

[Periodic tasks — select/configure per project]
  - Prompt drift detector: Monitor consistency between system prompts and actual output
  - Data integrity checker: Verify referenced data/settings are still valid
  - Output quality audit: Verify auto-generated results are based on actual data
  - Tool function consistency check: Verify tool definitions are in sync with external service schemas

4-4. Harness Engineering Application Roadmap#

You don’t need to apply all Phases sequentially. Based on the project type classification, select and apply only the Phases you need.

| Phase | Timing | Application Content | Expected Effect | Target Type |
| --- | --- | --- | --- | --- |
| Phase 1 | Immediate | Agent instruction files, directory structure documentation, pre-commit hooks | Development environment quality | All projects |
| Phase 2 | Short-term | Agent memory structure (.memory/, decision records) | Cross-session continuity | All projects |
| Phase 3 | Short-term | Agent role boundaries, I/O guardrails, context isolation | Pipeline stability | B+ |
| Phase 4 | Mid-term | Middleware chain, loop detection, cost envelope management | Parallel processing safety | C+ |
| Phase 5 | Mid-term | Automated task verification, duplicate execution prevention, API quota management | Automation stability | C+ |
| Phase 6 | Long-term | Observability dashboard, entropy management agent, A/B testing | Operations optimization | D |
| Phase 7 | Long-term | Auto-approval threshold learning, agent benchmarks, harness version control | Autonomous operations | D |

Application Examples by Project Type#

Type A — Personal Blog / Documentation Site / Personal Tool

Phase 1: CLAUDE.md (or .cursor/rules/), ARCHITECTURE.md, Pre-commit hooks
Phase 2: .memory/learnings.md, current-focus.md, docs/decisions/
         → These two phases alone complete an environment where the agent
           consistently understands the project and accumulates knowledge
           across sessions.

Type B — Team Web Application / SaaS Backend

Phase 1~2: (Same as Type A)
Phase 3: Architecture lints enforced by CI, PR review checklists,
         change scope limits for agent-generated code
         → Reduces code quality variance between team members
           and prevents excessive agent changes.

Type C — Multi-Agent Automation System

Phase 1~3: (Apply through Type B)
Phase 4: Inter-agent data isolation middleware, cost envelope management,
         doom loop detection and auto-halt
Phase 5: Duplicate execution prevention for scheduled tasks (idempotency keys),
         external API quota management, retry policies on failure
         → Achieves "controlled autonomy."
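The idempotency-key approach mentioned for Phase 5 can be sketched as follows. The in-memory store is a simplification labeled as such; a production system would persist keys in a shared store so retriggered schedulers across processes see them:

```python
import hashlib

def idempotency_key(task_name: str, window_start: str) -> str:
    """The same task in the same schedule window always maps to the same key."""
    return hashlib.sha256(f"{task_name}:{window_start}".encode()).hexdigest()

class IdempotentRunner:
    """Skips a task whose key was already executed (in-memory sketch)."""
    def __init__(self):
        self.seen: set[str] = set()

    def run(self, task_name: str, window_start: str, fn) -> str:
        key = idempotency_key(task_name, window_start)
        if key in self.seen:
            return "skipped"  # duplicate trigger in the same window
        self.seen.add(key)
        fn()
        return "executed"
```

Deriving the key from the schedule window rather than the trigger time is the important part: two triggers fired seconds apart still collapse into one execution.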

Type D — Production Enterprise System

Phase 1~5: (Apply through Type C)
Phase 6: Real-time dashboard, automated entropy management agent,
         agent output A/B testing
Phase 7: Auto-approval threshold learning from repeated patterns,
         agent performance benchmark suite, harness config version control and rollback
         → Builds a sustainable operations framework.

5. Agent Instruction File Strategy#

The pattern of “placing agent behavior instructions as markdown in the repository root” has become a de facto standard. However, each tool recognizes different filenames:

| Tool | Default Instruction File | AGENTS.md Recognition | CLAUDE.md Recognition |
| --- | --- | --- | --- |
| OpenAI Codex | AGENTS.md | Official support | Not supported |
| Cursor | .cursor/rules/ | Recognized (lowest priority) | Not supported |
| Claude Code | CLAUDE.md | Not supported | Official support |

As of April 2026, covering all tools with a single file is impossible.

Recommended strategy: Main instruction file + tool-specific links

Choose the main instruction file based on your primary AI tool, and have other tools reference it:

Pattern A: Claude Code Main + Cursor Sub

CLAUDE.md                  ← Write actual rules here
.cursor/rules/general.mdc  ← "See CLAUDE.md"

Pattern B: Cursor Main + Claude Code Sub

.cursor/rules/<project>.mdc  ← Write actual rules here (alwaysApply: true)
CLAUDE.md                     ← "See .cursor/rules/"

Choose the AI tool you primarily use as the main one. Write rules in the main tool’s instruction file and keep the sub tool’s instruction file as a lightweight reference to the main. This way, you only manage rules in one place.


6. Agent Memory and Context Drift#

Agent memory and context drift are two sides of the same coin.

Context drift is the phenomenon where the agent deviates from its original goal as conversations grow longer. Harness strategies for addressing this:

  1. Session separation: Use multiple short, purpose-specific sessions rather than one long session. Claude Code’s /compact command or starting a new Cursor chat serve this purpose.
  2. Connect via memory files: Record key conclusions in memory files at session end, and reference them in the next session.
  3. Leverage rule file priority: Rules recorded in .cursor/rules/ (alwaysApply) or CLAUDE.md are injected every turn regardless of session length, so core constraints must be in rule files.

[Context drift in long sessions]
  Session start → (rules + goals clear) → work progresses → ... → (context window saturated)
    → Initial rules' influence ↓ → drift occurs

[Memory-based short session strategy]
  Session 1: Work → record conclusions in memory → end session
  Session 2: Load memory → (rules + previous conclusions clear) → continue work

For developer-managed projects (blogs, personal projects, etc.), using markdown files within the repository (docs/decisions/, .memory/) is more practical than heavy automated memory systems. Simply instructing the agent to “add findings to .memory/learnings.md” is enough to establish cross-session knowledge continuity.


7. Pre-commit Hooks and Agents#

Pre-commit hooks are scripts registered at .git/hooks/pre-commit that automatically run on git commit, blocking the commit itself on rule violations.

The core principle of harness engineering: “What can be mechanically enforced should be mechanically enforced.”

| Method | Use Case | Enforcement |
| --- | --- | --- |
| Pre-commit hooks (scripts) | Formatting, linting, type checks — things with clear rules | 100% — commit blocked on violation |
| Natural language instructions (CLAUDE.md, etc.) | Design judgments, naming conventions — things requiring judgment | Advisory level |

Indirect communication with agents: Pre-commit hooks can’t directly invoke agents, but a structure in which the agent reads the hook’s error messages and responds to them works well in practice. The OpenAI Codex team leverages this by embedding fix instructions in linter error messages: the agent reads the error, fixes the code, and re-commits.

# Agent-friendly error message example
Error: line 42 - unused variable 'tempData'.
Fix: Remove the variable or use it in the fetchResult() call below.
Refer to docs/conventions/no-unused-vars.md for examples.
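A pre-commit check that produces such agent-friendly messages might look like this sketch; the front matter fields and referenced documents are hypothetical project rules, and the check itself is deliberately naive (a simple substring scan, not a YAML parse):

```python
#!/usr/bin/env python3
"""Pre-commit check sketch: verify posts declare required front matter fields.
Error messages tell the agent what to fix and where to look."""
import sys
from pathlib import Path

REQUIRED_FIELDS = ("title:", "date:")  # hypothetical project rule

def check_front_matter(path: Path) -> list[str]:
    text = path.read_text(encoding="utf-8")
    errors = []
    for field in REQUIRED_FIELDS:
        if field not in text:
            errors.append(
                f"Error: {path} - missing front matter field '{field.rstrip(':')}'.\n"
                f"Fix: add '{field} ...' to the front matter block at the top of the file.\n"
                f"Refer to CLAUDE.md, section 'Front matter', for the full schema."
            )
    return errors

def main(paths: list[str]) -> int:
    all_errors = [e for p in paths for e in check_front_matter(Path(p))]
    for e in all_errors:
        print(e, file=sys.stderr)
    return 1 if all_errors else 0  # non-zero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Registered as .git/hooks/pre-commit (or called from it with the staged file list), a non-zero exit blocks the commit, and the printed message gives the agent everything it needs to self-correct.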

8. Key Takeaways#

  1. Harness engineering is the discipline of designing systems (constraints, feedback loops, documentation, lifecycle management) that make AI agents reliable.
  2. Application scope varies by project characteristics. Personal projects only need Phase 1~2, multi-agent automation systems may need through Phase 5, and enterprise systems through Phase 7.
  3. You don’t need a perfect harness from the start — build incrementally by Phase, starting with basic context documents.
  4. Design agent memory as a harness component. Since agents are fundamentally stateless, supplement cross-session learning and knowledge accumulation with repository-based memory.
  5. Build flexible harnesses. As models improve, over-engineering becomes a burden — maintain a rippable structure that’s easy to remove.
  6. If you’re building a harness for the first time and wondering what to start with — start with verification loops for the best ROI.

Real-World Application: Applying Harness to This Blog#

While writing this guide, I simultaneously applied a harness to this blog project (Ted Factory). As a Type A (Personal / Static) project, I only applied Phases 1~2, and here’s what I actually set up:

Phase 1 — Agent Instructions + Verification Automation

  • .cursor/rules/ted-blog-common-rules.mdc: Main Cursor rules (writing style, front matter, content structure)
  • CLAUDE.md: Lightweight reference file for Claude Code
  • ARCHITECTURE.md: Project structure documentation
  • scripts/pre-commit: Hugo build validation, front matter required field validation, ko / en symmetry validation

Phase 2 — Agent Memory

  • .memory/learnings.md: Patterns and know-how discovered during work
  • .memory/current-focus.md: Current interests and priorities
  • docs/decisions/: Architecture Decision Records (ADR)

Changes I noticed after applying these two Phases:

  • When opening a new session, the agent immediately understands project conventions (writing style, front matter structure, deployment method)
  • Pre-commit hooks catch front matter omissions and build failures before commit, eliminating “mistakes discovered after deployment”
  • Thanks to know-how accumulated in .memory/learnings.md, the agent doesn’t repeat the same mistakes

Applying Phase 1~2 to a personal project takes about a day. The stability and consistency improvements you get in return are substantial. If you’re actively using AI agents, I believe this is the first thing worth doing.


Try It Right Now#

You can read through this guide and follow it step by step, but there’s an even simpler way: give this article’s URL to your AI agent and ask it to apply harness engineering to your project.

Agents like Cursor and Claude Code can read URLs, understand the content, and configure a harness for your project accordingly. When I applied a harness to this very blog, I took a similar approach — presenting a harness engineering design document to the agent and asking it to apply the structure.

In Cursor:

Open the project you want to harness in Cursor and type the following in the chat:

https://blog.iamted.kim/en/notes/essays/harness-engineering-guide/
Please apply harness engineering to this project based on the document above.

In Claude Code:

Run Claude Code from the root directory of the project you want to harness, and type:

https://blog.iamted.kim/en/notes/essays/harness-engineering-guide/
Please apply harness engineering to this project based on the document above.

The agent will read this guide, analyze your project’s characteristics, and apply the appropriate Phases for your project type. Of course, you don’t have to accept everything the agent suggests as-is — adjust it to fit your project’s actual situation.



© 2026 Ted Kim. All Rights Reserved. | Email Contact