<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>News on Ted Factory</title><link>https://blog.iamted.kim/en/news/</link><description>Recent content in News on Ted Factory</description><generator>Hugo</generator><language>en</language><lastBuildDate>Sat, 02 May 2026 10:19:07 +0900</lastBuildDate><atom:link href="https://blog.iamted.kim/en/news/index.xml" rel="self" type="application/rss+xml"/><item><title>AI News</title><link>https://blog.iamted.kim/en/news/ai-news/</link><pubDate>Wed, 29 Apr 2026 00:00:00 +0900</pubDate><guid>https://blog.iamted.kim/en/news/ai-news/</guid><description>&lt;h1 id="ai-news"&gt;AI News&lt;a class="anchor" href="#ai-news"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;p&gt;&lt;img src="https://blog.iamted.kim/images/news/ai-news.png" alt="AI News" /&gt;&lt;/p&gt;
&lt;p&gt;This group collects AI technology, product, developer-tool, infrastructure, and policy updates that the author considers worth tracking.&lt;/p&gt;
&lt;p&gt;This page is the index for individual AI News briefs. Brief pages do not appear directly in the left sidebar; instead, they are listed below in reverse chronological order.&lt;/p&gt;
&lt;h2 id="what-this-covers"&gt;What This Covers&lt;a class="anchor" href="#what-this-covers"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;AI models, agents, inference, multimodal systems, and on-device AI&lt;/li&gt;
&lt;li&gt;Major announcements from OpenAI, Anthropic, Google DeepMind, Meta AI, Microsoft, NVIDIA, and Hugging Face&lt;/li&gt;
&lt;li&gt;Developer tools such as Cursor, Claude Code, GitHub Copilot, MCP, evaluation tools, and deployment tools&lt;/li&gt;
&lt;li&gt;AI product launches, pricing changes, API updates, and changes that affect real usage&lt;/li&gt;
&lt;li&gt;AI infrastructure trends such as GPUs, inference cost, cloud services, and data centers&lt;/li&gt;
&lt;li&gt;Copyright, regulation, safety, and data usage policy&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="how-to-read"&gt;How To Read&lt;a class="anchor" href="#how-to-read"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Each brief is written to be skimmed in about five minutes.&lt;/li&gt;
&lt;li&gt;When more context is needed, follow the original article or video link inside each item.&lt;/li&gt;
&lt;li&gt;When interpretation matters more than the headline, each brief includes a short note on why it is worth tracking.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="latest-news"&gt;Latest News&lt;a class="anchor" href="#latest-news"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;div class="tf-news-card-list"&gt;
 &lt;a class="tf-news-card" href="https://blog.iamted.kim/en/news/ai-news/20260502-ai-brief/"&gt;
 &lt;div class="tf-news-card__body"&gt;
 &lt;h3 class="tf-news-card__title"&gt;2026-05-02 AI News Brief&lt;/h3&gt;
 &lt;p class="tf-news-card__summary"&gt;Cursor team marketplaces, GitHub Copilot model deprecations, Claude Security, Pentagon AI deals, and an Anthropic MCP video.&lt;/p&gt;</description></item><item><title>2026-04-30 AI News Brief</title><link>https://blog.iamted.kim/en/news/ai-news/20260430-ai-brief/</link><pubDate>Thu, 30 Apr 2026 00:00:00 +0900</pubDate><guid>https://blog.iamted.kim/en/news/ai-news/20260430-ai-brief/</guid><description>&lt;h1 id="2026-04-30-ai-news-brief"&gt;2026-04-30 AI News Brief&lt;a class="anchor" href="#2026-04-30-ai-news-brief"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;p&gt;Here is a short summary of AI technology news and videos worth checking today. Since there was no previous brief, this edition uses the last seven days as the default review window.&lt;/p&gt;
&lt;h2 id="quick-summary"&gt;Quick Summary&lt;a class="anchor" href="#quick-summary"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Cursor released a TypeScript SDK for the same agent runtime used across its desktop app, CLI, and web app.&lt;/li&gt;
&lt;li&gt;OpenAI models, Codex, and Managed Agents are coming to Amazon Bedrock, widening the enterprise deployment path.&lt;/li&gt;
&lt;li&gt;OpenAI published Symphony, a spec for orchestrating Codex runs around issue trackers and isolated workspaces.&lt;/li&gt;
&lt;li&gt;NVIDIA introduced Nemotron 3 Nano Omni, an open multimodal model for vision, audio, image, and text reasoning.&lt;/li&gt;
&lt;li&gt;YouTube is testing Ask YouTube, a conversational search experience that blends text answers and video results.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="top-stories"&gt;Top Stories&lt;a class="anchor" href="#top-stories"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;h3 id="cursor-releases-its-sdk"&gt;Cursor Releases Its SDK&lt;a class="anchor" href="#cursor-releases-its-sdk"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What happened?&lt;/strong&gt; Cursor released a TypeScript SDK that exposes the agent runtime and models behind its desktop app, CLI, and web app. Developers can install &lt;code&gt;@cursor/sdk&lt;/code&gt;, run agents locally or on Cursor cloud VMs, and stream events into their own workflows.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why it matters&lt;/strong&gt; Cursor is moving beyond an IDE product toward an agent execution platform. For developer tool builders, this is another signal that the runtime layer for launching, observing, and controlling agents is becoming a product category of its own.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Point to watch&lt;/strong&gt; For Ted Factory-style personal projects, the SDK approach may make it easier to attach task-level agents to repeatable workflows.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://cursor.com/changelog/sdk-release" target="_blank"&gt;Read the Cursor SDK announcement&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="openai-models-codex-and-managed-agents-come-to-aws"&gt;OpenAI Models, Codex, and Managed Agents Come to AWS&lt;a class="anchor" href="#openai-models-codex-and-managed-agents-come-to-aws"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What happened?&lt;/strong&gt; OpenAI and AWS expanded their partnership with OpenAI models, Codex, and Amazon Bedrock Managed Agents powered by OpenAI entering limited preview. AWS customers can use models such as GPT-5.5 and Codex inside Bedrock while relying on AWS security, billing, and governance controls.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why it matters&lt;/strong&gt; OpenAI agents and models are moving directly into enterprise cloud infrastructure. That gives companies a more familiar path to adoption without building a separate security and procurement model from scratch.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Point to watch&lt;/strong&gt; Codex support through the Bedrock API, starting with CLI, desktop app, and VS Code extension access, shows how quickly coding agents are becoming enterprise deployment targets.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://openai.com/index/openai-on-aws/" target="_blank"&gt;Read the OpenAI announcement&lt;/a&gt;, &lt;a href="https://aws.amazon.com/about-aws/whats-new/2026/04/bedrock-openai-models-codex-managed-agents/" target="_blank"&gt;Read the AWS announcement&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="openai-publishes-symphony-for-codex-orchestration"&gt;OpenAI Publishes Symphony for Codex Orchestration&lt;a class="anchor" href="#openai-publishes-symphony-for-codex-orchestration"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What happened?&lt;/strong&gt; OpenAI published Symphony, an open-source spec for orchestrating Codex runs. The spec describes a long-running service that polls an issue tracker, creates an isolated workspace per issue, and launches a coding-agent session for that issue.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why it matters&lt;/strong&gt; The coding-agent bottleneck is shifting from “can the model write code?” to “which task should run, in which isolated environment, with what observability and retry behavior?” Symphony treats that operational layer as an explicit system design problem.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Point to watch&lt;/strong&gt; This is closely connected to harness engineering. Agent work is becoming less like a single prompt and more like a system of issues, workspaces, retries, and observable runs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://openai.com/index/open-source-codex-orchestration-symphony/" target="_blank"&gt;Read the OpenAI announcement&lt;/a&gt;, &lt;a href="https://github.com/openai/symphony/blob/main/SPEC.md" target="_blank"&gt;Read the Symphony spec&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="nvidia-introduces-nemotron-3-nano-omni"&gt;NVIDIA Introduces Nemotron 3 Nano Omni&lt;a class="anchor" href="#nvidia-introduces-nemotron-3-nano-omni"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What happened?&lt;/strong&gt; NVIDIA introduced Nemotron 3 Nano Omni, an open multimodal model that combines vision, audio, image, and text reasoning. NVIDIA says the model reduces latency and cost versus stitching together separate perception models, with up to 9x higher throughput under comparable interactive conditions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why it matters&lt;/strong&gt; Agents that work with screens, documents, audio, and video need fast multimodal perception. Nemotron 3 Nano Omni points toward a pattern where efficient perception submodels support larger agent workflows instead of handing every step to a frontier model.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Point to watch&lt;/strong&gt; It is worth tracking as a potential lower-level component for computer-use agents, document intelligence, and audio/video automation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://blogs.nvidia.com/blog/nemotron-3-nano-omni-multimodal-ai-agents/" target="_blank"&gt;Read the NVIDIA announcement&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="youtube-tests-ask-youtube"&gt;YouTube Tests Ask YouTube&lt;a class="anchor" href="#youtube-tests-ask-youtube"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What happened?&lt;/strong&gt; YouTube is testing Ask YouTube, a conversational search experiment for U.S. Premium subscribers aged 18 or older. The feature returns text summaries, long-form videos, Shorts, and relevant video segments in response to natural-language questions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why it matters&lt;/strong&gt; Video search is moving from a list of videos toward a blended answer interface with summaries, evidence, and follow-up questions. That could change both content discovery and creator visibility.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Point to watch&lt;/strong&gt; When using YouTube as a source for future briefs, the important artifact may become not only the video itself but also the AI-generated segments and summaries around it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://www.theverge.com/streaming/919441/google-ask-youtube-ai-chatbot-search" target="_blank"&gt;Read The Verge coverage&lt;/a&gt;, &lt;a href="https://techcrunch.com/2026/04/28/youtube-is-testing-an-ai-powered-search-feature-that-shows-guided-answers/" target="_blank"&gt;Read TechCrunch coverage&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="youtube-brief"&gt;YouTube Brief&lt;a class="anchor" href="#youtube-brief"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;h3 id="autoresearch-agent-loops-and-the-future-of-work"&gt;Autoresearch, Agent Loops and the Future of Work&lt;a class="anchor" href="#autoresearch-agent-loops-and-the-future-of-work"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Channel&lt;/strong&gt;: The AI Daily Brief&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Key idea&lt;/strong&gt; The episode uses Andrej Karpathy&amp;rsquo;s Autoresearch project to explain a loop-based workflow where agents run experiments, keep only improvements, and revert failed attempts. It connects fixed time budgets, single evaluation metrics, rollback behavior, and committed improvements to the future of research and product experimentation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why watch&lt;/strong&gt; It is useful for understanding that agent work is becoming less about one-off answers and more about repeatable experiment loops. That connects directly to harnesses, workspace isolation, and evaluation design.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Video&lt;/strong&gt;: &lt;a href="https://www.youtube.com/watch?v=nt9j1k2IhUY" target="_blank"&gt;Watch the video&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>2026-05-02 AI News Brief</title><link>https://blog.iamted.kim/en/news/ai-news/20260502-ai-brief/</link><pubDate>Sat, 02 May 2026 00:00:00 +0900</pubDate><guid>https://blog.iamted.kim/en/news/ai-news/20260502-ai-brief/</guid><description>&lt;h1 id="2026-05-02-ai-news-brief"&gt;2026-05-02 AI News Brief&lt;a class="anchor" href="#2026-05-02-ai-news-brief"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;p&gt;Here is a short summary of AI technology news and videos worth checking today. This edition focuses on updates from May 1-2, published since the previous brief, and also includes Claude Security&amp;rsquo;s April 30 public beta, which the previous brief did not cover.&lt;/p&gt;
&lt;h2 id="quick-summary"&gt;Quick Summary&lt;a class="anchor" href="#quick-summary"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Cursor now lets admins create team marketplaces for plugins without first connecting a repository.&lt;/li&gt;
&lt;li&gt;GitHub Copilot will deprecate GPT-5.2 and GPT-5.2-Codex on June 1 and has named replacement models.&lt;/li&gt;
&lt;li&gt;Claude Security is now in public beta for Enterprise customers, offering vulnerability scans and proposed fixes.&lt;/li&gt;
&lt;li&gt;The U.S. Department of Defense expanded AI agreements for classified networks across several major AI providers.&lt;/li&gt;
&lt;li&gt;Anthropic&amp;rsquo;s MCP video explains how the Model Context Protocol works with the Claude API and agent systems.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="top-stories"&gt;Top Stories&lt;a class="anchor" href="#top-stories"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;h3 id="cursor-strengthens-team-marketplace-settings"&gt;Cursor Strengthens Team Marketplace Settings&lt;a class="anchor" href="#cursor-strengthens-team-marketplace-settings"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What happened?&lt;/strong&gt; Cursor now lets admins create a team marketplace without connecting a repository first. Team marketplaces can distribute plugins that bundle MCP servers, skills, subagents, rules, and hooks, with each plugin set to Default Off, Default On, or Required.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why it matters&lt;/strong&gt; Agent tooling is moving from individual preference into team-level operations. For organizations, the question of which tools and permissions agents should receive can now be managed as policy instead of being left to each developer&amp;rsquo;s local setup.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Point to watch&lt;/strong&gt; For harness engineering, plugin bundles, execution permissions, and team defaults are becoming part of the system design.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://cursor.com/changelog/05-01-26" target="_blank"&gt;Read the Cursor announcement&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="github-copilot-plans-gpt-52-model-deprecations"&gt;GitHub Copilot Plans GPT-5.2 Model Deprecations&lt;a class="anchor" href="#github-copilot-plans-gpt-52-model-deprecations"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What happened?&lt;/strong&gt; GitHub announced that GPT-5.2 and GPT-5.2-Codex will be deprecated across Copilot experiences on June 1, 2026. GitHub recommends GPT-5.5 as the replacement for GPT-5.2 and GPT-5.3-Codex as the replacement for GPT-5.2-Codex.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why it matters&lt;/strong&gt; Coding-agent workflows depend on model choice for quality, cost, speed, and policy. Copilot Enterprise admins in particular need to check model policies and make sure their workflows are not pinned to models that are going away.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Point to watch&lt;/strong&gt; Teams running long-lived agents or automated code review should avoid hardcoding model names into operational workflows.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://github.blog/changelog/2026-05-01-upcoming-deprecation-of-gpt-5-2-and-gpt-5-2-codex/" target="_blank"&gt;Read the GitHub Changelog&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="claude-security-enters-public-beta"&gt;Claude Security Enters Public Beta&lt;a class="anchor" href="#claude-security-enters-public-beta"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What happened?&lt;/strong&gt; Anthropic released Claude Security in public beta for Claude Enterprise customers. Claude Security scans codebases for vulnerabilities, explains severity and reproduction details, proposes patch directions, and can hand off fixes into Claude Code on the Web.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why it matters&lt;/strong&gt; Security review is expanding from static pattern detection toward agentic analysis that understands code flow and business logic. At the same time, the same capabilities can increase exploitability if misused, so Anthropic also highlights cyber safeguards and its Cyber Verification Program.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Point to watch&lt;/strong&gt; For development teams, the real productivity metric may be the time from scan to a mergeable patch, not just raw finding count.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://claude.com/blog/claude-security-public-beta" target="_blank"&gt;Read the Claude announcement&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="pentagon-expands-classified-network-ai-deals"&gt;Pentagon Expands Classified-Network AI Deals&lt;a class="anchor" href="#pentagon-expands-classified-network-ai-deals"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What happened?&lt;/strong&gt; According to TechCrunch and The Verge, the U.S. Department of Defense signed agreements with NVIDIA, Microsoft, Amazon Web Services, and Reflection AI to deploy their AI technology and models on classified networks for &amp;ldquo;lawful operational use.&amp;rdquo; The reports say the broader set of agreements includes seven companies, including OpenAI, Google, and xAI, while Anthropic remains excluded amid a dispute over safety terms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why it matters&lt;/strong&gt; AI models and infrastructure are moving quickly into military and national-security environments. This is a live example of AI company use policies, government procurement, safety guardrails, and cloud security requirements colliding.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Point to watch&lt;/strong&gt; The usable scope of commercial AI tools can change dramatically based on contract language and policy decisions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source&lt;/strong&gt;: &lt;a href="https://techcrunch.com/2026/05/01/pentagon-inks-deals-with-nvidia-microsoft-and-aws-to-deploy-ai-on-classified-networks/" target="_blank"&gt;Read TechCrunch coverage&lt;/a&gt;, &lt;a href="https://www.theverge.com/ai-artificial-intelligence/922113/pentagon-ai-classified-openai-google-nvidia" target="_blank"&gt;Read The Verge coverage&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="youtube-brief"&gt;YouTube Brief&lt;a class="anchor" href="#youtube-brief"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;h3 id="building-with-mcp-and-the-claude-api"&gt;Building with MCP and the Claude API&lt;a class="anchor" href="#building-with-mcp-and-the-claude-api"&gt;#&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Channel&lt;/strong&gt;: Anthropic&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Key idea&lt;/strong&gt; Anthropic&amp;rsquo;s Alex Albert, John Welsh, and Michael Cohen explain the origins of the Model Context Protocol (MCP) and how MCP works with the Claude API. They frame MCP as a universal connector between models and external tools or data sources, then cover remote MCP, registries, the Claude API MCP connector, and tool-design principles.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why watch&lt;/strong&gt; Agents need more than stronger models to work inside real business systems; they need connection patterns, permissions, and well-described tools. This is a useful overview for readers tracking Claude, Cursor, and other agent runtimes together.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Video&lt;/strong&gt;: &lt;a href="https://www.youtube.com/watch?v=aZLr962R6Ag" target="_blank"&gt;Watch the video&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>