AWS is using re:Invent 2025 to make agentic AI the core of its enterprise AI story, anchored by new “frontier agents,” major upgrades to Amazon Bedrock AgentCore, durable workflows on Lambda, and deeper support for MCP-style tool integration and observability. The combined message: AWS wants to be the runtime, control plane, and ecosystem where long‑running, autonomous, enterprise‑safe agents live, not just another place to host models.
Big Picture: AWS’ Agentic AI Thesis
AWS leadership is explicitly framing agents as the next platform shift after cloud itself, with Matt Garman saying the majority of future enterprise AI value will come from agents and predicting “billions of agents” embedded in every company. The re:Invent narrative emphasizes moving beyond chatbots and copilots toward autonomous “digital workers” that run for hours or days, learn organizational preferences, and coordinate in swarms across development, security, operations, and customer experience. AWS also ties this to a full‑stack posture—custom silicon, Nova model families, sovereignty options, and an “agent runtime” that aims to remove heavy lifting around tool integration, governance, and operations.
New “frontier agents” for Dev, Security, and Ops
AWS introduced three “frontier agents” as a new class of autonomous, long‑running agents that act as virtual team members rather than simple copilots.
- Kiro autonomous agent: A virtual developer that maintains long‑lived context across sessions, learns from pull requests and feedback, and can independently handle tasks like bug triage, code improvements, and multi‑repo changes over extended periods.
- AWS Security Agent: A virtual security engineer focused on secure design, code review, and penetration‑testing style analysis, designed to run continuously and surface risks across the SDLC.
- AWS DevOps Agent: A virtual SRE/operations teammate that detects, diagnoses, and helps remediate incidents using observability data from tools like CloudWatch, Datadog, New Relic, Dynatrace, Splunk, and others, and learns from past incidents to make proactive reliability recommendations.
These frontier agents are positioned as autonomous, scalable, and able to operate without constant human supervision; they are available in preview with tight integration into AWS tooling and partner ecosystems such as New Relic’s MCP Server for intelligent incident insights.
Amazon Bedrock AgentCore: Policy, Evals, Memory
Amazon is heavily upgrading Bedrock AgentCore as the control plane for building enterprise‑grade agents, with three flagship capabilities announced this week.
- Policy (preview): Natural‑language, fine‑grained policy that intercepts AgentCore Gateway tool calls before execution, acting outside the agent’s reasoning loop to block unauthorized actions deterministically without adding meaningful latency; a minimal sketch of this interception pattern follows this list.
- Evaluations: Built‑in evaluators (e.g., correctness, safety, helpfulness) plus custom dimensions allow continuous monitoring of real‑world agent behavior and trigger alerts when quality drifts, reducing guesswork when promoting agents from POC to production.
- Memory (episodic): Structured, episodic memory that logs context, reasoning, actions, and outcomes, with a reflection process that distills reusable learnings so agents can improve over time without bloated prompts.
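To make the Policy interception model concrete, here is a minimal, purely illustrative Python sketch of the pattern described above: a gateway that evaluates deterministic rules against each tool call before execution, outside the agent’s reasoning loop. All names (ToolCall, PolicyRule, Gateway) are invented for this sketch and are not the AgentCore API.

```python
# Illustrative only: a hypothetical pre-execution policy check in the spirit of
# AgentCore Policy. Names (ToolCall, PolicyRule, Gateway) are invented here and
# are not the AgentCore API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToolCall:
    tool: str        # e.g. "payments.refund"
    args: dict       # arguments produced by the agent's reasoning loop
    principal: str   # identity the agent is acting on behalf of


@dataclass
class PolicyRule:
    description: str                    # natural-language intent, kept for audit
    check: Callable[[ToolCall], bool]   # deterministic predicate


class Gateway:
    """Intercepts every tool call *before* execution, outside the model loop."""

    def __init__(self, rules: list[PolicyRule], tools: dict[str, Callable[..., object]]):
        self.rules = rules
        self.tools = tools

    def invoke(self, call: ToolCall) -> object:
        for rule in self.rules:
            if not rule.check(call):
                # Denials are deterministic: the model cannot "talk its way" past them.
                raise PermissionError(f"blocked by policy: {rule.description}")
        return self.tools[call.tool](**call.args)


# Example rule: only support principals may issue refunds, and only up to $500.
rules = [
    PolicyRule(
        "Refunds may only be issued by support staff and must not exceed $500",
        lambda c: c.tool != "payments.refund"
        or (c.principal.startswith("support/") and c.args.get("amount", 0) <= 500),
    ),
]

tools = {"payments.refund": lambda order_id, amount: f"refunded {amount} on {order_id}"}
gw = Gateway(rules, tools)

print(gw.invoke(ToolCall("payments.refund", {"order_id": "A1", "amount": 120}, "support/alice")))
# gw.invoke(ToolCall("payments.refund", {"order_id": "A1", "amount": 9000}, "agent/bot"))  # raises PermissionError
```

The point of the pattern is the separation of concerns: the agent proposes actions, but an external, deterministic layer decides whether they run.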
AWS highlights that organizations ranging from Thomson Reuters and S&P Global to Workday and Swisscom are already using AgentCore to accelerate agents into production, positioning these capabilities as key for regulated and large‑scale deployments.
Workflow and orchestration: Lambda, Bedrock, EKS
AWS is pairing agent platforms with new primitives for multi‑step reasoning, long‑running flows, and orchestration.
- Lambda Durable Functions: New Lambda functionality lets developers implement durable, multi‑step workflows in code that can run for up to a year, pausing for external events or human approvals without paying for idle compute, with dedicated durable execution events and local testing tools (the pattern is sketched after this list).
- Bedrock agent orchestration: Bedrock Agents remain focused on reasoning‑driven decomposition, tool calling, and multi‑agent collaboration, with AgentCore acting as the governance and evaluation layer, and S3 Vectors GA expanding cost‑efficient vector storage for retrieval‑heavy agents.
- EKS and multi‑agent systems: AWS is flagging sessions and guidance on running sophisticated multi‑agent AI systems on Amazon EKS, including secure agent‑to‑agent communication and scale to ultra‑large Kubernetes clusters.
The message for enterprises is that agents can span UI automation (Nova Act), back‑end tools, and event‑driven workflows using a mix of Bedrock, Lambda, Step‑Functions‑like patterns, and containers.
Customer‑facing agents: Amazon Connect and Nova Act
AWS is extending agentic capabilities directly into customer experience and UI automation.
- Amazon Connect agentic AI: Connect now offers AI agents that understand context, reason, and act across voice and messaging, backed by Nova Sonic voice models for more natural, multilingual speech and built‑in observability showing what the agent understood, which tools it used, and why.
- Nova Act GA: Nova Act is now generally available as a service for building reliable UI workflow agents that automate browser tasks like form filling, shopping/booking, data extraction, and QA testing, with AWS claiming >90% reliability for enterprise scenarios (see the illustrative sketch below).
These launches are pitched as combining deterministic flows with agentic reasoning so enterprises can mix scripted steps with autonomous behavior while retaining control and monitoring.
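As a rough illustration of what a UI workflow agent looks like in practice, here is a hedged sketch in the style of the Nova Act research‑preview Python SDK. The package name, class, and `.act()` call pattern follow that preview SDK and may differ in the GA service, so treat all names and return handling as assumptions; the store URL and task are placeholders.

```python
# Hedged sketch of a Nova Act-style UI workflow. The import, NovaAct class, and
# .act() pattern follow the research-preview SDK and may differ at GA; names,
# arguments, and return handling here are assumptions, and the URL is a placeholder.
from nova_act import NovaAct  # assumed package name from the preview SDK


def check_in_stock(product: str) -> bool:
    # Each .act() call hands the agent a small, scoped natural-language
    # instruction, keeping individual browser actions short and checkable.
    with NovaAct(starting_page="https://example-store.com") as nova:
        nova.act(f"search for {product}")
        nova.act("open the first search result")
        result = nova.act("report whether the item is shown as in stock, answer yes or no")
        return "yes" in str(result).lower()


if __name__ == "__main__":
    print(check_in_stock("ergonomic keyboard"))
```

Breaking a browser task into several narrow `.act()` steps, rather than one open-ended instruction, is how these tools combine scripted structure with agentic reasoning.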
Security, Governance, Compliance, and Guardrails
Security and governance are central to the agent story, spanning Bedrock, frontier agents, and broader cloud operations.
- AgentCore Policy and Guardrails: Policy and Guardrails together create layered defenses—Guardrails constrain model outputs and content, while Policy explicitly controls tool and data access, providing a strong story for least‑privilege access and separation of concerns.
- AWS Security Agent: Positioned as a persistent security teammate that continuously analyzes architectures and code, surfaces vulnerabilities, and can be used in design and code‑review workflows to enforce secure‑by‑default practices.
- Observability and incident posture: New Relic and AWS integrations let DevOps Agent call an MCP‑based observability server for scoped, intelligent insights and proposed remediation, while broader Cloud Operations talks focus on monitoring distributed AI agents wherever they run.
New and enhanced AWS Support tiers also add AI‑powered support with faster response times and deep integration into agent‑driven DevOps workflows, allowing one‑click escalation with full context when frontier agents encounter issues.
MCP, Open Standards, and Integrations
AWS is explicitly embracing Model Context Protocol (MCP) as a bridge between existing tools and agent runtimes, while still anchoring agents on Bedrock and AWS infrastructure.
- API Gateway MCP proxy: Amazon API Gateway now supports an MCP proxy so existing REST APIs can be exposed as MCP‑compatible endpoints consumable by Bedrock AgentCore Gateway and other MCP clients, including third‑party assistants (the request shape is sketched after this list).
- EKS + MCP workflows: AWS is highlighting MCP integration with EKS to support context‑aware Kubernetes workflows and secure multi‑agent systems, giving dev and platform teams a path from Kubernetes tooling into agent ecosystems.
- Partner MCP servers: New Relic and IBM are both launching MCP‑based components—New Relic’s MCP Server feeds observability into AWS DevOps Agent, and IBM’s ContextForge MCP gateway/registry helps enterprises discover, secure, and manage agentic resources across AWS.
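To ground what “exposing a REST API as an MCP endpoint” means, here is a minimal Python sketch of the JSON‑RPC 2.0 `tools/call` message an MCP client (such as AgentCore Gateway) would send to such an endpoint. The endpoint URL and tool name are placeholders; a real client also performs the MCP initialize handshake and session handling, which a client library would normally do for you.

```python
# Illustrative only: the shape of an MCP "tools/call" request sent to a tool
# endpoint, e.g. a REST API fronted by the API Gateway MCP proxy. The URL and
# tool name are placeholders; real clients also run the MCP initialize handshake.
import json
import urllib.request

MCP_ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/mcp"  # placeholder

payload = {
    "jsonrpc": "2.0",          # MCP messages are JSON-RPC 2.0
    "id": 1,
    "method": "tools/call",    # standard MCP method for invoking a tool
    "params": {
        "name": "getOrderStatus",               # hypothetical tool mapped from a REST route
        "arguments": {"orderId": "ord-123"},
    },
}

req = urllib.request.Request(
    MCP_ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["result"])   # tool output, relayed by the proxy
```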
This approach lets AWS participate in the broader MCP ecosystem while keeping Bedrock and AgentCore as the default control planes, signaling a “pragmatic openness” rather than fully ceding runtime control to external standards.
Competitive Positioning vs Google, Microsoft, and Others
AWS is directly contrasting its agent strategy with other hyperscalers and early agent platforms.
- Against Microsoft and Google: Coverage of Garman’s keynote references Gemini and other flagship models but stresses AWS’ focus on long‑running agents, cloud‑integrated observability, and full‑stack control rather than model quality alone. AWS also underscores that while Microsoft is embedding agents deeply into Microsoft 365, AWS wants to be the neutral infrastructure layer and agent runtime across many SaaS ecosystems.
- Against Salesforce and others: Commentary compares AWS’ frontier agents to Salesforce’s Agentforce, suggesting AWS is catching up on narrative but differentiating with depth of cloud integration, custom silicon, and breadth of partner tooling.
- With open standards: By adopting MCP in API Gateway and highlighting MCP‑based partners, AWS is signaling it wants agents built on Bedrock to interoperate with non‑AWS ecosystems, but with AWS services as the primary hosting and governance layer.
Overall, AWS is positioning itself as the safest and most operationally mature place to run agents that directly touch production systems and regulated data, rather than just a place to experiment with LLM prompts.
Ecosystem, Customers, and Early Reactions
Early reactions from partners, analysts, and media emphasize that AWS is “going all in” on agents and making them central to software delivery and operations.
- Enterprise adopters: AWS cites customers like Thomson Reuters, Workday, Swisscom, S&P Global Market Intelligence, and others using AgentCore to bring agents into production with governance and evaluation baked in.
- Observability and DevOps ecosystem: New Relic is tightly coupling its observability platform with DevOps Agent and Quick Suite using MCP, and vendors like Dynatrace, Datadog, and Splunk are showcased as telemetry sources DevOps Agent uses for root cause analysis.
- Industry commentary: Articles and LinkedIn posts frame re:Invent 2025 as “agentic AI’s coming‑out party,” underscoring how Connect, frontier agents, and AgentCore together move AWS from chatbots to autonomous, collaborative digital workers across the stack.
This ecosystem narrative reinforces that agents are not standalone features; they are woven through developer tools, support, observability, and line‑of‑business platforms.
Conclusion
AWS is making a clear bet that agentic AI will be the next big shift in how enterprises build and run software, and the moves at this re:Invent lay the groundwork for that future. With frontier agents, stronger Bedrock AgentCore controls, and deeper links into tools like Lambda, Connect, and observability platforms, AWS is turning agents from simple chat helpers into trusted digital teammates that can own real work. For leaders, the message is simple: start small with focused agents in development, security, operations, or customer service; use the new policy and evaluation tools to keep them safe; and treat agentic AI as a new layer in the stack that will grow over time across the business.
About the Author

Rejith Krishnan
Rejith Krishnan is the Founder and CEO of lowtouch.ai, a platform dedicated to empowering enterprises with private, no-code AI agents. With expertise in Site Reliability Engineering (SRE), Kubernetes, and AI systems architecture, he is passionate about simplifying the adoption of AI-driven automation to transform business operations.
Rejith specializes in deploying Large Language Models (LLMs) and building intelligent agents that automate workflows, enhance customer experiences, and optimize IT processes, all while ensuring data privacy and security. His mission is to help businesses unlock the full potential of enterprise AI with seamless, scalable, and secure solutions that fit their unique needs.




