AI Agents in the Cloud: Navigating a New Shared Responsibility Model

The foundational principles of cloud security are being put to the test by a new class of workload: autonomous AI agents. The excellent analysis "Security Risks and Shared Responsibility in Cloud Migration" provides a crucial framework for understanding this challenge. It clearly outlines how the shared responsibility model differs between IaaS and SaaS, and how risks like misconfigurations, insecure APIs, and IAM failures manifest in each. When we introduce AI agents—entities that can act, decide, and invoke APIs on our behalf—these foundational risks are not just amplified; they are fundamentally transformed.

This post examines how organizations must adapt their security strategies for AI agents across cloud models and provides concrete recommendations for safe adoption.

IaaS vs. SaaS: The Agent Security Divide

The article correctly emphasizes that the division of responsibility is radically different between IaaS and SaaS. This dictates your entire approach to securing an AI agent.

In IaaS, You Own the Stack and the Agent. In a model like AWS EC2 or Azure VMs, you are responsible for the operating system, application, data, and network configurations. An AI agent you deploy here is just another application—but one with extraordinary privileges. It may need to read databases, write to storage, or call internal and external APIs. This makes it a prime target. Your responsibility is total: you must secure the agent’s runtime, its access to tools and data, and its underlying host. The provider’s responsibility stops at the hypervisor.

In SaaS, You Govern Access and Data. With a service like Microsoft 365 or a managed AI offering like Claude Enterprise, the provider secures the application and infrastructure. Your focus shifts entirely to data classification, access control, and identity management. The question is not "How do I patch my agent's OS?" but "What corporate data can this agent access, and who can ask it to act?" You are the steward of the data, not the operator of the infrastructure.

Should You Use AI Agents (Like Claude) in Production?

Yes, but with a phased, risk-based approach. The analysis from 7312.us warns of “complex existing architecture” and “inadequate access controls” as major migration risks. Introducing an agent into a poorly governed environment is asking for trouble. Before production deployment, you must have mature controls in place.

Consider these three deployment postures:

| Posture | When to Use | Key Controls Required |
| --- | --- | --- |
| Lockdown | High-regulation industries (finance, healthcare); initial experimentation | Agents disabled by default; strict tenant restrictions to prevent shadow IT; network-level blocks on personal agent accounts. |
| Controlled | Most enterprises, for general productivity | Agents enabled, but with browser plugins disabled, allowlisted tool integrations (MCP servers), and read-only default permissions. Mandatory user training on prompt injection. |
| Open | Innovation teams, low-sensitivity data, mature security teams | Browser use enabled with blocklists; user-installed plugins permitted; relies on heavy real-time monitoring and rapid incident response. |
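The three postures lend themselves to machine-readable policy. Below is a minimal sketch in Python; the field names are illustrative, not any vendor's schema, and would map onto whatever policy engine or MDM tooling you actually use:

```python
# Hypothetical policy definitions mirroring the three deployment postures.
# Every field name here is illustrative, not a real vendor setting.
POSTURES = {
    "lockdown": {
        "agents_enabled": False,
        "browser_plugins": False,
        "user_installed_plugins": False,
        "tool_integrations": "none",       # no MCP servers at all
        "default_permissions": "none",
        "block_personal_accounts": True,
    },
    "controlled": {
        "agents_enabled": True,
        "browser_plugins": False,
        "user_installed_plugins": False,
        "tool_integrations": "allowlist",  # only vetted MCP servers
        "default_permissions": "read-only",
        "block_personal_accounts": True,
    },
    "open": {
        "agents_enabled": True,
        "browser_plugins": True,           # enabled, with blocklists
        "user_installed_plugins": True,
        "tool_integrations": "blocklist",
        "default_permissions": "read-only",
        "block_personal_accounts": False,
    },
}

def is_allowed(posture: str, feature: str) -> bool:
    """Return True if a boolean feature flag is enabled under a posture."""
    return bool(POSTURES[posture].get(feature, False))
```

Encoding the posture explicitly makes drift visible: a feature request then becomes a reviewed policy diff rather than an ad-hoc console change.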

A significant hurdle is the current audit gap. For many enterprise agents, detailed activity logs (every prompt, every tool call) aren’t available via standard compliance APIs. Until audit capabilities mature, avoid using agents for regulated workloads (involving PII, financial transactions, etc.) unless you have verified, immutable logging and can reconstruct events for forensic analysis.
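Until provider audit APIs mature, one stopgap is to hash-chain your own record of agent activity so that tampering is at least detectable. A minimal sketch (the event fields are illustrative, and this complements rather than replaces true WORM/immutable storage):

```python
import hashlib
import json

def append_event(log: list, event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash,
    forming a tamper-evident chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash; an edited or dropped entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_event(log, {"prompt": "summarize Q3 report", "tool": "sharepoint.read"})
append_event(log, {"prompt": "email the summary", "tool": "smtp.send"})
assert verify_chain(log)

# Tampering with a recorded event is detected:
log[0]["event"]["tool"] = "sharepoint.write"
assert not verify_chain(log)
```

This does not solve the audit gap—you can only chain what the agent surface lets you capture—but it gives forensics a log whose integrity you can attest to.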

Specific Recommendations for IT Professionals

Drawing from the mitigation strategies in the 7312.us article, here are actionable steps tailored for the agentic era.

For IaaS-Based Agents

  • Sandbox Aggressively: Treat the agent as untrusted code. Use kernel-level isolation (seccomp, cgroups) or lightweight VMs (Firecracker) to contain a potential compromise.
  • Authenticate with Short-Lived Credentials: Never hardcode API keys. Use instance metadata services or secret managers (like HashiCorp Vault or AWS Secrets Manager) to vend just-in-time credentials with automatic rotation.
  • Filter Inputs and Outputs: Deploy a pre-inference prompt guard (e.g., Model Armor, Azure Content Safety, or open-source alternatives) to block prompt injection attempts and prevent leakage of PII or secrets.
  • Log Every Tool Call: Treat each function invocation as a critical security event. Feed these logs to a SIEM and establish baselines to alert on anomalous behavior.
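To make "log every tool call" concrete, here is a minimal sketch of a decorator that emits one structured event per invocation; the field names and the `ship_to_siem` hook are illustrative stand-ins for your actual SIEM pipeline:

```python
import functools
import json
import time

def ship_to_siem(event: dict) -> None:
    # Illustrative stand-in: in production, forward to your SIEM
    # (syslog, HTTP collector, log agent) instead of printing.
    print(json.dumps(event, sort_keys=True))

def audited_tool(func):
    """Wrap an agent tool so every invocation and its outcome is logged,
    whether the call succeeds or raises."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        event = {
            "tool": func.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
            "ts": time.time(),
        }
        try:
            result = func(*args, **kwargs)
            event["outcome"] = "success"
            return result
        except Exception as exc:
            event["outcome"] = f"error: {exc}"
            raise
        finally:
            ship_to_siem(event)
    return wrapper

@audited_tool
def read_database(table: str) -> str:
    # Hypothetical agent tool used for demonstration.
    return f"rows from {table}"

read_database("customers")  # emits one structured audit event
```

Once every tool passes through a wrapper like this, baselining is straightforward: alert when an agent suddenly calls a tool it has never used, or at a rate far above its history.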

For SaaS-Based Agents

  • Start with the Provider’s Default Secure Settings: Resist the urge to enable all features immediately. Begin with the most restrictive data access policies and expand only as justified by business need.
  • Enforce Tenant Restrictions: Use provider-specific headers (like anthropic-allowed-org-ids) at your proxy layer to prevent employees from using personal, unmanaged agent accounts with corporate data.
  • Apply Least Privilege at the Data Layer: Don’t give the agent access to “all SharePoint.” Use tools like Microsoft Purview to label sensitive data and configure the agent so it can only access data classified as “General” or “Internal,” for example.
  • Conduct Continuous User Training: Teach users that agents can be manipulated. Train them to recognize and report sophisticated prompt injection attacks that try to trick the agent into performing unauthorized actions.
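The tenant-restriction control above can be enforced at your egress proxy. A minimal sketch of the header handling, using the `anthropic-allowed-org-ids` header mentioned above (the org ID value and function name are illustrative):

```python
# Illustrative placeholder for your corporate organization ID.
ALLOWED_ORG_IDS = "org-id-corporate-example"

def enforce_tenant_restriction(request_headers: dict) -> dict:
    """At the egress proxy, overwrite (never merely append) the tenant
    header so requests from personal, unmanaged accounts cannot reach
    the API with corporate data."""
    headers = dict(request_headers)
    headers["anthropic-allowed-org-ids"] = ALLOWED_ORG_IDS
    return headers

outbound = enforce_tenant_restriction({"user-agent": "claude-client/1.0"})
assert outbound["anthropic-allowed-org-ids"] == ALLOWED_ORG_IDS
```

Overwriting rather than appending matters: a client that sets its own value for the header must still leave the proxy with only the corporate tenant allowed.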

Conclusion

The shared responsibility model remains the essential map for cloud security, but AI agents are a new, unexplored territory on that map. In IaaS, you are the sole defender, responsible for every layer of the agent’s stack. In SaaS, you must become a master of configuration and data governance, as the provider holds the infrastructure keys.

The core principles from 7312.us—least privilege, continuous monitoring, encryption, and a zero-trust mindset—are more critical than ever. By starting with a conservative posture, locking down default permissions, and building robust monitoring and audit capabilities, organizations can harness the power of AI agents without compromising their cloud security.

References

  1. 7312.us. (2026, March 19). Security Risks and Shared Responsibility in Cloud Migration: A Technical Analysis for IT Professionals. Retrieved from https://7312.us/2026/03/19/security-risks-and-shared-responsibility-in-cloud-migration-a-technical-analysis-for-it-professionals/
  2. Anchor Cybersecurity. (2025, April 4). Shared Responsibility in the Cloud: Misconceptions & Risk Scenarios Explained. Retrieved from https://anchorcybersecurity.com/blog/2025-04-04-Shared-Responsibility-Model/
  3. TechTarget. (n.d.). The cloud shared responsibility model for IaaS, PaaS and SaaS. Retrieved from https://www.techtarget.com/searchcloudcomputing/feature/The-cloud-shared-responsibility-model-for-IaaS-PaaS-and-SaaS
  4. NSA. (2024, March 7). Top Ten Cloud Security Mitigation Strategies. Retrieved from https://media.defense.gov/2024/Mar/07/2003407860/-1/-1/0/CSI-CloudTop10-Mitigation-Strategies.PDF
  5. Wiz. (n.d.). Cloud Security Best Practices: 22 Steps for 2026. Retrieved from https://www.wiz.io/academy/cloud-security-best-practices
  6. Aikido. (2025). Cloud Security: The Complete 2025 Guide. Retrieved from https://www.aikido.dev/blog/cloud-security-guide
  7. Security Compass. (n.d.). What Are the Top Security Risks During Cloud Migration? Retrieved from https://www.securitycompass.com/whitepapers/what-are-the-top-security-risks-during-cloud-migration/
  8. Net Results Group. (2025). Security Considerations for Cloud-Based Applications in 2025. Retrieved from https://netresultsgroup.com/security-considerations-for-cloud-based-applications/
  9. Check Point Software. (n.d.). Cloud Migration Risks. Retrieved from https://www.checkpoint.com/cyber-hub/cloud-security/what-is-cloud-migration/cloud-migration-risks/