Enabling Claude Code Through Enterprise Agent Platform (formerly Vertex AI)
- Shaun Bradridge
Why Your Developers Are Already Using AI and What You Can Do About It
There's a pattern playing out in engineering teams right now, and it's happening everywhere. Developers aren't waiting for IT to sign off on an AI coding tool. They're already using browser extensions, personal accounts and consumer AI tools on their work machines. Today. Quietly. No ticket, no policy, no one asked.
So for platform and security teams, the question isn't really whether AI-assisted development is happening in your org. It almost certainly is. The real question is whether you have any visibility into it at all.
The Shadow AI Problem
When enterprises don't give developers a proper path to AI tooling, they find their own. That means unknown data residency, no audit trail, inconsistent access controls and real exposure around credentials and proprietary code. Nobody's trying to be malicious. It's just what happens when powerful tools exist and deadlines are real. Blanket prohibition doesn't really work either. The demand doesn't go away. It just goes underground. That's why more and more enterprises are trying a different approach: routing Claude through Google Cloud's Enterprise Agent Platform, bringing frontier model capability inside the governance perimeter instead of trying to lock it out entirely.
What Enterprise Agent Platform Actually Gives You
The whole point of the Enterprise Agent Platform approach is that you're not building a separate governance stack from scratch. You're anchoring AI usage inside the controls you already have.
Identity and access management stays where it is. You can restrict Claude access by team, environment or workload type using the same IAM patterns your platform team already runs. No shared API keys floating around. No vendor-specific portals to manage. Every AI action maps back to a real user or workload.
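To make that concrete, here's a minimal sketch using the Anthropic SDK's Vertex client. The project ID, region and model ID below are placeholders to adapt to your own setup; the point is that authentication rides on Application Default Credentials tied to an IAM identity, not on a key anyone can copy.

```python
# pip install "anthropic[vertex]"
from anthropic import AnthropicVertex

# No API key anywhere: the client authenticates via Application
# Default Credentials, so every call is attributed to the developer's
# own IAM identity (or the workload's service account) and lands in
# Cloud Audit Logs like any other GCP API call.
client = AnthropicVertex(
    project_id="my-platform-project",  # placeholder GCP project
    region="us-east5",                 # a region where Claude is enabled
)

message = client.messages.create(
    model="claude-sonnet-4@20250514",  # example model ID; check Model Garden
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain this stack trace: ..."}],
)
print(message.content[0].text)
```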
Network and data controls work the same way too. Private Service Connect, VPC Service Controls, Cloud Audit Logs, CMEK: all of it applies to AI traffic just like any other GCP workload. For regulated industries that's not a nice-to-have; it's the thing that makes deployment possible at all.
And billing finally becomes visible. Without central procurement, AI spend gets opaque fast. People spin up experiments, forget to wind them down and suddenly you've got dozens of personal subscriptions compounding across the org. Centralising through GCP means you can allocate costs by project and team, set budgets and actually catch unusual usage before it blows out.
Building the Control Plane
The architecture that works best places a lightweight internal gateway between your developers and Enterprise Agent Platform. That's where the real leverage is.
The gateway handles authentication, prompt filtering, DLP scanning, rate limiting, logging and model routing all in one place. It becomes the control plane for AI usage across the whole org and it's what makes governance something you can actually scale rather than just chase.
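To show the shape of it, here's a deliberately stripped-down Python sketch, assuming FastAPI for the HTTP layer. Everything in it (the endpoint path, the identity header, the rate limit, the DLP rules) is illustrative; a real gateway would back the rate limiter and logs with shared infrastructure rather than process memory.

```python
# pip install fastapi "anthropic[vertex]"
import logging
import time
from collections import defaultdict

from anthropic import AnthropicVertex
from fastapi import FastAPI, HTTPException, Request

app = FastAPI()
log = logging.getLogger("ai-gateway")
client = AnthropicVertex(project_id="platform-project", region="us-east5")  # placeholders

RATE_LIMIT = 30  # requests per user per minute; an example policy
BLOCKED_MARKERS = ("BEGIN RSA PRIVATE KEY", "AKIA")  # toy DLP rules
_recent: dict[str, list[float]] = defaultdict(list)  # in-memory; use Redis for real


@app.post("/v1/complete")
async def complete(request: Request):
    # 1. Authentication: trust an identity header set by your IAP or
    #    reverse proxy, never a client-supplied value.
    user = request.headers.get("x-authenticated-user")
    if not user:
        raise HTTPException(status_code=401, detail="unauthenticated")

    # 2. Rate limiting per user.
    now = time.monotonic()
    window = [t for t in _recent[user] if now - t < 60]
    if len(window) >= RATE_LIMIT:
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    _recent[user] = window + [now]

    # 3. Prompt filtering / DLP: block obvious secret material before
    #    it leaves the perimeter.
    body = await request.json()
    prompt = body.get("prompt", "")
    if any(marker in prompt for marker in BLOCKED_MARKERS):
        log.warning("blocked prompt from %s: possible secret", user)
        raise HTTPException(status_code=400, detail="blocked by DLP policy")

    # 4. Model routing: the gateway decides the model, not the caller.
    model = "claude-sonnet-4@20250514"  # example model ID

    # (A sync call inside an async handler blocks the event loop; fine
    # for a sketch, use the SDK's async client in production.)
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )

    # 5. Logging: enough to reconstruct who asked what, and when.
    log.info("user=%s model=%s out_tokens=%d", user, model, response.usage.output_tokens)
    return {"completion": response.content[0].text}
```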
On the developer side, access should come through approved IDE extensions, internal portals, secure CLI tooling or managed CI/CD integrations rather than handing out raw API keys. You want a sanctioned path that's easy enough that people actually use it instead of going around it.

The Controls That Actually Matter
Most teams spend too much time thinking about the model and not enough time thinking about the controls around it. Here's where the actual risk lives.
Data classification is the biggest gap. Not every codebase should be anywhere near an AI tool, and treating all repos the same is the most common mistake I see. A simple four-tier framework covering public/internal, confidential, regulated and crown-jewel systems, with matching access policies for each, gets you a long way.
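As a sketch, that framework can be as small as an enum and a policy table. The tier names and rules here are examples, not a standard:

```python
# Hypothetical encoding of the four-tier framework described above.
from enum import Enum


class Tier(Enum):
    PUBLIC_INTERNAL = 1  # open source, docs, internal tooling
    CONFIDENTIAL = 2     # product code, business logic
    REGULATED = 3        # PII, payments, health data
    CROWN_JEWEL = 4      # core IP, security-critical systems


# Per-tier policy: is AI tooling allowed against this repo at all,
# and if so, under what conditions?
AI_POLICY = {
    Tier.PUBLIC_INTERNAL: {"ai_allowed": True, "conditions": "none"},
    Tier.CONFIDENTIAL: {"ai_allowed": True, "conditions": "dlp-scan"},
    Tier.REGULATED: {"ai_allowed": True, "conditions": "dlp-scan + approval"},
    Tier.CROWN_JEWEL: {"ai_allowed": False, "conditions": "n/a"},
}


def ai_allowed(tier: Tier) -> bool:
    return AI_POLICY[tier]["ai_allowed"]
```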
Secret and credential exposure goes up a lot in AI coding workflows. Developers move fast, context switch constantly and paste things they shouldn't. Pre-prompt sanitisation, output scanning, git hooks and CI validation need to be in place before you scale this out.
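A pre-prompt sanitiser can start out as a handful of regexes run gateway-side before anything leaves your network. These patterns are illustrative; in practice you'd lean on a maintained ruleset like the ones gitleaks or git-secrets ship with.

```python
import re

# Illustrative patterns only; real rulesets are much larger.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),    # PEM private key header
    re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*\S+"),  # generic key=value secrets
]


def sanitise(prompt: str) -> str:
    """Redact anything that looks like a credential before it leaves."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```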
Logging doesn't mean storing everything forever, but you do need enough to support incident response and detect abuse: user identity, repo context, model used, token volume, timestamps and risk signals at a minimum.
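One way to pin that down is a structured event emitted on every request. The field names here are suggestions, not a schema from any particular tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIUsageEvent:
    user: str               # resolved IAM identity, never a shared account
    repo: str               # repo or project context the prompt came from
    model: str              # exact model ID that served the request
    input_tokens: int
    output_tokens: int
    risk_flags: list[str] = field(default_factory=list)  # e.g. ["dlp-redaction"]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```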
And identity-aware access is just the baseline: short-lived credentials, IAM-bound service accounts and per-team access boundaries. Every AI action should trace back to someone.
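With google-auth, minting short-lived credentials bound to a per-team service account is only a few lines. The service account name below is a placeholder, and depending on your client library you can usually pass the resulting credentials in explicitly so each team's traffic carries its own identity.

```python
# pip install google-auth
import google.auth
from google.auth import impersonated_credentials

# Start from the caller's own Application Default Credentials...
source_creds, _ = google.auth.default()

# ...and impersonate the team-scoped service account for 15 minutes.
team_creds = impersonated_credentials.Credentials(
    source_credentials=source_creds,
    target_principal="ai-payments-team@platform-project.iam.gserviceaccount.com",
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    lifetime=900,  # seconds; tokens expire on their own
)
```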
How to Roll It Out Without Creating a Mess
The teams that have got this right all started deliberately rather than trying to go wide straight away.
Kick off with a controlled pilot. One engineering team, non-sensitive repos, read-only assistance, logging on from day one. At this stage you're not chasing productivity numbers. You're watching for failure modes, security observations and how developers actually behave with the tooling.
Phase two is about building the platform. Shared AI gateway, IAM standards, monitoring dashboards, quota management and a proper onboarding flow. Don't scale through exceptions; that creates technical debt that's painful to unwind.
Phase three is where you bring in policy automation. DLP enforcement, repo classification, automated approvals, context-aware restrictions and policy as code. The controls need to be automated before you go broad. If your governance depends on developers remembering a checklist, it's going to fail.
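Policy as code doesn't have to start big either. Even a single automated gate, combining the repo tiers from earlier with scan results, beats a checklist; the function below is a toy version of that idea, with made-up tier names.

```python
def check_ai_policy(repo_tier: str, dlp_scan_passed: bool) -> bool:
    """Return True if AI assistance is permitted for this request."""
    if repo_tier == "crown_jewel":
        return False                # never allowed, no exceptions path
    if repo_tier in ("regulated", "confidential"):
        return dlp_scan_passed      # allowed only behind a passing DLP scan
    return True                     # public/internal repos: allowed


# Wired into CI or the gateway, this fails closed by construction.
assert check_ai_policy("public_internal", dlp_scan_passed=False)
assert not check_ai_policy("crown_jewel", dlp_scan_passed=True)
```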
The Governance Advantage
The long term differentiator isn't going to be who has access to the best model. Everyone will have access to good models. What'll matter is who can operationalise AI safely, satisfy audit requirements, demonstrate impact and keep developer trust at the same time.
The orgs winning right now aren't the ones who moved fastest to buy something. They're the ones treating AI enablement as a platform engineering problem and building internal infrastructure that scales governance alongside adoption.
Done well, enabling Claude through Enterprise Agent Platform is more than just model access. It's a chance to modernise your developer platform, tighten governance and build reusable AI infrastructure that pays off over time.
The teams that get the controls right early will carry a real productivity edge forward without giving up their security posture.
What's been the biggest governance challenge you've hit trying to roll out AI coding tools internally? Keen to hear how other teams are navigating it.
