Every IT team evaluating AI platforms asks the same question in the first meeting: can we control who accesses what? The answer is always some version of "yes," and it falls apart under scrutiny.
Ask one more question and the reality surfaces. "Can a finance analyst's AI access only finance systems while a sales rep's AI accesses only CRM, with every action attributed to the specific user?" The answer is usually silence, or a redirect to a roadmap slide.
RBAC for AI is the most requested enterprise feature in the market right now. It is also the most faked.
What fake RBAC looks like
Most AI platforms offer some form of access control. When you look closely, it is workspace-level permissions dressed up as role-based access. Everyone on the workspace shares one connection to Salesforce, one connection to QuickBooks, one set of API keys. The "roles" control which features a user can see in the UI, not which systems the AI can reach on their behalf.
The common patterns we see across the market:
Shared API keys. One key per integration, shared across every user on the workspace. The audit trail shows "the Salesforce connection accessed a record." It does not show which person initiated the request or whether they should have had access to that data.
Feature toggles called "roles." Admin, editor, viewer. These control what the user sees in the interface. They do not control what data the AI can access, which systems it can reach, or what actions it can take. A "viewer" and an "admin" often hit the same underlying API with the same shared credentials.
Workspace-level isolation. Different teams get different workspaces. Better than nothing, but now you are managing five separate environments instead of one governed platform. No cross-team visibility, no unified audit trail, no centralized credential management.
Permission labels with no enforcement. The UI says "Finance" next to a user's name. The underlying system does not actually restrict which APIs or data that user's AI can access. The label is cosmetic.
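The shared-credential anti-pattern above can be sketched in a few lines. This is a hypothetical illustration (the names and key format are invented, not any vendor's API): every user's request flows through one workspace key, so the audit log can only ever name the connection, never the person.

```python
# Hypothetical sketch of the shared-credential anti-pattern.
WORKSPACE_KEY = "sk-shared-workspace-key"  # one key for everyone

audit_log = []

def fetch_record(user, system, record_id):
    # Every user authenticates with the same workspace key, so the
    # log entry cannot attribute the action to a specific person.
    audit_log.append({"actor": f"{system} connection", "record": record_id})
    return {"system": system, "record": record_id, "auth": WORKSPACE_KEY}

fetch_record("sarah@finance", "salesforce", "0015x000")
fetch_record("raj@sales", "salesforce", "0019x221")

# Both entries name the same actor: the workspace "connection" did it.
assert audit_log[0]["actor"] == audit_log[1]["actor"]
```

Whoever reads this log later has no way to answer "which person initiated the request," which is exactly the question compliance asks.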
| What most tools call RBAC | What real RBAC for AI requires |
| --- | --- |
| Shared API keys per workspace | Per-user credential isolation |
| UI feature toggles | Per-role system access boundaries |
| Workspace-level isolation | Per-action audit attribution |
| Permission labels, no enforcement | Revocation stops automations |
| Audit shows "the connection" | Audit shows the specific person |
What real RBAC for AI requires
AI RBAC is fundamentally different from SaaS RBAC. In a SaaS tool, RBAC controls which buttons you see and which reports you can run. The system does the work. You navigate it.
AI acts on your behalf. It connects to systems, reads data, writes data, and executes tasks. RBAC for AI means controlling what the AI can reach and what it can do, per user, per role, per system. That requires a completely different architecture.
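"Per user, per role, per system" reduces to a policy check that runs before any connector call. A minimal sketch, with illustrative role and system names (this is not any product's actual policy engine):

```python
# Hypothetical role -> allowed-systems policy, evaluated per request.
ROLE_SYSTEMS = {
    "finance_analyst": {"netsuite", "quickbooks"},
    "sales_rep": {"salesforce"},
}

def can_reach(role: str, system: str) -> bool:
    """True only if the user's role grants access to this system."""
    return system in ROLE_SYSTEMS.get(role, set())

# The finance analyst's AI reaches finance systems; the sales rep's
# AI reaches CRM; neither crosses the boundary.
assert can_reach("finance_analyst", "netsuite")
assert can_reach("sales_rep", "salesforce")
assert not can_reach("sales_rep", "quickbooks")
```

The check itself is trivial; the architectural work is making sure every AI action actually passes through it.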
Per-user credential isolation. Each user gets their own set of API keys, provisioned by IT, stored in an isolated vault. The AI connects to Salesforce using that specific user's credentials, not a shared workspace key. The finance analyst's AI accesses finance systems. The sales rep's AI accesses CRM. Nobody crosses boundaries because the credentials enforce the boundaries.
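A minimal sketch of how a per-user vault might enforce those boundaries. The class and key names are illustrative, not Orin's actual implementation; the point is that access control falls out of credential resolution: if no credential exists for a user-system pair, the request cannot proceed.

```python
# Hypothetical per-user credential vault: each user holds their own
# keys, provisioned by IT, and the AI resolves credentials per request.
class CredentialVault:
    def __init__(self):
        self._keys = {}  # (user, system) -> API key

    def provision(self, user, system, key):
        self._keys[(user, system)] = key

    def resolve(self, user, system):
        try:
            return self._keys[(user, system)]
        except KeyError:
            raise PermissionError(f"{user} has no credential for {system}")

vault = CredentialVault()
vault.provision("sarah@finance", "netsuite", "nk-sarah-123")
vault.provision("raj@sales", "salesforce", "sf-raj-456")

# The finance analyst's AI reaches NetSuite with her own key...
assert vault.resolve("sarah@finance", "netsuite") == "nk-sarah-123"
# ...and cannot cross into CRM: there is simply no credential to use.
try:
    vault.resolve("sarah@finance", "salesforce")
except PermissionError:
    pass
```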
Per-action audit attribution. Every AI action is logged and attributed to the specific user who initiated it. Not "the Salesforce connection accessed a record." "Sarah in finance asked for Q4 revenue by region at 2:14pm on Tuesday, and Orin returned 47 rows from NetSuite using her credentials."
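An audit entry with that level of attribution might look like the following sketch (field names are illustrative; the source's example of Sarah's Q4 query is used as sample data):

```python
# Hypothetical per-action audit entry: every field traces to the person.
from datetime import datetime, timezone

def audit_entry(user, system, action, rows):
    return {
        "user": user,            # who initiated the request
        "system": system,        # which system was reached
        "action": action,        # what the AI was asked to do
        "rows_returned": rows,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

entry = audit_entry("sarah@finance", "netsuite", "Q4 revenue by region", 47)
assert entry["user"] == "sarah@finance"  # the person, not "the connection"
```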
Real revocation. When someone leaves the company, their credential vault is disabled. Their automations stop immediately. Their access is revoked across every connected system in one action. No orphaned API keys, no lingering access, no cleanup project.
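The revocation behavior described above can be sketched as follows (a toy model with invented names, not a real offboarding API): deleting a user's vault entries makes every subsequent automation run under their identity fail closed, across all systems, in one action.

```python
# Hypothetical revocation: disabling a user's vault entries stops every
# automation that runs under their credentials, across all systems.
vault = {
    ("alice@corp", "salesforce"): "sf-alice-1",
    ("alice@corp", "netsuite"): "ns-alice-2",
    ("bob@corp", "salesforce"): "sf-bob-3",
}

def revoke(user):
    for key in [k for k in vault if k[0] == user]:
        del vault[key]  # no orphaned keys remain for this user

def run_automation(user, system):
    if (user, system) not in vault:
        raise PermissionError("credentials revoked; automation halted")
    return f"ran with {vault[(user, system)]}"

revoke("alice@corp")

# Alice's automations now fail closed on every connected system:
try:
    run_automation("alice@corp", "salesforce")
except PermissionError:
    pass
assert run_automation("bob@corp", "salesforce")  # others unaffected
```

With shared workspace keys there is nothing equivalent to revoke per person, which is why offboarding becomes a cleanup project.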
Why nobody built it
Per-user credential vaults are architecturally expensive. Every user needs isolated credential storage. Every API call needs credential injection at runtime based on who initiated the request. Every action needs per-user attribution in the audit log. The system needs to handle provisioning, rotation, and revocation at the individual level, not the workspace level.
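"Credential injection at runtime" means the key used for each call is looked up from whoever initiated the request, at the moment of the call. A minimal sketch under that assumption (function and key names are invented for illustration):

```python
# Hypothetical per-request credential injection: the connector never
# holds a key of its own; it receives the initiating user's key.
USER_KEYS = {"sarah@finance": "nk-sarah-123", "raj@sales": "sf-raj-456"}

def call_api(initiating_user, system, payload):
    key = USER_KEYS.get(initiating_user)
    if key is None:
        raise PermissionError("no credential provisioned for this user")
    # The call goes out under the user's own key, never a shared one.
    return {"system": system, "auth": key, "payload": payload}

resp = call_api("sarah@finance", "netsuite", {"query": "q4_revenue"})
assert resp["auth"] == "nk-sarah-123"
```

The expense is not this lookup; it is doing it correctly on every code path, plus provisioning, rotation, and revocation of all those individual keys.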
Shared workspace credentials are dramatically simpler to build. One key per integration, one connection everyone shares, one audit entry that says "the workspace accessed the data." It works for a demo. It works for a small team experimenting. It does not work for a company deploying AI across departments where compliance requires per-user traceability.
Most AI platforms started as small-team tools and grew. Their credential architecture was designed for five people sharing a workspace, not 200 people with different roles, different access needs, and different compliance obligations. Retrofitting per-user isolation into a shared-credential architecture is a rewrite, not a feature.
Orin was built with per-user isolation from day one. The credential vault is the architectural foundation that every other feature is built on. Permissions, audit trails, model governance, and spending controls all trace back to the vault. That is why real RBAC for AI is possible in Orin and not in platforms that started with shared credentials.