Operator
Anthropic's term for a company or developer that builds a product or service on top of Claude via the API, as distinct from the end user.
Anthropic uses a three-tier trust model: Anthropic sets the absolute rules, operators customize Claude's behavior within those rules for their product, and users interact with the resulting application.
An operator is any entity that:

- Accesses Claude through the Anthropic API (not through claude.ai directly)
- Configures Claude's system prompt, tools, and permissions for their specific use case
- Is responsible for how their product deploys Claude to end users
What operators can do:

- Expand Claude's defaults (e.g. enable explicit content for appropriate adult platforms)
- Restrict Claude's defaults (e.g. limit responses to topics relevant to their product)
- Grant users additional permissions, up to but not exceeding their own operator permissions (modeled in the sketch after this list)
- Set a custom persona, name, and instructions
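The permission rule in the third item is easy to model: a user's effective permissions are whatever the operator grants, capped at what the operator itself holds. A minimal Python sketch; the helper function and permission names are hypothetical, not part of any Anthropic API:

```python
# Hypothetical permission model: the function and permission names are
# illustrative only, not an actual Anthropic API.
def effective_user_permissions(
    operator_perms: set[str], granted_to_user: set[str]
) -> set[str]:
    """A user's permissions can never exceed the operator's own."""
    return operator_perms & granted_to_user


operator_perms = {"explicit_content", "web_search", "code_execution"}
granted = {"web_search", "code_execution", "admin_tools"}  # operator holds no admin_tools

print(effective_user_permissions(operator_perms, granted))
# admin_tools is dropped; only web_search and code_execution survive
```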
Examples of operators: a legal tech company building a contract review tool, a healthcare company building a patient intake assistant, an education company building a tutoring tool.
Understanding the operator/user distinction is important for anyone building AI products: your system prompt establishes your operator-level configuration, and you control what latitude your end users have within it.
Example
A company builds a customer support chatbot using the Claude API. They are the operator — their system prompt restricts Claude to support topics. Their customers are the users, interacting within those operator-defined boundaries.
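A minimal sketch of that setup using the Anthropic Python SDK; the company name, prompt wording, and model id are illustrative:

```python
import anthropic

# Operator-level configuration: the system prompt restricts Claude to
# support topics. End users interact within this boundary; nothing a
# user types can widen it.
SUPPORT_SYSTEM_PROMPT = (
    "You are the customer support assistant for WidgetCo. Help with "
    "orders, returns, and product troubleshooting only. If a request is "
    "outside WidgetCo support, say so and point the user to the right channel."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=512,
    system=SUPPORT_SYSTEM_PROMPT,  # operator-defined boundary
    messages=[{"role": "user", "content": "My order arrived damaged, what now?"}],
)
print(response.content[0].text)
```

Everything the end user sends arrives through `messages`; the operator-controlled `system` parameter is what makes this deployment a support bot rather than a general-purpose assistant.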
Related terms
System Prompt
A special instruction given to an AI model that defines its behavior and personality for all subsequent user interactions.
Agentic AI
An AI system that can break down goals into steps, use tools, and iterate until a goal is achieved.
Safety Filter / Content Filter
A layer that blocks or modifies AI outputs that violate safety policies (hate speech, explicit content, etc.).