
MCP Server Setup

The MCP (Model Context Protocol) server is the control plane for Tentacular. It runs inside your Kubernetes cluster and provides a secure API that the tntc CLI and AI agents use to manage tentacles. The CLI has no direct Kubernetes API access — all cluster operations route through this server.

  • Deploys tentacles — applies workflow manifests (Deployment, Service, ConfigMap, NetworkPolicy, Secret)
  • Manages namespaces — creates namespaces with Pod Security Admission labels and default-deny NetworkPolicy
  • Runs tentacles — triggers execution via HTTP POST to workflow /run endpoints
  • Schedules cron — internal cron scheduler reads deployment annotations and fires triggers
  • Health monitoring — queries workflow health endpoints, classifies as Green/Amber/Red
  • Security audit — validates RBAC, NetworkPolicy, and PSA for deployed tentacles
  • Exoskeleton provisioning — runs registrars for Postgres, NATS, RustFS when enabled
  • Cluster profiling — generates capability snapshots for agent-informed design

Prerequisites:

  • Kubernetes cluster (any distribution: EKS, GKE, AKS, k0s, k3s, kind)
  • kubectl configured with cluster access
  • Helm 3+

Install with Helm:

# Clone the MCP server repo
git clone git@github.com:randybias/tentacular-mcp.git
# Install the MCP server (token auto-generated if not provided)
helm install tentacular-mcp ./tentacular-mcp/charts/tentacular-mcp \
  --namespace tentacular-system --create-namespace

Note: If auth.token is not specified, the Helm chart auto-generates a secure 64-character token. The token is preserved across helm upgrade. To provide your own token, add --set auth.token="$(openssl rand -hex 32)". To retrieve the auto-generated token:

kubectl get secret -n tentacular-system tentacular-mcp-auth -o jsonpath='{.data.token}' | base64 -d

Verify the deployment:

# Check the pod is running
kubectl get pods -n tentacular-system
# Check the service
kubectl get svc -n tentacular-system

The MCP server exposes its API on port 8080. Access depends on your cluster setup:

  • NodePort: http://<node-ip>:30080/mcp (default)
  • LoadBalancer: http://<lb-ip>:8080/mcp
  • Port-forward: kubectl port-forward -n tentacular-system svc/tentacular-mcp 8080:8080

Add the MCP endpoint to your CLI configuration. If using OIDC, mcp_token_path is optional (used as admin fallback). If using bearer-token only, point it at a file containing the token:

~/.tentacular/config.yaml
environments:
  dev:
    mcp_endpoint: http://<node-ip>:30080/mcp
    mcp_token_path: ~/.tentacular/mcp-token  # optional with OIDC

Then verify connectivity:

tntc cluster check --env dev

The MCP server supports three authentication paths:

| Method | Client | Deployer Provenance | Flow |
| --- | --- | --- | --- |
| CLI OIDC | tntc CLI | Yes — records who deployed | Device-code grant (browser) |
| Claude Code OAuth | Claude Code | Yes — records who deployed | Authorization-code + PKCE (browser) |
| Bearer tokens | Any HTTP client | No — anonymous deploys | Static token file |

All OIDC-authenticated requests carry deployer identity (email, subject, display name) which is recorded as ownership annotations on namespaces and tentacles. Bearer-token requests have no identity and bypass authorization.
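As a sketch, mapping an OIDC identity onto ownership annotations might look like the following. The exact annotation keys here are hypothetical illustrations; only the tentacular.io/ prefix is documented on this page.

```python
def ownership_annotations(claims: dict) -> dict:
    """Map OIDC token claims (email, subject, display name) to ownership
    annotations recorded on namespaces and tentacles.

    NOTE: the annotation keys below are hypothetical examples under the
    documented tentacular.io/ prefix, not the server's actual keys.
    """
    return {
        "tentacular.io/owner-email": claims["email"],
        "tentacular.io/owner-subject": claims["sub"],
        "tentacular.io/owner-name": claims.get("name", ""),
    }

claims = {"email": "dev@example.com", "sub": "uuid-1234", "name": "Dev A"}
print(ownership_annotations(claims))
```

A bearer-token request carries no such claims, which is why resources it creates end up ownerless.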

For deployer provenance and SSO, configure OIDC via the Helm chart. When using the tentacular-platform chart, the Keycloak realm and client are created automatically via --import-realm — no manual Keycloak configuration is needed.

helm upgrade tentacular-mcp oci://ghcr.io/randybias/tentacular-mcp \
  --namespace tentacular-system \
  --set auth.bearerToken="$MCP_TOKEN" \
  --set exoskeletonAuth.enabled=true \
  --set exoskeletonAuth.existingSecret=tentacular-mcp-exoskeleton-auth \
  --set externalURL=https://mcp.example.com

The externalURL value enables RFC 9728 Protected Resource Metadata — the server advertises its authorization server at /.well-known/oauth-protected-resource, allowing OAuth clients to auto-discover how to authenticate. When externalURL is not set, clients must be configured with the auth server URL manually.
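The client side of this discovery can be sketched as follows. The metadata shape matches the expected response shown later on this page; fetching is left out, so `sample` stands in for the body an HTTP GET would return.

```python
import json

def auth_server_from_metadata(metadata: dict) -> str:
    """Pick the authorization server advertised by RFC 9728 metadata.

    An OAuth client GETs <externalURL>/.well-known/oauth-protected-resource
    and parses the JSON body into `metadata` before calling this.
    """
    servers = metadata.get("authorization_servers", [])
    if not servers:
        raise ValueError("no authorization_servers advertised; set externalURL")
    issuer = servers[0]
    # OIDC discovery then lives under the issuer's own well-known path
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# Stand-in for the body returned by the MCP server when externalURL is set
sample = json.loads("""{
  "resource": "https://mcp.example.com/mcp",
  "authorization_servers": ["https://keycloak.example.com/realms/tentacular"]
}""")

print(auth_server_from_metadata(sample))
```

Without externalURL there is no metadata to fetch, which is why clients then need the authorization server URL configured by hand.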

The default Keycloak realm configuration sets access token lifetime to 12 hours, SSO session idle timeout to 12 hours, and SSO session max lifetime to 24 hours. This allows agents to operate for extended sessions without requiring human re-authentication via the device flow.

Then configure the CLI environment:

environments:
  prod:
    mcp_endpoint: http://prod-mcp:30080/mcp
    oidc_issuer: https://keycloak.example.com/realms/tentacular
    oidc_client_id: tentacular-mcp
    oidc_client_secret: your-secret

Log in and confirm your identity:

tntc login --env prod
tntc whoami --env prod

Claude Code connects to the MCP server as a remote HTTP MCP server and authenticates via OAuth 2.0. When externalURL is configured (see above), auth discovery is automatic. Configure .mcp.json in your workspace:

{
  "mcpServers": {
    "tentacular-mcp": {
      "type": "http",
      "url": "http://<mcp-endpoint>/mcp",
      "oauth": { "clientId": "tentacular-mcp" }
    }
  }
}

On first connection, Claude Code:

  1. Fetches /.well-known/oauth-protected-resource from the MCP server
  2. Discovers the Keycloak authorization server URL
  3. Opens a browser for Keycloak login
  4. Stores the resulting tokens in the macOS system keychain
  5. Attaches the JWT to all subsequent MCP requests

The JWT carries the same OIDC identity as tntc login — namespaces and tentacles created via Claude Code have proper ownership annotations.

If externalURL is not configured, add authServerMetadataUrl to the OAuth config as a manual fallback:

{
  "mcpServers": {
    "tentacular-mcp": {
      "type": "http",
      "url": "http://<mcp-endpoint>/mcp",
      "oauth": {
        "clientId": "tentacular-mcp",
        "authServerMetadataUrl": "https://keycloak.example.com/realms/tentacular/.well-known/openid-configuration"
      }
    }
  }
}

Note: The Keycloak client must have all scopes that Claude Code requests as optional client scopes. Claude Code requests all scopes listed in the Keycloak discovery document. If scopes are missing, you will see an invalid_scope error during authentication.
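One way to pre-check for this failure is to diff the scopes advertised in the discovery document against the scopes the Keycloak client actually has. The scope lists below are illustrative, not taken from a real realm.

```python
def missing_client_scopes(discovery_scopes, client_scopes):
    """Scopes Claude Code will request (everything in the discovery
    document's scopes_supported) that the Keycloak client lacks as
    default or optional client scopes."""
    return sorted(set(discovery_scopes) - set(client_scopes))

# Illustrative values: discovery scopes_supported vs. the client's scopes
discovery = ["openid", "email", "profile", "roles", "web-origins"]
client = ["openid", "email", "profile"]

# Anything returned here must be added to the client as an optional
# client scope, or authentication fails with invalid_scope
print(missing_client_scopes(discovery, client))
```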

After deploying with externalURL, verify the discovery endpoint:

# Check the well-known endpoint returns correct metadata
curl -s http://<mcp-endpoint>/.well-known/oauth-protected-resource | jq .
# Check that 401 responses include WWW-Authenticate header
curl -sv http://<mcp-endpoint>/mcp 2>&1 | grep WWW-Authenticate

Expected metadata response:

{
  "resource": "https://mcp.example.com/mcp",
  "authorization_servers": ["https://keycloak.example.com/realms/tentacular"],
  "scopes_supported": ["openid", "email", "profile"],
  "bearer_methods_supported": ["header"],
  "resource_name": "Tentacular MCP Server"
}

The MCP server exposes tools via the Model Context Protocol (JSON-RPC 2.0 over Streamable HTTP). These tools are organized into functional groups:

| Group | Tools | Purpose |
| --- | --- | --- |
| Enclave | enclave_provision, enclave_info, enclave_list, enclave_sync, enclave_deprovision | Enclave (tenant workspace) lifecycle |
| Workflow | wf_apply, wf_remove, wf_list, wf_status, wf_run, etc. | Tentacle lifecycle |
| Workflow Health | wf_health, wf_health_ns | Per-tentacle health |
| Health | health_cluster_summary, health_nodes, health_ns_usage | Cluster monitoring |
| Audit | audit_rbac, audit_netpol, audit_psa | Security validation |
| Cluster | cluster_preflight, cluster_profile | Cluster capabilities + exoskeleton service discovery |
| Module Proxy | proxy_status | ESM proxy |

See MCP Tools Reference for the complete list.
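At the wire level, each tool invocation is a JSON-RPC 2.0 tools/call request. A minimal sketch of building one follows; the tool name wf_list comes from the table above, while the argument shape is an assumption for illustration.

```python
import json

def jsonrpc_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP tools/call request body (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# An HTTP client would POST this body to http://<mcp-endpoint>/mcp with an
# Authorization: Bearer <token> header (argument shape is illustrative)
body = jsonrpc_tool_call("wf_list", {"namespace": "my-enclave"})
print(body)
```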

The MCP server includes an internal cron scheduler (robfig/cron/v3). When a tentacle is deployed with cron triggers, the schedule is stored as a tentacular.io/cron-schedule annotation on the Deployment. The scheduler reads these annotations and sends an HTTP POST to the tentacle's /run endpoint on schedule. No CronJob resources are created.
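In outline, the read-annotation-then-fire loop looks like this (a Python stand-in for the Go implementation; the metadata shape mirrors the Kubernetes Deployment object, while the in-cluster service URL is an assumption):

```python
def cron_jobs_from_deployments(deployments):
    """Extract (schedule, run_url) pairs from Deployment annotations.

    The real scheduler (robfig/cron/v3, in Go) registers each schedule and
    fires an HTTP POST to the tentacle's /run endpoint when it matches.
    """
    jobs = []
    for d in deployments:
        meta = d["metadata"]
        schedule = meta.get("annotations", {}).get("tentacular.io/cron-schedule")
        if schedule:
            # Illustrative in-cluster URL; the actual Service name/port may differ
            url = f"http://{meta['name']}.{meta['namespace']}.svc:8080/run"
            jobs.append((schedule, url))
    return jobs

deployments = [{
    "metadata": {
        "name": "daily-report",
        "namespace": "my-enclave",
        "annotations": {"tentacular.io/cron-schedule": "0 6 * * *"},
    }
}]
print(cron_jobs_from_deployments(deployments))
```

Deployments without the annotation are simply skipped, which is why a missing annotation shows up as "cron triggers not firing" in the troubleshooting table below.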

The MCP server operates with scoped RBAC — it can only manage tentacular-related resources (Deployments, Services, ConfigMaps, Secrets, NetworkPolicies, Namespaces with specific labels). It cannot access resources outside its scope.

When OIDC authentication is configured, the MCP server enforces POSIX-like permissions on workflow operations. Authorization is enabled by default. The default mode for new deployments is group-read (rwxr-x---): owner has full access, group members can read and execute.
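The POSIX-like check can be illustrated with mode bits: group-read (rwxr-x---) is 0o750. The helper below is an illustrative model of the semantics, not the server's actual code.

```python
def allowed(mode: int, op: str, is_owner: bool, in_group: bool) -> bool:
    """Evaluate a POSIX-style mode (e.g. 0o750) for read/write/execute.

    Owner bits, then group bits, then 'other' bits — so under rwxr-x---
    the owner has full access and group members may read and execute.
    """
    bit = {"read": 4, "write": 2, "execute": 1}[op]
    if is_owner:
        return bool((mode >> 6) & bit)
    if in_group:
        return bool((mode >> 3) & bit)
    return bool(mode & bit)

GROUP_READ = 0o750  # rwxr-x---, the default for new deployments

print(allowed(GROUP_READ, "execute", is_owner=False, in_group=True))  # group may run
print(allowed(GROUP_READ, "write", is_owner=False, in_group=True))    # group may not modify
```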

To disable authorization entirely (kill switch), set the environment variable on the MCP server:

TENTACULAR_AUTHZ_ENABLED=false

Bearer-token requests always bypass authorization regardless of this setting — permissions are only evaluated for OIDC-authenticated requests.

Note: The tentacular.dev/* annotation namespace has been replaced by tentacular.io/*. Existing deployments using old annotations will not have authorization enforced until redeployed.

See the Authorization guide for the full permission model documentation.

After installation, confirm that:

  • kubectl get pods -n tentacular-system shows the MCP pod running
  • tntc cluster check passes all checks
  • tntc list returns successfully (may show empty list)
  • tntc cluster profile --save generates a cluster profile
| Symptom | Cause | Fix |
| --- | --- | --- |
| Pod not starting | Missing RBAC or configuration | Check kubectl logs -n tentacular-system |
| connection refused | Wrong endpoint URL or pod not ready | Verify service and endpoint URL |
| 401 Unauthorized | Token mismatch | Ensure CLI token matches Helm auth.bearerToken |
| Cron triggers not firing | Annotation missing | Check tentacular.io/cron-schedule annotation on the Deployment |
| OIDC errors | Wrong issuer or client config | Verify OIDC settings match your identity provider |
| invalid_scope during Claude Code auth | Keycloak client missing scopes | Claude Code requests all scopes from discovery; add missing ones as optional client scopes |
| Claude Code authentication error | No auth discovery endpoint | Set externalURL in Helm values, or add authServerMetadataUrl to .mcp.json |
| resource has no owner | Resource created via bearer token | Bearer tokens create ownerless resources; re-create with OIDC identity |
| permission denied on workflow operations | Authz mode too restrictive | Check enclave permissions via enclave_info, or set TENTACULAR_AUTHZ_ENABLED=false to disable |