Enterprise teams automate repetitive work with AI agents — but the results are fragile, ungoverned, and expensive. Agents redo the same tasks from scratch every time, burning tokens on work that should be done once and run forever. Traditional workflow tools like n8n are designed for humans clicking through GUIs, not agents working through conversation. And when agents do build automation on their own — shell scripts, cron jobs, arbitrary code — there’s no way to control what they access, making it impossible to trust in production.
The result: teams either waste money on repetitive agent work, or they get automation they can’t secure, audit, or hand off.
Tentacular lets teams turn repetitive work into secure, durable, reusable workflows — called tentacles. If you do something once, ask an AI agent to do it. If you’re going to do it again and again, have the agent build a tentacle that runs on its own — on a schedule, on a trigger, or on demand.
Teams work through Slack. A team’s channel becomes their enclave — a private workspace with its own compute, storage, and services. The Kraken, Tentacular’s AI agent, handles everything: building workflows, deploying them, and managing them over time. No one needs to touch a terminal, write infrastructure code, or learn Kubernetes.
Tentacular takes a very different approach to agentic workflows. First, it is built for teams: workflows are created and managed together, through natural conversation. Second, it is designed to be as easy for agents to drive as it is for humans to use. Third, it makes workflows secure by default. Once is an accident; twice is a coincidence; three times is a pattern. Patterns should be enshrined into trusted workflows. That’s Tentacular.
From workflow systems (n8n, Argo, Tekton): Tentacular is agent-first, not human-first. There’s no GUI to configure and no predefined node library to navigate. Agents build exactly what the team needs through natural language — and each workflow is locked down to only the resources it declares. In testing, the same AI news roundup took hours via n8n but about 5 minutes with Tentacular.
From CI/CD pipelines (Dagger, Tekton, ArgoCD): Tentacles are durable business automations — competitor monitoring, report generation, health checks — not ephemeral build jobs. They persist, run on schedules, respond to events, and are governed by a security contract that limits what they can access.
From AI assistants (OpenClaw): Tentacular is an ideal companion to AI assistants, not a replacement. Assistants provide open-ended flexibility; Tentacular provides hardened, governed, durable workflows. An enterprise AI assistant could restrict workflow creation to tentacles, ensuring compliance and oversight for anything that touches sensitive data.
Team Workspaces
Each Slack channel becomes an enclave — a private workspace where the team builds, deploys, and manages workflows together. Membership, permissions, and services are all managed through Slack.
Security by Design
Every workflow declares exactly what it needs. The platform enforces that declaration at runtime, network, and kernel levels — making prompt injection, data exfiltration, and privilege escalation structurally difficult.
Natural Language, Not YAML
Teams describe what they want in plain English. The Kraken — or any AI agent using the Agent Skill — handles the design, coding, testing, and deployment.
Ready-Made Starting Points
A scaffold library of production-ready templates accelerates common patterns — news digests, health monitors, data pipelines — so agents aren’t starting from scratch.
The process is straightforward: describe the workflow in the team’s Slack channel; the Kraken designs, codes, and tests the tentacle; once approved, it deploys into the enclave and runs on a schedule, on a trigger, or on demand.
Here’s a real tentacle — an AI news roundup that fetches from 22 sources, filters, summarizes via LLM, and posts to Slack:
The tentacle declares its triggers (manual + weekly cron), the 4-node DAG (fetch → filter → rank → notify), and every external dependency in its contract (news sources, OpenAI API, Slack webhook). At deploy time, the platform derives NetworkPolicy, Deno permission flags, and secrets validation from the contract. Nothing undeclared is accessible.
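In code, such a tentacle might look like the sketch below. The type names and object shape are hypothetical illustrations, not Tentacular’s actual API, and the 22 news source hosts are collapsed into a single placeholder:

```typescript
// Hypothetical tentacle definition: triggers, a 4-node DAG, and a contract
// that declares every external dependency. Shapes are illustrative only.

interface TentacleContract {
  dependencies: {
    hosts: string[];   // allowed egress targets (derives NetworkPolicy)
    secrets: string[]; // required secrets, validated at deploy time
  };
}

interface TentacleNode {
  name: string;
  after?: string[];    // DAG edges: nodes this one depends on
}

interface Tentacle {
  name: string;
  triggers: { manual: boolean; cron?: string };
  nodes: TentacleNode[];
  contract: TentacleContract;
}

const aiNewsRoundup: Tentacle = {
  name: "ai-news-roundup",
  triggers: { manual: true, cron: "0 9 * * MON" }, // on demand + weekly
  nodes: [
    { name: "fetch" },
    { name: "filter", after: ["fetch"] },
    { name: "rank", after: ["filter"] },
    { name: "notify", after: ["rank"] },
  ],
  contract: {
    dependencies: {
      // placeholder for the 22 news sources, plus LLM and Slack endpoints
      hosts: ["feeds.example.com", "api.openai.com", "hooks.slack.com"],
      secrets: ["OPENAI_API_KEY", "SLACK_WEBHOOK_URL"],
    },
  },
};
```

Everything the workflow touches appears in `contract.dependencies`; an undeclared host or secret is simply unreachable at runtime.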
The contract is not documentation — it is the enforceable security policy. From a single contract.dependencies block, the system automatically derives the NetworkPolicy egress rules, the Deno runtime permission flags, and the deploy-time secrets validation.
Zero trust at every layer: runtime (Deno on distroless — no shell, no toolchain), network (default-deny NetworkPolicy), kernel (gVisor syscall interception), and Kubernetes (non-root, read-only filesystem, no SA token, dropped capabilities).
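As a sketch of that derivation, the following shows how a declared dependency block could be compiled into egress rules and Deno permission flags. The function names and rule strings are assumptions for illustration; only `--allow-net` and `--allow-env` are real Deno flags:

```typescript
// Illustrative compilation of a contract's dependency block into
// enforcement artifacts. Tentacular's real derivation may differ.

interface Dependencies {
  hosts: string[];   // declared egress targets
  secrets: string[]; // declared secret names
}

// Default-deny networking: only declared hosts become egress rules.
function deriveEgressRules(deps: Dependencies): string[] {
  return deps.hosts.map((h) => `allow tcp/443 to ${h}`);
}

// Runtime sandboxing: Deno permission flags scoped to the same declaration.
function deriveDenoFlags(deps: Dependencies): string[] {
  return [
    `--allow-net=${deps.hosts.join(",")}`,   // nothing undeclared is reachable
    `--allow-env=${deps.secrets.join(",")}`, // only declared secrets visible
  ];
}

const deps: Dependencies = {
  hosts: ["api.openai.com", "hooks.slack.com"],
  secrets: ["OPENAI_API_KEY"],
};
```

Because both the network layer and the runtime layer are generated from the same declaration, there is no gap between what the workflow says it needs and what it can actually touch.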
| Category | Feature | What It Means |
|---|---|---|
| Team Collaboration | Enclaves | Each Slack channel is a private workspace with its own compute, storage, and team membership |
| | The Kraken | Slack-native AI agent — teams interact through conversation, not terminals |
| | Transitive trust | The agent acts as the user, not as a service account — all actions are attributable |
| | POSIX-style permissions | Owner, member, and other roles with familiar read/write/execute semantics |
| Security | Contract-driven sandboxing | Workflows can only access what they declare — enforced at runtime, network, and kernel levels |
| | gVisor kernel isolation | Syscall interception makes privilege escalation structurally difficult |
| | Read-only containers | No shell, no toolchain, no way to modify code at runtime |
| | Default-deny networking | Each workflow gets only the egress rules its contract requires |
| Workflows | TypeScript DAG engine | Workflows are code, not config — agents write exactly what the team needs |
| | Sidecar containers | Native binaries (ffmpeg, Chromium, ML models) run alongside workflows in the same pod |
| | Shared node modules | Common code is shared across workflow nodes without duplication |
| | Scaffold library | Production-ready templates for common patterns — get started in minutes |
| | Cron and event triggers | Workflows run on schedules, respond to events, or run on demand |
| Infrastructure | Exoskeleton services | Postgres, S3-compatible storage, and optionally NATS and SPIRE — provisioned automatically per enclave |
| | In-cluster MCP server | Authenticated control plane with OAuth/SSO — agents and CLI both use the same API |
| | Git-backed state | Optional git monorepo as system of record for all workflow source, metadata, and encrypted secrets |
| | Local module cache | Supply-chain security with package pinning and air-gap readiness |
| | Multi-arch builds | All components build for both AMD64 and ARM64 |
**Why not just have the agent write a shell script?** Without a workflow contract, determining what resources are safe to access is impossible. Prompt injection could modify the script to access anything. The contract creates the enforceable security boundary.
**Why not use n8n or similar workflow systems?** Agents struggle with n8n’s massive configuration surface. In testing, the same AI news roundup took hours via n8n but about 5 minutes with Tentacular. And self-hosted n8n has no equivalent security sandboxing.
**Can’t I just run an AI assistant in a hardened sandbox?** You can, but hardening restricts data access — which reduces the assistant’s value. If instead the assistant deploys a verified tentacle to a trusted cluster, that tentacle can access sensitive data in a way the raw assistant cannot. The tentacle’s contract makes the access pattern auditable and enforceable.