Tentacular

Agentic Automation for Enterprise Teams — secure by design, token-efficient, and simple to use.

Enterprise teams automate repetitive work with AI agents — but the results are fragile, ungoverned, and expensive. Agents redo the same tasks from scratch every time, burning tokens on work that should be done once and run forever. Traditional workflow tools like n8n are designed for humans clicking through GUIs, not agents working through conversation. And when agents do build automation on their own — shell scripts, cron jobs, arbitrary code — there’s no way to control what they access, making it impossible to trust in production.

The result: teams either waste money on repetitive agent work, or they get automation they can’t secure, audit, or hand off.

Tentacular — Security-First Workflow Engine for Kubernetes

Tentacular lets teams turn repetitive work into secure, durable, reusable workflows — called tentacles. If you do something once, ask an AI agent to do it. If you’re going to do it again and again, have the agent build a tentacle that runs on its own — on a schedule, on a trigger, or on demand.

Teams work through Slack. A team’s channel becomes their enclave — a private workspace with its own compute, storage, and services. The Kraken, Tentacular’s AI agent, handles everything: building workflows, deploying them, and managing them over time. No one needs to touch a terminal, write infrastructure code, or learn Kubernetes.

Tentacular takes a different approach to agentic workflows. First, it is built for teams: workflows are created and managed together, through natural conversation. Second, it is simple for humans and equally easy for agents to drive. Third, workflows are secure by default. Once is an accident; twice is a coincidence; three times is a pattern, and patterns should be enshrined in trusted workflows. That's Tentacular.

From workflow systems (n8n, Argo, Tekton): Tentacular is agent-first, not human-first. There’s no GUI to configure and no predefined node library to navigate. Agents build exactly what the team needs through natural language — and each workflow is locked down to only the resources it declares. In testing, the same AI news roundup took hours via n8n but about 5 minutes with Tentacular.

From CI/CD pipelines (Dagger, Tekton, ArgoCD): Tentacles are durable business automations — competitor monitoring, report generation, health checks — not ephemeral build jobs. They persist, run on schedules, respond to events, and are governed by a security contract that limits what they can access.

From AI assistants (OpenClaw): Tentacular is an ideal companion to AI assistants, not a replacement. Assistants provide open-ended flexibility; Tentacular provides hardened, governed, durable workflows. An enterprise AI assistant could restrict workflow creation to tentacles, ensuring compliance and oversight for anything that touches sensitive data.

Team Workspaces

Each Slack channel becomes an enclave — a private workspace where the team builds, deploys, and manages workflows together. Membership, permissions, and services are all managed through Slack.

Security by Design

Every workflow declares exactly what it needs. The platform enforces that declaration at runtime, network, and kernel levels — making prompt injection, data exfiltration, and privilege escalation structurally difficult.

Natural Language, Not YAML

Teams describe what they want in plain English. The Kraken — or any AI agent using the Agent Skill — handles the design, coding, testing, and deployment.

Ready-Made Starting Points

A scaffold library of production-ready templates accelerates common patterns — news digests, health monitors, data pipelines — so agents aren’t starting from scratch.

The process is straightforward:

  1. A team creates a Slack channel and invites The Kraken
  2. Someone describes what they need: “Monitor our top 5 competitors’ pricing pages and post a summary every Monday”
  3. The Kraken asks clarifying questions, builds the workflow, tests it, and deploys it
  4. The workflow runs autonomously — on a schedule, on a trigger, or on demand
  5. The team iterates through conversation. No terminals, no code, no infrastructure.

Here’s a real tentacle — an AI news roundup that fetches from 22 sources, filters, summarizes via LLM, and posts to Slack:

Triggers
  • Manual: POST /run
  • Cron: 0 7 * * 1 (Mondays, 7 AM)

Nodes
  • fetch-feeds: 22 sources (RSS / Reddit / HN API)
  • filter-dedupe: deduplicate & filter (7-day window)
  • rank-summarize: LLM analysis; executive summary (500-800 words), top 20 ranked by freshness + agentic relevance
  • notify-slack: Slack Block Kit; posts the digest to the channel

Contract Dependencies
  • news-sources: dynamic-target, 0.0.0.0/0:443
  • openai-api: api.openai.com:443, bearer-token
  • slack-webhook: hooks.slack.com:443, bearer-token

The tentacle declares its triggers (manual + weekly cron), the 4-node DAG (fetch → filter → rank → notify), and every external dependency in its contract (news sources, OpenAI API, Slack webhook). At deploy time, the platform derives NetworkPolicy, Deno permission flags, and secrets validation from the contract. Nothing undeclared is accessible.
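The fetch → filter → rank → notify pipeline can be sketched in plain TypeScript. This is not Tentacular's actual workflow API; the node names match the example tentacle, but the `DagNode` shape, the stub node bodies, and the tiny executor are illustrative assumptions.

```typescript
// Minimal sketch of a 4-node DAG, assuming a hypothetical node shape.
type NodeFn = (input: unknown) => unknown;

interface DagNode {
  name: string;
  deps: string[]; // upstream node names
  run: NodeFn;
}

// Stubs standing in for the real fetch/filter/rank/notify logic.
const nodes: DagNode[] = [
  { name: "fetch-feeds",    deps: [],                 run: () => ["item-a", "item-b"] },
  { name: "filter-dedupe",  deps: ["fetch-feeds"],    run: (items) => (items as string[]).slice(0, 1) },
  { name: "rank-summarize", deps: ["filter-dedupe"],  run: (items) => `digest of ${(items as string[]).length} item(s)` },
  { name: "notify-slack",   deps: ["rank-summarize"], run: (digest) => `posted: ${digest}` },
];

// Tiny topological executor: a node runs once all its dependencies have results.
function runDag(dag: DagNode[]): Map<string, unknown> {
  const results = new Map<string, unknown>();
  const pending = [...dag];
  while (pending.length > 0) {
    const ready = pending.findIndex((n) => n.deps.every((d) => results.has(d)));
    if (ready < 0) throw new Error("cycle or missing dependency");
    const node = pending.splice(ready, 1)[0];
    const input =
      node.deps.length === 1 ? results.get(node.deps[0]) : node.deps.map((d) => results.get(d));
    results.set(node.name, node.run(input));
  }
  return results;
}

const out = runDag(nodes);
console.log(out.get("notify-slack")); // "posted: digest of 1 item(s)"
```

The point of the sketch: the DAG is ordinary code, so an agent can generate, test, and refactor it the same way it would any TypeScript module.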

The contract is not documentation — it is the enforceable security policy. From a single contract.dependencies block, the system automatically derives:

  • Deno runtime permissions — locked to declared hosts and ports only
  • Kubernetes NetworkPolicy — default-deny with per-dependency egress rules
  • Secrets validation — every referenced secret must exist before deployment
  • Dynamic targets — CIDR-based rules for runtime-resolved dependencies
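The derivation above can be sketched as follows. The dependency entries mirror the example tentacle's contract, but the field names (`host`, `port`, `auth`) and both helper functions are assumptions, not Tentacular's actual code; note also that fqdn-based egress selectors are a CNI extension (e.g., Cilium), not core Kubernetes NetworkPolicy.

```typescript
// Hypothetical shape for one contract.dependencies entry.
interface Dependency {
  name: string;
  host: string; // dynamic-target dependencies use a CIDR instead of a hostname
  port: number;
  auth?: "bearer-token";
}

const dependencies: Dependency[] = [
  { name: "news-sources",  host: "0.0.0.0/0",       port: 443 }, // dynamic target
  { name: "openai-api",    host: "api.openai.com",  port: 443, auth: "bearer-token" },
  { name: "slack-webhook", host: "hooks.slack.com", port: 443, auth: "bearer-token" },
];

// Deno permission flag: network access locked to the declared host:port pairs.
// CIDR targets can't be expressed as hostnames, so they are left to the
// network layer and skipped here.
function denoNetFlag(deps: Dependency[]): string {
  const hosts = deps
    .filter((d) => !d.host.includes("/"))
    .map((d) => `${d.host}:${d.port}`);
  return `--allow-net=${hosts.join(",")}`;
}

// Per-dependency egress rules for a default-deny NetworkPolicy: CIDRs become
// ipBlock rules; hostnames become fqdn-style rules (a CNI extension).
function egressRules(deps: Dependency[]): object[] {
  return deps.map((d) => ({
    to: d.host.includes("/") ? [{ ipBlock: { cidr: d.host } }] : [{ fqdn: d.host }],
    ports: [{ protocol: "TCP", port: d.port }],
  }));
}

console.log(denoNetFlag(dependencies));
// --allow-net=api.openai.com:443,hooks.slack.com:443
```

Because every artifact is derived from the same declaration, the runtime sandbox and the network policy cannot drift apart.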

Zero trust at every layer: runtime (Deno on distroless — no shell, no toolchain), network (default-deny NetworkPolicy), kernel (gVisor syscall interception), and Kubernetes (non-root, read-only filesystem, no SA token, dropped capabilities).
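The Kubernetes-layer hardening above can be sketched as the pod settings such a platform would render into each workflow Deployment. The values follow the text (non-root, read-only filesystem, no SA token, dropped capabilities, gVisor); the image name and exact manifest shape are assumptions.

```typescript
// Sketch of the hardened pod spec, assuming a hypothetical image name.
const podSpec = {
  runtimeClassName: "gvisor",          // schedule onto the gVisor (runsc) runtime class
  automountServiceAccountToken: false, // no Kubernetes API credentials in the pod
  securityContext: { runAsNonRoot: true },
  containers: [
    {
      name: "workflow",
      image: "tentacular/workflow:distroless", // hypothetical distroless image
      securityContext: {
        readOnlyRootFilesystem: true,
        allowPrivilegeEscalation: false,
        capabilities: { drop: ["ALL"] },
      },
    },
  ],
};

console.log(JSON.stringify(podSpec.containers[0].securityContext));
```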

Team Collaboration
  • Enclaves: Each Slack channel is a private workspace with its own compute, storage, and team membership
  • The Kraken: Slack-native AI agent — teams interact through conversation, not terminals
  • Transitive trust: The agent acts as the user, not as a service account — all actions are attributable
  • POSIX-style permissions: Owner, member, and other roles with familiar read/write/execute semantics

Security
  • Contract-driven sandboxing: Workflows can only access what they declare — enforced at runtime, network, and kernel levels
  • gVisor kernel isolation: Syscall interception makes privilege escalation structurally difficult
  • Read-only containers: No shell, no toolchain, no way to modify code at runtime
  • Default-deny networking: Each workflow gets only the egress rules its contract requires

Workflows
  • TypeScript DAG engine: Workflows are code, not config — agents write exactly what the team needs
  • Sidecar containers: Native binaries (ffmpeg, Chromium, ML models) run alongside workflows in the same pod
  • Shared node modules: Common code is shared across workflow nodes without duplication
  • Scaffold library: Production-ready templates for common patterns — get started in minutes
  • Cron and event triggers: Workflows run on schedules, respond to events, or run on demand

Infrastructure
  • Exoskeleton services: Postgres, S3-compatible storage, and optionally NATS and SPIRE — provisioned automatically per enclave
  • In-cluster MCP server: Authenticated control plane with OAuth/SSO — agents and CLI both use the same API
  • Git-backed state: Optional git monorepo as system of record for all workflow source, metadata, and encrypted secrets
  • Local module cache: Supply-chain security with package pinning and air-gap readiness
  • Multi-arch builds: All components build for both AMD64 and ARM64

Why not just have the agent write a shell script? Without a workflow contract, determining what resources are safe to access is impossible. Prompt injection could modify the script to access anything. The contract creates the enforceable security boundary.

Why not use n8n or similar workflow systems? Agents struggle with n8n’s massive configuration surface. In testing, the same AI news roundup took hours via n8n but about 5 minutes with Tentacular. And self-hosted n8n has no equivalent security sandboxing.

Can’t I just run an AI assistant in a hardened sandbox? You can, but hardening restricts data access — which reduces the assistant’s value. If instead the assistant deploys a verified tentacle to a trusted cluster, that tentacle can access sensitive data in a way the raw assistant cannot. The tentacle’s contract makes the access pattern auditable and enforceable.