How to Build Multi-Tenant Durable Workflows with Dynamic Workers
Introduction
When Cloudflare Workers launched, it was a direct-to-developer platform. Today, the ecosystem has grown to support multi-tenant applications where platforms enable their customers to deploy custom code at runtime. From AI-written TypeScript to CI/CD pipelines defined per repository, the need for per-tenant logic is everywhere. This guide walks you through bridging durable execution with dynamic deployment using Cloudflare’s Dynamic Workflows — a solution that gives each tenant its own isolated, durable workflow without requiring you to hardcode a single class per deploy.

What You Need
- A Cloudflare account with access to Workers, Durable Objects, and Workflows (beta features may require early access).
- Familiarity with TypeScript or JavaScript.
- The wrangler CLI installed and configured.
- Understanding of multi‑tenant architecture and isolation requirements.
Step-by-Step Implementation
Step 1: Understand the Gap Between Durable and Dynamic Execution
Traditional Workflows assume your workflow code is part of your deployment. A single wrangler.jsonc binds one class to one workflow. This works when you own all the code, but fails when you need per‑tenant or per‑agent workflows. For example:
- An app platform where AI writes TypeScript for every tenant.
- A CI/CD product where each repo defines its own pipeline.
- An agent SDK where each agent creates its own durable plan.
Dynamic Workflows solve this by letting you supply workflow code at runtime, just as Dynamic Workers solved dynamic compute and Durable Object Facets solved dynamic storage.
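For contrast, here is what the static model looks like in wrangler.jsonc: one workflow binding tied to one class name at deploy time, for every tenant (names here are illustrative):

```jsonc
{
  "name": "my-platform",
  "main": "src/index.ts",
  "workflows": [
    {
      // One binding, one hardcoded class — fixed at deploy time.
      "name": "tenant-pipeline",
      "binding": "TENANT_PIPELINE",
      "class_name": "TenantPipeline"
    }
  ]
}
```

Anything tenant-specific has to fit inside that single TenantPipeline class, which is exactly the limitation Dynamic Workflows remove.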
Step 2: Set Up Dynamic Workers for Compute
Before tackling workflows, you need a dynamic compute primitive. Use Dynamic Workers (open beta) to spin up isolated, sandboxed Workers on the same machine in single‑digit milliseconds. Each tenant gets its own runtime context. Your platform provides the code — be it generated by AI, submitted by users, or fetched from a repository.
- Decide how you will accept code (e.g., via API, UI, or AI generation).
- Use the Workers runtime API to create a new Worker instance per request or per tenant.
- Ensure isolation: each Worker runs in its own sandbox, preventing cross‑tenant interference.
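The loading flow can be sketched as follows. The loader binding, bundle shape, and get() signature below are assumptions modeled loosely on Cloudflare's dynamic Worker loading beta — verify against the current API docs before relying on them:

```typescript
// Shape of a dynamically loaded Worker bundle (field names are assumptions).
type WorkerBundle = {
  compatibilityDate: string;
  mainModule: string;                // entry module name
  modules: Record<string, string>;   // module name -> source code
};

// Hypothetical loader binding: returns a cached isolate per id,
// calling the factory only when that id hasn't been loaded yet.
interface WorkerLoader {
  get(id: string, make: () => Promise<WorkerBundle>): Promise<WorkerBundle>;
}

// Build one isolated bundle per tenant from platform-supplied source,
// whether it came from an API upload, a UI, or AI generation.
async function loadTenantWorker(
  loader: WorkerLoader,
  tenantId: string,
  fetchSource: (id: string) => Promise<string>
): Promise<WorkerBundle> {
  return loader.get(`tenant:${tenantId}`, async () => ({
    compatibilityDate: "2025-01-01",
    mainModule: "main.js",
    modules: { "main.js": await fetchSource(tenantId) },
  }));
}
```

Keying the isolate by tenant id is what gives you one sandboxed runtime context per tenant rather than one shared deployment.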
Step 3: Integrate Durable Object Facets for Per‑Tenant Storage
Each dynamically loaded app needs its own persistent storage. Durable Object Facets extend the dynamic idea to storage: each tenant gets its own SQLite database, created on demand, with the platform acting as a supervisor.
- Map each tenant to a new facet instance.
- Configure storage classes to match tenant workload patterns.
- Use the supervisor pattern to enforce rate limits, quotas, and access controls.
This gives you the same on‑demand, isolated storage for state that Dynamic Workers provide for compute.
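The supervisor pattern can be sketched in plain TypeScript. Here an in-memory map stands in for real facet-backed SQLite databases, and a simple key quota stands in for real resource controls — all names are illustrative, not the Facets API:

```typescript
// In-memory stand-in for one tenant's facet storage; in production each
// tenant would map to its own on-demand SQLite database.
class TenantStore {
  private data = new Map<string, string>();
  put(key: string, value: string) { this.data.set(key, value); }
  get(key: string): string | undefined { return this.data.get(key); }
  size(): number { return this.data.size; }
}

// Supervisor: creates one store per tenant on demand and enforces a quota,
// so no tenant can exhaust shared resources or touch another's data.
class Supervisor {
  private stores = new Map<string, TenantStore>();
  constructor(private maxKeysPerTenant: number) {}

  storeFor(tenantId: string): TenantStore {
    let store = this.stores.get(tenantId);
    if (!store) {
      store = new TenantStore();
      this.stores.set(tenantId, store);
    }
    return store;
  }

  put(tenantId: string, key: string, value: string) {
    const store = this.storeFor(tenantId);
    if (store.size() >= this.maxKeysPerTenant) {
      throw new Error(`quota exceeded for tenant ${tenantId}`);
    }
    store.put(key, value);
  }
}
```

The important property is that all tenant access flows through the supervisor, which is the single place to hang rate limits, quotas, and access checks.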
Step 4: Add Versioned Source Control with Artifacts
For workflows that involve code or configuration evolution, you need a Git‑native filesystem. Artifacts provide a versioned filesystem that you can create by the tens of millions — one per agent, session, or tenant.
- Use the Artifacts API to create a new filesystem for each tenant or agent session.
- Store workflow definitions, scripts, and configuration in the artifact.
- Enable versioning so you can roll back or diff changes over time.
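To make the versioning idea concrete, here is a tiny in-memory sketch of a versioned file with rollback — a stand-in for the behavior you get from an Artifacts filesystem, not its actual API:

```typescript
// Minimal versioned file: every write creates a new immutable version.
class VersionedFile {
  private versions: string[] = [];

  write(content: string): number {
    this.versions.push(content);
    return this.versions.length - 1;   // version id
  }

  // Read the latest version, or a specific historical one.
  read(version?: number): string | undefined {
    const v = version ?? this.versions.length - 1;
    return this.versions[v];
  }

  // Roll back by re-writing an old version as the new head,
  // so history is never destroyed.
  rollback(version: number): number {
    const old = this.versions[version];
    if (old === undefined) throw new Error(`no version ${version}`);
    return this.write(old);
  }
}
```

Because old versions are immutable, diffing two versions or reverting a bad tenant deploy is just a matter of comparing or re-promoting history.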
Step 5: Bridge Durable Execution with Dynamic Workflows
Now you can combine all three primitives. Dynamic Workflows bring durable execution to dynamic deployments. Instead of a static class, your platform injects the workflow code at runtime. Each tenant, agent, or pipeline gets its own workflow instance that can run for hours, sleep, wait for external events, and resume exactly where it left off.

- Define a workflow template that accepts tenant‑specific code (e.g., a run(event, step) function).
- When a tenant triggers a workflow, create a new Worker using Dynamic Workers, attach a storage facet, and bind the workflow runtime to the provided code.
- Use the Workflows V2 engine, which supports up to 50,000 concurrent instances and 300 new instances per second per account — designed for the agentic era.
For example, a CI/CD platform can let each repository define its own pipeline as a workflow. The platform deploys a Dynamic Worker per repository, which runs the pipeline steps durably.
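The core durable-execution contract can be sketched as a step runner that persists each completed step's result, so a replay after a crash skips finished work and resumes exactly where it left off. This is a simplification of the real Workflows engine, not its implementation:

```typescript
// Durable step runner sketch: results of completed steps are recorded in a
// log, so re-running the same workflow replays finished steps from the log.
type StepLog = Map<string, unknown>;

class Step {
  constructor(private log: StepLog) {}

  async do<T>(name: string, fn: () => Promise<T>): Promise<T> {
    if (this.log.has(name)) return this.log.get(name) as T; // replay: skip work
    const result = await fn();
    this.log.set(name, result);   // persist before moving to the next step
    return result;
  }
}

// A tenant-supplied workflow body, shaped like run(event, step):
// here a toy CI pipeline with an illustrative hardcoded commit id.
async function pipeline(event: { repo: string }, step: Step): Promise<string> {
  const commit = await step.do("checkout", async () => `${event.repo}@abc123`);
  const built = await step.do("build", async () => `built:${commit}`);
  return built;
}
```

In the real engine the log lives in durable storage rather than memory, which is what lets a workflow sleep for hours or wait for external events and still resume deterministically.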
Step 6: Test, Monitor, and Scale
After implementation, ensure your system is robust:
- Test tenant isolation: verify that one tenant’s workflow cannot access another’s data or runtime.
- Monitor workflow execution using Cloudflare’s observability tools. Track instance counts, failure rates, and latency.
- Scale gradually: start with a few tenants and increase as you validate performance.
- Use the supervisor pattern (from Durable Object Facets) to manage resource quotas and prevent abuse.
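While you validate a small tenant set, simple in-process counters like the sketch below can complement Cloudflare's observability tools for tracking instance counts and failure rates per tenant (illustrative code, not a platform API):

```typescript
// Minimal per-tenant metrics tracker for workflow executions.
class WorkflowMetrics {
  private counts = new Map<string, { started: number; failed: number }>();

  // Record one workflow execution outcome for a tenant.
  record(tenantId: string, ok: boolean) {
    const m = this.counts.get(tenantId) ?? { started: 0, failed: 0 };
    m.started++;
    if (!ok) m.failed++;
    this.counts.set(tenantId, m);
  }

  // Fraction of failed executions; 0 for unseen tenants.
  failureRate(tenantId: string): number {
    const m = this.counts.get(tenantId);
    return m && m.started > 0 ? m.failed / m.started : 0;
  }
}
```

A per-tenant failure rate is a useful early signal: a spike for one tenant usually means bad tenant code, while a spike across all tenants points at the platform.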
Tips
- Start with a single use case. Whether it’s AI‑generated agents or CI/CD pipelines, focus on one and then extend.
- Leverage the Cloudflare Workers ecosystem. Many existing packages for queues, R2, and AI work seamlessly with Dynamic Workers.
- Design for failure. Workflows are built to survive — test how your system behaves when a tenant’s code throws an unhandled error.
- Use versioned artifacts to roll back tenant workflows if a new version introduces bugs.
- Consider security at every layer: input validation, rate limiting, and resource isolation are non‑negotiable in multi‑tenant platforms.
By following these steps, you can offer durable, dynamic workflows to your customers — exactly what platforms building the next generation of SaaS, AI agents, and CI/CD tools need.