Building Multi-Tenant Durable Execution with Dynamic Workflows
Introduction
When we launched Workers eight years ago, it was a direct-to-developers platform. Over the years, we expanded the ecosystem so that platforms could enable their customers to ship code through multi-tenant applications. Today, we introduce Dynamic Workflows, bridging durable execution and dynamic deployment. This guide walks you through setting up Dynamic Workflows for your platform, allowing each tenant to have their own workflow code, running in isolated environments with durable execution capabilities.

What You Need
- A Cloudflare account with Workers subscription (paid plan recommended for production)
- Node.js and npm installed
- Wrangler CLI (install via npm install -g wrangler)
- A basic understanding of TypeScript and Cloudflare Workers
- Familiarity with Durable Objects and Workflows (optional but helpful)
Step-by-Step Guide
Step 1: Set Up Dynamic Workers for Compute
Dynamic Workers allow you to inject code at runtime and get an isolated, sandboxed Worker in milliseconds. This is the foundation for per-tenant compute. Use the Dynamic Workers API to upload or reference tenant-specific code. For example:
import { DynamicWorker } from '@cloudflare/dynamic-worker';

const myWorker = new DynamicWorker({
  code: tenantCode, // TypeScript string from your tenant
  bindings: { storage: ... }
});
const response = await myWorker.fetch(request);
This ensures each tenant runs in its own isolated context, on the same machine, with single-digit millisecond startup.
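In practice, you will probably want to validate tenant code and reuse workers across requests rather than instantiating one per call. A minimal sketch, assuming the DynamicWorker API above; the validateTenantCode helper is a hypothetical placeholder for your platform's own checks:
// Cache one DynamicWorker per tenant; validateTenantCode is a hypothetical
// hook for whatever static checks your platform enforces on uploaded code.
const workerCache = new Map<string, DynamicWorker>();

async function getTenantWorker(tenantId: string, tenantCode: string): Promise<DynamicWorker> {
  let worker = workerCache.get(tenantId);
  if (!worker) {
    validateTenantCode(tenantCode); // throws on disallowed patterns
    worker = new DynamicWorker({ code: tenantCode, bindings: {} });
    workerCache.set(tenantId, worker);
  }
  return worker;
}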
Step 2: Provision Per-Tenant Storage with Durable Object Facets
Durable Object Facets extend the same idea to storage. Each tenant gets its own SQLite database, spun up on demand. Use facets to isolate tenant data:
import { DurableObjectFacet } from '@cloudflare/durable-object-facet';

const tenantDb = new DurableObjectFacet({
  name: `tenant-${tenantId}`,
  sql: true // or 'kvs'
});
await tenantDb.sql.execute('CREATE TABLE ...');
The platform acts as a supervisor, managing storage lifecycle.
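To make the supervisor role concrete, here is a sketch of a small provisioning layer; the DurableObjectFacet calls follow the example above, and the schema and lifecycle are illustrative assumptions:
// Illustrative supervisor: lazily provisions a facet per tenant and
// retires it when the tenant leaves.
class TenantStorageSupervisor {
  private facets = new Map<string, DurableObjectFacet>();

  async provision(tenantId: string): Promise<DurableObjectFacet> {
    let facet = this.facets.get(tenantId);
    if (!facet) {
      facet = new DurableObjectFacet({ name: `tenant-${tenantId}`, sql: true });
      await facet.sql.execute(
        'CREATE TABLE IF NOT EXISTS runs (id TEXT PRIMARY KEY, status TEXT)'
      );
      this.facets.set(tenantId, facet);
    }
    return facet;
  }

  retire(tenantId: string): void {
    this.facets.delete(tenantId); // plus any platform-side archival/cleanup
  }
}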
Step 3: Manage Version Control with Artifacts
Artifacts provide a Git-native, versioned filesystem for each tenant. Create millions of artifacts, one per agent, session, or tenant. Use the Artifacts API to store and retrieve code or data:
import { Artifact } from '@cloudflare/artifacts';

const tenantArtifact = new Artifact({
  name: `pipeline-${tenantId}`,
  files: { 'workflow.ts': tenantWorkflowCode }
});
await tenantArtifact.commit();
This gives each tenant a dedicated, versioned filesystem for their workflows.
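Versioning also gives you rollback: if a tenant ships a broken workflow, pin them back to a known-good commit. The log and checkout calls below are assumptions layered on the Artifact API shown above:
// Assumed version-control surface: log() lists commits, checkout()
// restores one. Both are hypothetical extensions of the API above.
const history = await tenantArtifact.log();
const lastGood = history.find((commit) => commit.tag === 'stable');
if (lastGood) {
  await tenantArtifact.checkout(lastGood.id);
}
const workflowCode = await tenantArtifact.get('workflow.ts');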
Step 4: Define Dynamic Workflows for Each Tenant
Workflows is our durable execution engine. Instead of binding a single class per deploy, use Dynamic Workflows to load tenant-specific workflow code at runtime. Create a workflow handler that fetches the tenant's code from an Artifact and executes it:
import { Workflow } from '@cloudflare/workflows';

const tenantWorkflow = new Workflow({
  code: await artifact.get('workflow.ts'),
  steps: {
    'step1': async (event) => { ... },
    'step2': async (event) => { ... }
  }
});
const instance = await tenantWorkflow.start({ input });
The workflow engine turns your run(event, step) function into a program that survives failures, sleeps for hours, waits for events, and resumes exactly where it left off.
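For reference, here is what a tenant's uploaded run(event, step) function might look like; the exact module shape a Dynamic Workflow expects is an assumption, but step.do and step.sleep mirror the Workflows step API:
// A tenant-authored workflow: each step.do call is checkpointed, and
// step.sleep suspends durably without consuming compute.
export default {
  async run(event, step) {
    const order = await step.do('load-order', async () => {
      return { id: event.payload.orderId };
    });
    await step.sleep('cool-off', '1 hour');
    await step.do('notify', async () => {
      // e.g. call the tenant's webhook with the order details
    });
    return order;
  }
};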

Step 5: Wire Everything Together in a Platform Handler
Combine dynamic compute, storage, and workflows in a single request handler. When a tenant action triggers a workflow, look up the tenant's code, create a dynamic worker for pre-processing (optional), then start a Dynamic Workflow instance. Use Durable Object Facets for state and Artifacts for code storage. Example structure:
import { Workflow } from '@cloudflare/workflows';

export default {
  async fetch(request, env) {
    const tenantId = extractTenant(request);
    const artifact = env.ARTIFACTS.get(`pipeline-${tenantId}`);
    const workflowCode = await artifact.get('workflow.ts');
    // optionally pre-process with a dynamic worker
    // ...
    const workflow = new Workflow({
      code: workflowCode,
      bindings: { db: env.FACETS.get(`tenant-${tenantId}`) }
    });
    const instance = await workflow.start({ request: request.url });
    return new Response(JSON.stringify({ id: instance.id }), {
      headers: { 'content-type': 'application/json' }
    });
  }
};
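The extractTenant helper is left to your platform; a common approach is deriving the tenant from the request's subdomain. A purely illustrative version:
// Illustrative only: tenant-a.example-platform.com -> "tenant-a".
function extractTenant(request: Request): string {
  const tenantId = new URL(request.url).hostname.split('.')[0];
  if (!tenantId) throw new Error('Unable to determine tenant from hostname');
  return tenantId;
}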
Step 6: Handle Workflow Lifecycle and Recovery
Workflows automatically persists execution state. To track completion or recover from failures, await the instance.result() method or set up webhooks. For long-running workflows, use durable sleeps and event waits:
await workflow.sleep('P1D'); // sleep 1 day
const event = await workflow.waitForEvent('payment_received', { timeout: 'PT1H' });
Each step is checkpointed, so if a worker crashes, the workflow resumes from the last completed step.
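A platform will usually also expose a status route so tenants can check on an instance. The sketch below uses the instance.result() method mentioned above; the get-by-id lookup is an assumption:
// workflow.get(id) is an assumed lookup; instance.result() resolves
// once the workflow completes (or rejects if it failed permanently).
async function handleStatus(workflow: Workflow, instanceId: string): Promise<Response> {
  const instance = await workflow.get(instanceId); // assumed API
  const result = await instance.result();
  return new Response(JSON.stringify({ id: instanceId, result }), {
    headers: { 'content-type': 'application/json' }
  });
}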
Tips and Best Practices
- Start small: Test with a single tenant before scaling to millions. Use the open beta to experiment.
- Monitor usage: Enable Cloudflare Observability to track workflow instance counts and latency.
- Secure tenant code: Always sanitize and validate tenant-provided code to prevent malicious injection. Use sandboxing features of Dynamic Workers.
- Optimize storage: Use Durable Object Facets with SQLite for structured data, and Artifacts for large binary files or versioned code.
- Leverage Workflows V2: With up to 50,000 concurrent instances and 300 new instances per second per account, design your platform for high throughput.
- Handle rate limits: Use backoff strategies for workflow starts and API calls to avoid hitting Cloudflare limits; a sketch follows this list.
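As a starting point for backoff, here is a generic exponential backoff wrapper with jitter; it is not tied to any specific Cloudflare limit:
// Retries a rate-limited call with exponential backoff plus jitter.
async function withBackoff<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err;
      const delay = Math.min(30_000, 2 ** attempt * 250) * (0.5 + Math.random());
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: const instance = await withBackoff(() => workflow.start({ input }));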