Run Your Own Private Image Generator: A Step-by-Step Guide to Docker Model Runner & Open WebUI
Introduction
Imagine needing a few custom images for a presentation or a personal project—you open your browser, log into an AI image service, and then wonder: Where do my prompts go? How many credits did that cost? And why did a perfectly reasonable request for a dragon in a business suit get flagged by an overzealous filter? What if you could sidestep all of that and run the whole operation on your own computer, with a clean chat interface on top?

That’s exactly what Docker Model Runner now makes possible. With just a handful of commands, you can download an image‑generation model, connect it to Open WebUI, and start producing images right from a chat window—all completely local, private, and under your control.
In this guide, you’ll build your very own private image generator. No cloud subscription, no data leaving your machine, and no arbitrary filters. Let’s get started.
What You Need
- Docker Desktop (on macOS) or Docker Engine (on Linux) – both are free.
- About 8 GB of free RAM for a small model (more RAM helps with larger models and faster generation).
- A GPU (optional but highly recommended): NVIDIA with CUDA, Apple Silicon with MPS, or just a CPU fallback.
If you can run docker model version without errors, you’re ready to proceed.
How Docker Model Runner Works with Open WebUI
Before diving into the steps, here’s the big picture:
Docker Model Runner acts as the control plane for your models. It downloads them, manages the inference backend lifecycle, and exposes a 100% OpenAI‑compatible API—including the all‑important POST /v1/images/generations endpoint. Open WebUI already knows how to talk to that endpoint, so by connecting the two, you get a chat interface that can generate images on demand.
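Because the API is OpenAI-compatible, you can also hit the images endpoint directly from a script. Here is a minimal sketch using only the Python standard library. The base URL (host, port, and path prefix) is an assumption — substitute whatever endpoint your Docker Model Runner setup actually reports:

```python
import json
import urllib.request

# Assumption: adjust to the endpoint your Model Runner setup exposes.
BASE_URL = "http://localhost:12434/engines/v1"

def build_image_request(prompt: str, model: str = "stable-diffusion",
                        size: str = "1024x1024", n: int = 1) -> dict:
    """Build an OpenAI-style images/generations payload."""
    return {"model": model, "prompt": prompt, "size": size, "n": n}

def generate(prompt: str) -> dict:
    """POST the payload to the local images/generations endpoint."""
    payload = build_image_request(prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/images/generations",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Typical OpenAI-style shape: {"data": [{"b64_json": "..."}], ...}
        return json.load(resp)
```

Open WebUI sends essentially this same request on your behalf whenever you ask for an image in the chat.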
Step 1: Pull an Image Generation Model
Docker Model Runner uses a compact packaging format called DDUF (Diffusers Unified Format) to distribute image‑generation models through Docker Hub—just like any other OCI artifact.
Open your terminal and pull a model:
docker model pull stable-diffusion
This command downloads the latest Stable Diffusion model (about 7 GB). You can verify that it’s ready with:
docker model inspect stable-diffusion
You’ll see output like this (truncated for clarity):
{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "config": {
    "format": "diffusers",
    "size": "6.94GB"
  }
}
What’s happening under the hood? The model is stored locally as a DDUF file—a single‑file format that bundles everything a diffusion model needs: text encoder, VAE, UNet/DiT, and scheduler configuration. Docker Model Runner knows how to unpack this at runtime so you don’t have to worry about the internals.
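Since the inspect command emits JSON, it is easy to script against. A small sketch, parsing the sample output shown above (in practice you would pipe the command's real output into this):

```python
import json

# The (truncated) JSON emitted by `docker model inspect stable-diffusion`,
# copied from the sample output above.
inspect_output = """
{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "config": {"format": "diffusers", "size": "6.94GB"}
}
"""

info = json.loads(inspect_output)
tag = info["tags"][0]                 # "docker.io/ai/stable-diffusion:latest"
fmt = info["config"]["format"]        # "diffusers"
print(f"{tag}: packaged as {fmt}, {info['config']['size']}")
```

This kind of check is handy in setup scripts: if the expected tag or format is missing, you know the pull did not complete.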
Step 2: Launch Open WebUI
Here’s the neat part. Docker Model Runner includes a built‑in launch command that automatically wires up Open WebUI against your local inference endpoint:
docker model launch openwebui
That single command does the following:
- Starts Open WebUI (if not already running) as a Docker container.
- Connects it to the model’s API endpoint (running on localhost).
- Makes the chat interface available at http://localhost:8080 (or a similar port).
After a few seconds, open your browser and navigate to the displayed URL. You’ll see the familiar Open WebUI chat interface, but now with the ability to generate images.

Step 3: Generate Your First Image
Inside the chat window, simply type a prompt that describes the image you want. For example:
“A dragon wearing a business suit, digital art, vibrant colors”
Open WebUI will send your prompt to the local model, which will generate an image and display it back in the chat. You can iterate, tweak prompts, and save the results—all without any internet connection or cloud dependency.
Note: The first generation may take a little longer while the model loads into memory. Subsequent generations are usually faster.
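If you generate through the API rather than the chat window, OpenAI-style image endpoints typically return each image base64-encoded under data[i].b64_json. A small sketch for saving those to disk (the response shape is assumed, not confirmed by the article):

```python
import base64
from pathlib import Path

def save_images(response: dict, stem: str = "generated") -> list[str]:
    """Save each base64-encoded image in an OpenAI-style response to disk.

    Assumes the common {"data": [{"b64_json": "..."}]} response shape.
    """
    paths = []
    for i, item in enumerate(response.get("data", [])):
        path = Path(f"{stem}-{i}.png")
        path.write_bytes(base64.b64decode(item["b64_json"]))
        paths.append(str(path))
    return paths
```

Open WebUI does this decoding for you; the helper is only useful when scripting against the endpoint directly.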
Step 4: (Optional) Switch Models or Customize
Docker Model Runner supports pulling other DDUF‑packaged models. For instance, you can pull a different variant:
docker model pull some-other-model
Then restart Open WebUI with the new model (or use the API’s model selection feature if available). You can also tweak inference parameters by editing the configuration files inside the Docker container—though for most users the defaults work well.
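OpenAI-compatible servers usually expose a GET /v1/models endpoint listing every available model, and you select one per request via the "model" field. A sketch, assuming that convention holds here (the base URL and model IDs are illustrative, not confirmed):

```python
import json
import urllib.request

# Assumption: adjust to the endpoint your Model Runner setup exposes.
BASE_URL = "http://localhost:12434/engines/v1"

def model_ids(body: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style /models response."""
    return [m["id"] for m in body.get("data", [])]

def list_models(base_url: str = BASE_URL) -> list[str]:
    """Query the /models endpoint for locally pulled models."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return model_ids(json.load(resp))
```

If the endpoint is supported, you can then pass any listed ID as the "model" field of an images/generations request instead of restarting Open WebUI.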
Tips for a Smooth Experience
- Use a GPU if possible: Generating images on CPU is doable but slow. An NVIDIA GPU with CUDA or an Apple Silicon Mac (MPS) will cut generation time from minutes to seconds.
- Monitor RAM usage: Image generation models are memory‑hungry. Close other heavy applications to free up RAM, and consider adding swap space if you’re low.
- Keep Docker up to date: Newer versions of Docker Model Runner include performance improvements and bug fixes. Run docker model version to check, and update Docker Desktop regularly.
- Save your favorite prompts: Open WebUI keeps a conversation history, but you can also export your favorite images directly from the interface.
- Explore other models: The DDUF format opens up many possibilities. Check Docker Hub for community‑uploaded models that might suit your style.
- Troubleshooting tip: If the launch command fails, ensure your Docker daemon is running and that you have enough disk space (the model needs ~7 GB). Also verify that no other service is using port 8080.
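For the port conflict in particular, a quick check script can tell you whether something is already listening before you launch. A minimal sketch using the standard library:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on a successful TCP connection.
        return s.connect_ex((host, port)) == 0

if port_in_use(8080):
    print("Port 8080 is taken: stop the other service or pick another port.")
```

The same check works for any port Open WebUI might report in its launch output.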
You now have a fully private, locally‑run image generator that puts you in complete control. No credits, no filters, no privacy worries—just your prompts and your machine. Happy generating!