Set up Seedance video generation inside ComfyUI with custom nodes. Covers node installation, hardware requirements, workflow configuration, and comparison with Dreamina and direct API approaches.
ComfyUI is the node-based UI for diffusion models. Seedance integration comes in two flavors: API-connected nodes (available now) and potential local inference (pending model release).
Custom ComfyUI nodes that call the Seedance API (BytePlus, fal.ai, or Replicate) and return the generated video to your ComfyUI workflow. Works now with Seedance 1.5 Pro and soon with 2.0.
Once Seedance model weights are publicly released on HuggingFace, local inference nodes will be possible. Currently, Seedance 2.0 weights are not public. See our open source status page.
Use ComfyUI's strength: chain Seedance generation with local upscaling, frame interpolation, style transfer, and other nodes. Build pipelines that combine API-powered generation with local post-processing.
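Under the hood, the API-connected nodes are ordinary ComfyUI custom nodes that wrap a remote call. The sketch below shows the general shape of such a node; the class name, input fields, and endpoint URL are illustrative assumptions, not the actual ComfyUI-Seedance implementation.

```python
import os
import requests

class SeedanceT2VSketch:
    """Hypothetical text-to-video node that proxies a remote Seedance API."""

    @classmethod
    def INPUT_TYPES(cls):
        # Standard ComfyUI input declaration; field names and ranges are illustrative.
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True}),
                "duration": ("INT", {"default": 5, "min": 2, "max": 12}),
                "api_key": ("STRING", {"default": ""}),
            }
        }

    RETURN_TYPES = ("STRING",)   # path to the downloaded video file
    FUNCTION = "generate"
    CATEGORY = "Seedance"

    def generate(self, prompt, duration, api_key):
        key = api_key or os.environ.get("SEEDANCE_API_KEY", "")
        # Placeholder endpoint; the real node pack targets BytePlus, fal.ai, or Replicate,
        # which return a job id or video URL to poll. This sketch pretends the response
        # body is the video bytes to keep the example short.
        resp = requests.post(
            "https://example.invalid/v1/seedance/generate",
            headers={"Authorization": f"Bearer {key}"},
            json={"prompt": prompt, "duration": duration},
            timeout=180,
        )
        resp.raise_for_status()
        video_path = os.path.join("output", "seedance_clip.mp4")
        with open(video_path, "wb") as f:
            f.write(resp.content)
        return (video_path,)

# ComfyUI discovers custom nodes through this module-level mapping.
NODE_CLASS_MAPPINGS = {"SeedanceT2VSketch": SeedanceT2VSketch}
```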
Requirements differ based on whether you use API nodes (lightweight) or plan for future local inference (GPU-heavy).
| Component | API Nodes (Current) | Local Inference (Future) |
|---|---|---|
| GPU | Any (inference is remote) | 24GB+ VRAM (RTX 4090, A100) |
| RAM | 8GB+ | 32GB+ recommended |
| Storage | 2GB (ComfyUI + nodes) | 50-100GB (model weights) |
| Python | 3.10+ | 3.10+ |
| OS | Windows, macOS, Linux | Linux recommended (CUDA) |
| Internet | Required (API calls) | Only for download |
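If you are planning ahead for the local-inference column, a quick way to check how much VRAM your GPU exposes (this assumes PyTorch is installed, as it is in any working ComfyUI environment):

```python
import torch

# Compare local VRAM against the 24GB+ guideline for future local inference.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / (1024 ** 3)
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    print("Meets the 24GB+ guideline" if vram_gb >= 24 else "Below 24GB (API nodes are unaffected)")
else:
    print("No CUDA GPU detected -- API nodes still work, since inference is remote.")
```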
There are two installation methods: via ComfyUI Manager (recommended) or a manual git clone into ComfyUI/custom_nodes/ followed by pip install -r requirements.txt inside the node folder.

For the Manager route:

1. Open ComfyUI in your browser and click Manager in the top menu.
2. Click Install Custom Nodes and search for "seedance".
3. Click Install on the "ComfyUI-Seedance" node pack, then restart ComfyUI when prompted.
After installation, add a Seedance API Config node to your workflow. Enter your API key from BytePlus, fal.ai, or Replicate. The node supports all three providers — select your preferred platform from the dropdown.
For security, store your API key in an environment variable (such as SEEDANCE_API_KEY) rather than hardcoding it in the workflow JSON. The node supports both methods.
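As a sketch of what "both methods" can look like in practice (the helper name below is hypothetical, not part of the node pack's API): the node can prefer a key typed into its widget and fall back to the environment variable, so the key never ends up in exported workflow JSON.

```python
import os

def resolve_api_key(widget_value: str = "") -> str:
    # Hypothetical helper: use the key entered in the node widget if present,
    # otherwise fall back to the SEEDANCE_API_KEY environment variable.
    return widget_value.strip() or os.environ.get("SEEDANCE_API_KEY", "")
```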
Pre-built ComfyUI workflows for common Seedance use cases. Each workflow can be imported directly into ComfyUI.
The simplest workflow. A text prompt goes into the Seedance T2V node, which returns a video.
Load an image, add a motion prompt, and generate a video from the static frame.
Generate with Seedance, then upscale locally using Real-ESRGAN or similar nodes.
Chain multiple Seedance nodes with different prompts to generate an entire sequence.
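Beyond importing these workflows through the UI, a graph exported with ComfyUI's "Save (API Format)" option can be queued programmatically against the local ComfyUI server, which is convenient for batch runs such as the multi-shot sequence. A rough sketch; the filename, node id, and input name are placeholders for whatever your exported graph actually contains:

```python
import json
import requests

# Workflow exported from ComfyUI via "Save (API Format)"; the filename is a placeholder.
with open("seedance_t2v_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

shots = [
    "a red fox trotting through fresh snow, golden hour",
    "the same fox curling up under a pine tree, soft light",
]

for text in shots:
    workflow["6"]["inputs"]["prompt"] = text      # "6" and "prompt" depend on your graph
    resp = requests.post(
        "http://127.0.0.1:8188/prompt",           # default local ComfyUI address
        json={"prompt": workflow},
        timeout=30,
    )
    resp.raise_for_status()
    print("queued:", resp.json()["prompt_id"])
```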
Each approach has different strengths. Choose based on your technical skill and workflow needs.
| Feature | ComfyUI Nodes | Dreamina Web | Direct API |
|---|---|---|---|
| Ease of Use | Moderate | Easiest | Developer-level |
| Workflow Chaining | Full node graph | None | Custom scripting |
| Local Post-Processing | Built-in (upscale, interp) | None (export first) | Custom code |
| Batch Generation | Excellent | Limited | Excellent (scripted) |
| Setup Difficulty | Medium | None (browser) | High |
| Free Tier Access | Via API credits | Daily credits | 2M free tokens (BytePlus) |
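For context on the Direct API column: calling Seedance from a script takes only a few lines with a provider SDK. Below is a minimal sketch using the Replicate Python client; the model slug and input field names are assumptions, so check Replicate's catalog for the exact listing and parameters.

```python
import replicate  # pip install replicate; reads REPLICATE_API_TOKEN from the environment

# Model slug and parameters are assumptions -- verify against the provider's docs.
output = replicate.run(
    "bytedance/seedance-1-pro",
    input={
        "prompt": "a paper boat drifting down a rain-soaked street, cinematic",
        "duration": 5,
    },
)

# The client typically returns a URL or file-like handle for the generated video.
print(output)
```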
Fixes for the most frequently encountered problems when running Seedance nodes in ComfyUI.
- Nodes missing after install: confirm the pack is in ComfyUI/custom_nodes/ComfyUI-Seedance/, run pip install -r requirements.txt inside the node folder, and check the ComfyUI console for import errors.
- API requests timing out: raise timeout_seconds to 180 or higher, and check your API provider's status page for outages.
- Installation problems on macOS: install Git with brew install git. If using the portable ComfyUI version, ensure Python and Git are in your system PATH. On Apple Silicon Macs, some pip dependencies may need the arch -arm64 prefix for correct architecture builds.
- No local inference option: as of February 2026, Seedance 2.0 model weights are not publicly available for download. ComfyUI nodes currently connect to the Seedance API for server-side inference. If ByteDance releases the weights on HuggingFace, local inference nodes will follow. For now, local video generation alternatives include Wan 2.1/2.6 (open source).
For API-based nodes, any GPU that runs ComfyUI is sufficient (inference is remote). If local models become available in the future, expect to need at minimum 24GB VRAM (NVIDIA RTX 4090 or A100). Current open-source video models of similar capability typically require 16-24GB VRAM for 720p generation.
ComfyUI itself is free and open source. Seedance API costs are the same whether you call the API from ComfyUI or from any other client. The advantage of ComfyUI is workflow automation: you can chain generation with upscaling, interpolation, and other post-processing without manual steps, which can save time on repetitive tasks.
Running ComfyUI on a cloud GPU instance (RunPod or Vast.ai) and accessing it via the web UI is a popular setup. The Seedance API nodes work normally since they make outbound API calls regardless of where ComfyUI runs. This setup also gives you access to powerful local post-processing nodes (upscaling, RIFE interpolation) on the cloud GPU.
You can use your own images: the Seedance I2V node accepts an image input (from a Load Image node or another node's output) plus a text prompt describing the desired motion. Connect your image source to the Seedance I2V node, add a motion prompt, and connect the output to a Save Video node. See the workflow examples above for the node graph layout.
Combine Seedance generation with ComfyUI's node ecosystem for unlimited creative workflows.