
Seedance ComfyUI Guide

Set up Seedance video generation inside ComfyUI with custom nodes. Covers node installation, hardware requirements, workflow configuration, and comparison with Dreamina and direct API approaches.

Seedance + ComfyUI in 2026

ComfyUI is a node-based interface for building diffusion-model pipelines. Seedance integration comes in two flavors: API-connected nodes (available now) and local inference (pending a public release of the model weights).


API-Based Nodes

Custom ComfyUI nodes that call the Seedance API (BytePlus, fal.ai, or Replicate) and return the generated video to your ComfyUI workflow. Works now with Seedance 1.5 Pro and soon with 2.0.
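
Under the hood, an API node of this kind is essentially a thin wrapper around a provider SDK or REST call. Here is a minimal sketch using the Replicate Python client; the model slug and input keys are illustrative assumptions, so check the provider's catalog for the current Seedance listing and its actual parameter names.

# pip install replicate; set REPLICATE_API_TOKEN in your environment
import replicate

# Model slug and input keys are illustrative, not confirmed parameter names
output = replicate.run(
    "bytedance/seedance-1-pro",
    input={"prompt": "a koi pond at dusk, slow dolly-in", "duration": 5},
)
print(output)  # typically a URL to the generated video file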


Local Inference (Future)

Once Seedance model weights are publicly released on HuggingFace, local inference nodes will be possible. Currently, Seedance 2.0 weights are not public. See our open source status page.

Hybrid Workflows

Use ComfyUI's strength: chain Seedance generation with local upscaling, frame interpolation, style transfer, and other nodes. Build pipelines that combine API-powered generation with local post-processing.

System Requirements

Requirements differ based on whether you use API nodes (lightweight) or plan for future local inference (GPU-heavy).

Component | API Nodes (Current) | Local Inference (Future)
GPU | Any (inference is remote) | 24GB+ VRAM (RTX 4090, A100)
RAM | 8GB+ | 32GB+ recommended
Storage | 2GB (ComfyUI + nodes) | 50-100GB (model weights)
Python | 3.10+ | 3.10+
OS | Windows, macOS, Linux | Linux recommended (CUDA)
Internet | Required (API calls) | Only for model download
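
If you are planning ahead for local inference, a quick way to check your available VRAM from the Python environment ComfyUI runs in (PyTorch is already installed alongside ComfyUI):

# Quick VRAM check from the Python environment ComfyUI runs in
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA GPU detected (fine for API nodes; inference is remote)")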

Installing Seedance Nodes in ComfyUI

Two installation methods: via ComfyUI Manager (recommended) or manual git clone.

Method 1: ComfyUI Manager (Recommended)

1. Open ComfyUI in your browser. Click Manager in the top menu.

2. Click Install Custom Nodes. Search for seedance.

3. Click Install on the "ComfyUI-Seedance" node pack. Restart ComfyUI when prompted.

Method 2: Manual Git Clone

# Navigate to custom_nodes directory
cd ComfyUI/custom_nodes/

# Clone the Seedance node pack
git clone https://github.com/community/ComfyUI-Seedance.git

# Install Python dependencies
cd ComfyUI-Seedance
pip install -r requirements.txt

# Restart ComfyUI

Configure Your API Key

After installation, add a Seedance API Config node to your workflow. Enter your API key from BytePlus, fal.ai, or Replicate. The node supports all three providers — select your preferred platform from the dropdown.

Security tip: Store your API key in an environment variable (SEEDANCE_API_KEY) rather than hardcoding it in the workflow JSON. The node supports both methods.
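
A sketch of the env-first lookup such a node would typically perform; the function name here is ours for illustration, not part of the node pack's API.

import os

def resolve_api_key(node_field: str = "") -> str:
    # Prefer the environment variable; fall back to the node's text field
    key = os.environ.get("SEEDANCE_API_KEY", "") or node_field
    if not key:
        raise ValueError("Set SEEDANCE_API_KEY or enter a key in the config node")
    return key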

Sample Workflow Configurations

Pre-built ComfyUI workflows for common Seedance use cases. Each workflow can be imported directly into ComfyUI.


Text-to-Video Workflow

The simplest workflow. A text prompt goes into the Seedance T2V node, which returns a video.

[Seedance API Config] → [Seedance T2V]
   → prompt: "your text prompt"
   → model: seedance-2.0
   → duration: 5s
   → resolution: 1280x720
[Seedance T2V] → [Save Video]
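
If you ever need to replicate the Save Video step outside ComfyUI, it maps to a plain streamed download. A sketch assuming the generation step returned a URL (the URL below is a placeholder):

import requests

video_url = "https://example.com/output.mp4"  # placeholder: URL from the generation call
with requests.get(video_url, stream=True, timeout=120) as r:
    r.raise_for_status()
    with open("seedance_t2v.mp4", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 16):
            f.write(chunk)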

Image-to-Video Workflow

Load an image, add a motion prompt, and generate a video from the static frame.

[Load Image] → [Seedance I2V]
[Seedance API Config] → [Seedance I2V]
   → motion_prompt: "camera pans..."
   → model: seedance-2.0
[Seedance I2V] → [Save Video]
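
If you rebuild this outside ComfyUI, many I2V endpoints accept the source frame as a base64 data URI; whether a given Seedance provider does is provider-specific, so check its docs. A minimal encoding sketch:

import base64

with open("first_frame.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("ascii")
image_data_uri = f"data:image/png;base64,{b64}"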

Generate + Upscale Pipeline

Generate with Seedance, then upscale locally using Real-ESRGAN or similar nodes.

[Seedance T2V] → [Video to Frames]
[Video to Frames] → [Upscale (ESRGAN)]
[Upscale (ESRGAN)] → [Frames to Video]
[Frames to Video] → [Save Video]
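
The frame-splitting and reassembly steps are plain video I/O. A standalone OpenCV sketch of the same round trip, with your upscaler of choice slotted in where noted:

# Standalone equivalent of Video to Frames -> process -> Frames to Video
import cv2

cap = cv2.VideoCapture("seedance_t2v.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)  # run Real-ESRGAN (or similar) on each frame here
cap.release()
assert frames, "no frames decoded"

height, width = frames[0].shape[:2]
writer = cv2.VideoWriter("upscaled.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))
for frame in frames:
    writer.write(frame)
writer.release()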

Multi-Shot Storyboard

Chain multiple Seedance nodes with different prompts to generate an entire sequence.

[Seedance T2V: "Shot 1"] → [Concat]
[Seedance T2V: "Shot 2"] → [Concat]
[Seedance T2V: "Shot 3"] → [Concat]
[Concat] → [Add Transitions]
[Add Transitions] → [Save Video]
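
Outside ComfyUI, the concat step maps cleanly onto ffmpeg's concat demuxer. A sketch (ffmpeg must be on your PATH; filenames are placeholders, and lossless -c copy assumes all clips share the same codec and resolution, which holds for clips from the same generator):

# Concatenate the three shots losslessly with ffmpeg's concat demuxer
import subprocess

clips = ["shot1.mp4", "shot2.mp4", "shot3.mp4"]
with open("clips.txt", "w") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")

subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", "clips.txt", "-c", "copy", "storyboard.mp4"],
    check=True,
)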

ComfyUI vs Dreamina vs Direct API

Each approach has different strengths. Choose based on your technical skill and workflow needs.

Feature | ComfyUI Nodes | Dreamina Web | Direct API
Ease of Use | Moderate | Easiest | Developer-level
Workflow Chaining | Full node graph | None | Custom scripting
Local Post-Processing | Built-in (upscale, interp) | None (export first) | Custom code
Batch Generation | Excellent | Limited | Excellent (scripted)
Setup Difficulty | Medium | None (browser) | High
Free Tier Access | Via API credits | Daily credits | 2M free tokens (BytePlus)

Common Issues & Fixes

Fixes for the most frequently encountered problems when running Seedance nodes in ComfyUI.

"Seedance node not found" after installation
Ensure ComfyUI was fully restarted (not just browser refresh). Check that the node folder exists in ComfyUI/custom_nodes/ComfyUI-Seedance/. Run pip install -r requirements.txt inside the node folder. Check the ComfyUI console for import errors.
API key authentication failure (401 error)
Verify your API key is correct and has not expired. Check that you selected the right provider (BytePlus, fal.ai, or Replicate) in the config node. Some providers require separate API keys for video vs. image endpoints. Regenerate your key in the provider dashboard if needed.
Generation timeout (video never returns)
Seedance 2.0 generation can take 45-90+ seconds, and the default ComfyUI timeout may be too short. In the Seedance node settings, increase timeout_seconds to 180 or higher (the sketch after this list shows the equivalent fix at the HTTP layer). Also check your API provider's status page for outages.
Output video is corrupted or zero-length
This usually means the API returned an error that was not properly caught. Check ComfyUI's console output for the full error message. Common causes: insufficient credits, invalid prompt (too short/long), or the model is temporarily unavailable. Retry with a simple prompt to isolate the issue.
Cannot install nodes on macOS / "git not found"
Install Git via Homebrew: brew install git. If using the portable ComfyUI version, ensure Python and Git are in your system PATH. On Apple Silicon Macs, some pip dependencies may need arch -arm64 prefix for correct architecture builds.
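
For the timeout issue above: if you script the API yourself instead of using the node, the same fix applies at the HTTP layer. A sketch with requests; the endpoint is a placeholder, not a documented URL.

import requests

resp = requests.post(
    "https://api.example.com/v1/video/generations",  # placeholder endpoint
    json={"prompt": "..."},
    timeout=180,  # generous timeout for 45-90+ second generations
)
resp.raise_for_status()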

ComfyUI Questions

Can I run Seedance 2.0 locally in ComfyUI?
As of February 2026, Seedance 2.0 model weights are not publicly available for download. ComfyUI nodes currently connect to the Seedance API for server-side inference. If ByteDance releases the weights on HuggingFace, local inference nodes will follow. For now, local video generation alternatives include Wan 2.1/2.6 (open source).

What GPU do I need?
For API-based nodes, any GPU that runs ComfyUI is sufficient (inference is remote). If local models become available in the future, expect to need at minimum 24GB of VRAM (NVIDIA RTX 4090 or A100). Current open-source video models of similar capability typically require 16-24GB of VRAM for 720p generation.

Is ComfyUI cheaper than other ways of using Seedance?
ComfyUI itself is free and open source. Seedance API costs are the same whether you call the API from ComfyUI or from any other client. The advantage of ComfyUI is workflow automation: you can chain generation with upscaling, interpolation, and other post-processing without manual steps, potentially saving time on repetitive tasks.

Can I run ComfyUI with Seedance nodes on a cloud GPU?
Yes, this is a popular setup. Run ComfyUI on a cloud GPU instance (RunPod or Vast.ai) and access it via the web UI. The Seedance API nodes work normally since they make outbound API calls regardless of where ComfyUI runs. This setup also gives you access to powerful local processing nodes (upscaling, RIFE interpolation) on the cloud GPU.

Can I animate a still image with Seedance in ComfyUI?
Yes. The Seedance I2V node accepts an image input (from a Load Image node or another node's output) plus a text prompt describing the desired motion. Connect your image source to the Seedance I2V node, add a motion prompt, and connect the output to a Save Video node. See the workflow examples above for the node graph layout.

Build Your Video Pipeline

Combine Seedance generation with ComfyUI's node ecosystem for unlimited creative workflows.