The complete step-by-step beginner's guide to Seedance 2.0. Learn how to create stunning AI videos from text, images, and audio references — on every platform available.
Seedance 2.0 is ByteDance's latest AI video generation model, representing a significant leap in multimodal video synthesis. Released in early 2026, it supports text-to-video, image-to-video, and multi-reference generation with industry-leading character consistency and physics simulation.
Seedance 2.0 accepts up to 12 reference files simultaneously: images, audio clips, and text prompts can all be combined in a single generation request. The @tag system lets you bind each reference to a specific element.
Native multi-shot support maintains character identity across scenes. Upload a reference face or body, tag it with @character, and reuse it across different prompts for consistent storytelling.
Unique beat-sync technology analyzes your audio reference and synchronizes camera movements, cuts, and motion intensity to the rhythm. Perfect for music videos and promotional content.
Seedance 2.0 is available through multiple official and third-party platforms. Each offers different features, pricing, and interfaces. Here is a complete breakdown of every way to access the model.
ByteDance's official creative platform at dreamina.com. This is the most feature-complete interface for Seedance 2.0. It offers text-to-video, image-to-video, multi-reference generation, audio sync, and all advanced settings. Available globally with a credit-based system; new users receive daily free credits. Both a free tier and paid subscriptions are offered: Standard at ~69 RMB/month (about $9.60) and Pro at ~$45/month.
Little Skylark, ByteDance's mobile-first video creation app, is available on iOS and Android. It provides a streamlined interface for Seedance 2.0 generation with generous free daily credits, and its simpler workflow is ideal for quick generations and mobile content creation. It supports text-to-video and image-to-video modes with basic settings control.
An AI-powered video editing platform that integrates Seedance 2.0 as one of its generation engines. Offers a collaborative editing workflow where AI-generated clips can be directly placed into a timeline editor. Particularly useful for content creators who need to combine multiple AI clips with transitions, text overlays, and effects in a single workflow.
Seedance 2.0 is available through several API providers including fal.ai, Replicate, and WaveSpeed. These platforms offer pay-per-use pricing, REST API access, webhook callbacks, and integration-friendly documentation. Ideal for developers building applications, automating content pipelines, or running batch generation workflows.
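As a sketch of what an API integration might look like, the helper below assembles a request payload with the parameters covered in this guide. The field names (`prompt`, `duration`, `webhook_url`, and so on) are illustrative assumptions, not a documented schema; check your provider's docs (fal.ai, Replicate, WaveSpeed) for the exact endpoint and parameter names.

```python
def build_generation_request(prompt, duration=5, resolution="720p",
                             aspect_ratio="16:9", seed=None, webhook_url=None):
    """Assemble a hypothetical request payload for a Seedance 2.0 API call.

    Field names are illustrative; the real schema varies by provider.
    """
    if duration not in (5, 10, 15):
        raise ValueError("duration must be 5, 10, or 15 seconds")
    payload = {
        "prompt": prompt,
        "duration": duration,
        "resolution": resolution,
        "aspect_ratio": aspect_ratio,
    }
    if seed is not None:
        payload["seed"] = seed  # fix the seed to reproduce a result
    if webhook_url is not None:
        payload["webhook_url"] = webhook_url  # async callback on completion
    return payload
```

The payload would then be sent with an authenticated POST to the provider's generation endpoint, either synchronously or with the webhook callback for batch pipelines.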
Follow these steps to generate your first Seedance 2.0 video. We will use Dreamina as the primary platform, but the concepts apply to all platforms.
Navigate to dreamina.com and click "Sign Up" in the top-right corner. You can register with an email address, Google account, or Apple ID. If you are using the Little Skylark app, download it from the App Store or Google Play and sign up within the app. Account creation is free and takes less than 60 seconds. After registration, you will receive an initial batch of free credits to start generating immediately.
Once logged in, navigate to the "Video Generation" or "AI Video" section. Look for the model selector dropdown — it may show options like "Seedance 1.0", "Seedance 2.0", or other models. Select Seedance 2.0 specifically. On Dreamina, this is typically found at the top of the generation panel. On third-party platforms like fal.ai, you will specify the model in your API request parameters. Make sure you are on version 2.0, as 1.0 has significantly fewer capabilities.
Seedance 2.0 supports multiple input modes. Text-to-Video (T2V) generates video purely from a text prompt — best for creative exploration and original content. Image-to-Video (I2V) takes a reference image and animates it according to your text prompt — ideal when you have a specific visual starting point. Multi-Reference mode accepts up to 12 input files (images, audio) combined with text — this is the most powerful mode for production-quality output. Choose the mode that fits your project needs.
This is the most critical step. A strong Seedance 2.0 prompt follows a proven formula: Subject + Motion + Scene + Camera + Style. Be specific and descriptive. Instead of writing "a cat walking", write "A ginger tabby cat walks gracefully along a rain-soaked cobblestone street in Paris, reflections shimmering in puddles. Tracking shot at eye level, slow dolly forward. Cinematic 35mm film grain, golden hour warm lighting, shallow depth of field." The more detail you provide, the better the result. Use our Prompt Formula Guide for templates and examples.
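The Subject + Motion + Scene + Camera + Style formula can be treated as a simple template. The sketch below shows one way to compose the five components into a single prompt string; the function name and punctuation choices are my own, not part of any Seedance API.

```python
def build_prompt(subject, motion, scene, camera, style):
    """Compose a video prompt from the Subject + Motion + Scene + Camera + Style formula."""
    return f"{subject} {motion} {scene}. {camera}. {style}."

prompt = build_prompt(
    subject="A ginger tabby cat",
    motion="walks gracefully",
    scene="along a rain-soaked cobblestone street in Paris, reflections shimmering in puddles",
    camera="Tracking shot at eye level, slow dolly forward",
    style="Cinematic 35mm film grain, golden hour warm lighting, shallow depth of field",
)
```

Keeping the components separate makes it easy to iterate on one element (say, the camera move) while holding the rest of the prompt fixed.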
If using Image-to-Video or Multi-Reference mode, upload your reference files. For character consistency, upload a clear, well-lit reference photo of the character. For style references, upload an image that represents your desired aesthetic. For audio sync, upload an MP3 or WAV file. Use the @tag system to bind each reference: @character_name for faces, @style for aesthetics, @audio for beat synchronization. You can combine up to 12 references in a single generation.
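Before submitting a multi-reference generation, it can help to sanity-check your bindings against the rules above: at most 12 references, each bound with a tag that starts with `@`. This validator is a local pre-flight sketch under those assumptions, not a platform API.

```python
MAX_REFERENCES = 12  # Seedance 2.0's stated per-request limit

def validate_references(refs):
    """Check a list of (tag, filename) reference bindings before submitting.

    Tag names like '@character' and '@audio' follow this guide's convention;
    the exact accepted tags may vary by platform.
    """
    if len(refs) > MAX_REFERENCES:
        raise ValueError(f"at most {MAX_REFERENCES} references allowed, got {len(refs)}")
    for tag, filename in refs:
        if not tag.startswith("@"):
            raise ValueError(f"tag {tag!r} must start with '@'")
    return True
```

Catching a malformed tag or an over-long reference list locally is cheaper than burning credits on a generation that ignores half its inputs.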
Adjust settings to fine-tune your output. Duration: Choose between 5 seconds (~30 credits), 10 seconds (~60 credits), or 15 seconds (~90 credits). Resolution: Select from standard (720p) or high-quality (1080p). Aspect Ratio: Pick 16:9 for cinematic, 9:16 for social media vertical, or 1:1 for square. Seed: Set a specific seed number to reproduce results, or leave random for variety. CFG Scale: Higher values follow your prompt more strictly; lower values give the model more creative freedom.
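The credit costs above (roughly 30/60/90 credits for 5s/10s/15s) scale linearly with duration, so budgeting a batch is simple arithmetic. A minimal sketch, using the approximate figures from this guide:

```python
# Approximate Dreamina credit costs per clip, keyed by duration in seconds.
CREDITS_PER_DURATION = {5: 30, 10: 60, 15: 90}

def estimate_credits(duration_s, clips=1):
    """Estimate total credit cost for a batch of clips at a given duration."""
    if duration_s not in CREDITS_PER_DURATION:
        raise ValueError("duration must be 5, 10, or 15 seconds")
    return CREDITS_PER_DURATION[duration_s] * clips
```

For example, iterating five times on a 5-second draft costs about the same as one final 15-second render plus one retry, which is why drafting at 5s/720p and finishing at full quality is the cheaper workflow.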
Click the "Generate" button. Generation typically takes 30 seconds to 3 minutes depending on duration, resolution, and server load. While waiting, you can queue additional generations. Once complete, preview the result directly in the platform. If the output is not satisfactory, try adjusting your prompt wording, changing the seed value, or modifying the CFG scale. Download the final video in MP4 format for use in your projects.
Whether you are just starting out or pushing the boundaries of what Seedance 2.0 can do, these targeted tips will elevate your results.
When you upload multiple references, use the @tag system to bind them. This tells the model exactly which reference applies to which element: @actor_face for character identity, @background for environment style, @music for beat sync.
Copy these ready-to-use prompts directly into Seedance 2.0. Each follows the proven Subject + Motion + Scene + Camera + Style formula.
Each setting in Seedance 2.0 affects your output in specific ways. Here is a detailed explanation of every parameter and when to adjust it.
| Setting | Options | Default | When to Change |
|---|---|---|---|
| Duration | 5s / 10s / 15s | 5s | Use 5s for quick tests and social media clips. 10s for most content. 15s for narrative scenes. |
| Resolution | 720p / 1080p | 720p | Use 720p for drafts and iteration. Switch to 1080p for final renders and production use. |
| Aspect Ratio | 16:9 / 9:16 / 1:1 | 16:9 | 16:9 for YouTube/cinema, 9:16 for TikTok/Reels/Shorts, 1:1 for Instagram posts. |
| CFG Scale | 1 – 15 | 7 | Higher (8-12) for strict prompt adherence. Lower (3-6) for artistic freedom. Default works for most. |
| Seed | Random / Fixed number | Random | Fix the seed when iterating on a good result. Change seed for fresh variations. |
| Motion Intensity | Low / Medium / High | Medium | Low for portraits and product shots. Medium for general use. High for action and dance. |
| Audio Sync | Off / On | Off | Enable when you upload an audio reference and want motion synced to the beat. |
Even experienced users run into these problems. Here are the most common Seedance 2.0 issues and their solutions.
If character faces drift or distort, upload a clearer reference photo and bind it with @face. Keep the face at a consistent angle throughout the prompt. Avoid describing rapid head turns. Increase CFG scale to 8-9 for stricter adherence to your reference.

If the model ignores a reference, double-check its @tag binding. Try increasing CFG scale to enforce prompt adherence.

Quick answers to the most common questions about using Seedance 2.0.
No, you can start using Seedance 2.0 for free. Dreamina offers daily free credits, the Little Skylark app provides generous free generations, and some third-party platforms offer free tiers. However, free credits are limited. For heavier usage, plans start at approximately $9.60/month (69 RMB) on Dreamina. See our complete free access guide for all methods.
Seedance 2.0 supports generating videos up to 15 seconds per clip on Dreamina. The available options are 5 seconds, 10 seconds, and 15 seconds. For longer videos, you can chain multiple clips together using the multi-shot feature or a video editor. Each duration tier uses different credit amounts: approximately 30 credits for 5s, 60 for 10s, and 90 for 15s.
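Since each clip tops out at 15 seconds, planning a longer video is a matter of dividing the target runtime into clips. A small sketch of that arithmetic (the function is mine, not a platform feature):

```python
import math

MAX_CLIP_SECONDS = 15  # longest single Seedance 2.0 clip on Dreamina

def plan_clips(total_seconds):
    """Return (number of clips, length of the final clip in seconds)
    needed to cover a target runtime using max-length clips."""
    clips = math.ceil(total_seconds / MAX_CLIP_SECONDS)
    last = total_seconds - (clips - 1) * MAX_CLIP_SECONDS
    return clips, last
```

So a 40-second video needs three clips (15s + 15s + 10s), chained with the multi-shot feature or stitched in a video editor.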
Commercial usage rights depend on the plan you are subscribed to. Free tier outputs typically have restrictions. Standard and Pro plans on Dreamina include commercial usage rights. Always check the latest terms of service on your specific platform, as licensing terms may vary between Dreamina, Little Skylark, and third-party API providers.
Seedance 2.0 is a major upgrade. Key improvements include: multi-reference input (up to 12 files vs. single reference), native character consistency across shots, audio-driven generation with beat sync, the @tag binding system, higher resolution output (up to 1080p), longer duration support (15s vs. 5-8s), improved physics simulation, and significantly better prompt adherence. The model architecture was rebuilt from the ground up.
Seedance 2.0 accepts JPG, JPEG, PNG, and WebP image formats for reference uploads. For best results, use high-resolution images (at least 1024x1024 pixels) with clear subjects and good lighting. Avoid heavily compressed images, as compression artifacts can carry into the generated video. Audio references accept MP3 and WAV formats, with files up to 10MB in size.
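These format and resolution recommendations are easy to check before uploading. The sketch below is a local pre-flight check based on the guidance above; in practice you would read the actual dimensions with an image library (e.g. Pillow's `Image.size`).

```python
MIN_DIMENSION = 1024  # recommended minimum edge length for reference images
ALLOWED_FORMATS = {"jpg", "jpeg", "png", "webp"}

def check_reference_image(filename, width, height):
    """Return a list of problems with a reference image (empty list = looks fine)."""
    problems = []
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in ALLOWED_FORMATS:
        problems.append(f"unsupported format: .{ext}")
    if min(width, height) < MIN_DIMENSION:
        problems.append(
            f"resolution {width}x{height} is below the recommended {MIN_DIMENSION}px minimum"
        )
    return problems
```

Running this over a folder of candidate references takes seconds and avoids wasting a generation on an image the model will struggle with.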
Yes, Seedance 2.0 is available through REST APIs on platforms like fal.ai, Replicate, and WaveSpeed. These APIs support text-to-video and image-to-video generation with full parameter control. Most offer both synchronous and asynchronous (webhook) modes. See our complete API guide for authentication, endpoints, code examples, and pricing details.
On Dreamina, credit costs vary by duration: approximately 30 credits for a 5-second video, 60 credits for 10 seconds, and 90 credits for 15 seconds. Higher resolution options may cost additional credits. Free users receive a daily allocation of credits. Standard subscribers get a monthly pool of credits included in their plan. See our pricing guide for a complete cost breakdown.
Yes, this is one of Seedance 2.0's strongest features. Upload a clear reference image of your character and use the @character tag in your prompt. The model will maintain facial features, body proportions, and clothing across generations. For best results, use a well-lit, front-facing reference photo and keep the @tag consistent across all your prompts in the sequence.
Explore our library of 500+ copy-paste prompts, use the prompt generator, or dive into advanced techniques.