
How to Use Seedance 2

The complete step-by-step beginner's guide to Seedance 2.0. Learn how to create stunning AI videos from text, images, and audio references — on every platform available.

What Is Seedance 2.0?

Seedance 2.0 is ByteDance's latest AI video generation model, representing a significant leap in multimodal video synthesis. Released in early 2026, it supports text-to-video, image-to-video, and multi-reference generation with industry-leading character consistency and physics simulation.

🎬 Multi-Modal Input

Accepts up to 12 reference files simultaneously: images, audio clips, and text prompts can all be combined in a single generation request. The @tag system lets you bind each reference to a specific element.

🌟 Character Consistency

Native multi-shot support maintains character identity across scenes. Upload a reference face or body, tag it with @character, and reuse it across different prompts for consistent storytelling.

🎧 Audio-Driven Generation

Unique beat-sync technology analyzes your audio reference and synchronizes camera movements, cuts, and motion intensity to the rhythm. Perfect for music videos and promotional content.

Platforms for Accessing Seedance 2.0

Seedance 2.0 is available through multiple official and third-party platforms. Each offers different features, pricing, and interfaces. Here is a complete breakdown of every way to access the model.

💻 Dreamina (Official Web Platform)

ByteDance's official creative platform at dreamina.com. This is the most feature-complete interface for Seedance 2.0, offering text-to-video, image-to-video, multi-reference generation, audio sync, and all advanced settings. It is available globally with a credit-based system, and new users receive daily free credits. Supports both a free tier and Standard/Pro subscriptions at roughly 69 RMB (~$9.60) and ~$45 per month respectively.

📱 Little Skylark App (Xiao Yunque)

ByteDance's mobile-first video creation app. Available on iOS and Android, it provides a streamlined interface for Seedance 2.0 generation with generous free daily credits. The app offers a simpler workflow ideal for quick generations and mobile content creation. Supports text-to-video and image-to-video modes with basic settings control.

ChatCut (AI Video Editor)

An AI-powered video editing platform that integrates Seedance 2.0 as one of its generation engines. Offers a collaborative editing workflow where AI-generated clips can be directly placed into a timeline editor. Particularly useful for content creators who need to combine multiple AI clips with transitions, text overlays, and effects in a single workflow.

🔌 Third-Party API Platforms

Seedance 2.0 is available through several API providers including fal.ai, Replicate, and WaveSpeed. These platforms offer pay-per-use pricing, REST API access, webhook callbacks, and integration-friendly documentation. Ideal for developers building applications, automating content pipelines, or running batch generation workflows.
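
For orientation, here is a minimal sketch of what a pay-per-use request might look like. The endpoint URL, model identifier, and payload fields are illustrative placeholders, not a documented schema; check your provider's documentation (fal.ai, Replicate, or WaveSpeed) for the real values.

    import os
    import requests

    API_URL = "https://api.example-provider.com/v1/video/generations"  # hypothetical endpoint

    payload = {
        "model": "seedance-2.0",   # hypothetical model identifier
        "prompt": ("A ginger tabby cat walks along a rain-soaked cobblestone street in Paris. "
                   "Tracking shot at eye level, slow dolly forward. Cinematic 35mm film grain."),
        "duration": 5,             # seconds
        "resolution": "720p",
        "aspect_ratio": "16:9",
    }
    headers = {"Authorization": f"Bearer {os.environ['PROVIDER_API_KEY']}"}

    response = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    response.raise_for_status()
    print(response.json())  # typically a job id to poll (or receive via webhook), or a video URL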

Complete Tutorial: From Zero to First Video

Follow these steps to generate your first Seedance 2.0 video. We will use Dreamina as the primary platform, but the concepts apply to all platforms.

Step 1: Create Your Account

Navigate to dreamina.com and click "Sign Up" in the top-right corner. You can register with an email address, Google account, or Apple ID. If you are using the Little Skylark app, download it from the App Store or Google Play and sign up within the app. Account creation is free and takes less than 60 seconds. After registration, you will receive an initial batch of free credits to start generating immediately.

Step 2: Select the Seedance 2.0 Model

Once logged in, navigate to the "Video Generation" or "AI Video" section. Look for the model selector dropdown — it may show options like "Seedance 1.0", "Seedance 2.0", or other models. Select Seedance 2.0 specifically. On Dreamina, this is typically found at the top of the generation panel. On third-party platforms like fal.ai, you will specify the model in your API request parameters. Make sure you are on version 2.0, as 1.0 has significantly fewer capabilities.

Step 3: Choose Your Generation Mode

Seedance 2.0 supports multiple input modes. Text-to-Video (T2V) generates video purely from a text prompt — best for creative exploration and original content. Image-to-Video (I2V) takes a reference image and animates it according to your text prompt — ideal when you have a specific visual starting point. Multi-Reference mode accepts up to 12 input files (images, audio) combined with text — this is the most powerful mode for production-quality output. Choose the mode that fits your project needs.
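
As a rough illustration of how the modes differ, the dictionaries below compare the inputs each one needs. The field names are assumptions made purely for comparison; in the web interface the same choices appear as upload slots and toggles rather than JSON.

    # Text-to-Video: prompt only.
    text_to_video = {
        "prompt": "A hot-air balloon drifts over terraced rice fields at dawn",
    }

    # Image-to-Video: prompt plus one starting image to animate.
    image_to_video = {
        "prompt": "The balloon slowly lifts off and drifts out of frame",
        "image": "balloon_photo.jpg",
    }

    # Multi-Reference: prompt plus up to 12 images/audio files, each bound to a @tag.
    multi_reference = {
        "prompt": "@pilot waves from the basket as @balloon rises over the valley",
        "references": [
            {"tag": "pilot", "file": "pilot_face.png"},
            {"tag": "balloon", "file": "balloon_design.jpg"},
        ],
    }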

Step 4: Write Your Prompt

This is the most critical step. A strong Seedance 2.0 prompt follows a proven formula: Subject + Motion + Scene + Camera + Style. Be specific and descriptive. Instead of writing "a cat walking", write "A ginger tabby cat walks gracefully along a rain-soaked cobblestone street in Paris, reflections shimmering in puddles. Tracking shot at eye level, slow dolly forward. Cinematic 35mm film grain, golden hour warm lighting, shallow depth of field." The more detail you provide, the better the result. Use our Prompt Formula Guide for templates and examples.
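
Because the formula is mechanical, it can be expressed as a tiny helper. This is only a convenience sketch for assembling the five slots into one string; Seedance 2.0 simply receives the final text.

    def build_prompt(subject: str, motion: str, scene: str, camera: str, style: str) -> str:
        # Subject + Motion + Scene form the action sentence; Camera and Style
        # follow as their own sentences, mirroring the example prompts in this guide.
        return f"{subject} {motion} {scene}. {camera}. {style}."

    prompt = build_prompt(
        subject="A ginger tabby cat",
        motion="walks gracefully along",
        scene="a rain-soaked cobblestone street in Paris, reflections shimmering in puddles",
        camera="Tracking shot at eye level, slow dolly forward",
        style="Cinematic 35mm film grain, golden hour warm lighting, shallow depth of field",
    )
    print(prompt)  # reproduces the example prompt above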

Step 5: Upload Reference Files (Optional)

If using Image-to-Video or Multi-Reference mode, upload your reference files. For character consistency, upload a clear, well-lit reference photo of the character. For style references, upload an image that represents your desired aesthetic. For audio sync, upload an MP3 or WAV file. Use the @tag system to bind each reference: @character_name for faces, @style for aesthetics, @audio for beat synchronization. You can combine up to 12 references in a single generation.
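
Here is a small sketch of how reference binding fits together; the file names and list shape are hypothetical. Each @tag used in the prompt points at exactly one uploaded file, and the total stays within the 12-reference limit.

    references = [
        {"tag": "lena",  "file": "lena_face.jpg"},       # character identity
        {"tag": "style", "file": "watercolor_ref.png"},  # desired aesthetic
        {"tag": "audio", "file": "chorus_cut.wav"},      # beat-sync source
    ]
    prompt = ("@lena dances through a neon-lit alley rendered in the look of @style, "
              "with cuts and camera moves synced to @audio")

    assert len(references) <= 12  # Seedance 2.0 accepts at most 12 reference files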

Step 6: Configure Generation Settings

Adjust settings to fine-tune your output. Duration: Choose between 5 seconds (~30 credits), 10 seconds (~60 credits), or 15 seconds (~90 credits). Resolution: Select from standard (720p) or high-quality (1080p). Aspect Ratio: Pick 16:9 for cinematic, 9:16 for social media vertical, or 1:1 for square. Seed: Set a specific seed number to reproduce results, or leave random for variety. CFG Scale: Higher values follow your prompt more strictly; lower values give the model more creative freedom.
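
The same controls, summarized as an illustrative settings block (the parameter names are assumptions; in the web interface they are dropdowns). A common pattern is to draft cheaply, then reuse the seed and scale up for the final render.

    draft_settings = {
        "duration": 5,           # seconds: 5 (~30 credits), 10 (~60), 15 (~90)
        "resolution": "720p",    # 720p for drafts, 1080p for final renders
        "aspect_ratio": "16:9",  # 9:16 for vertical social, 1:1 for square posts
        "seed": None,            # None/random for variety; fix it to reproduce a composition
        "cfg_scale": 7,          # higher = stricter prompt adherence, lower = more freedom
    }

    # Once the draft looks right, keep its seed and raise duration and resolution.
    final_settings = {**draft_settings, "duration": 10, "resolution": "1080p", "seed": 421337}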

Step 7: Generate and Review

Click the "Generate" button. Generation typically takes 30 seconds to 3 minutes depending on duration, resolution, and server load. While waiting, you can queue additional generations. Once complete, preview the result directly in the platform. If the output is not satisfactory, try adjusting your prompt wording, changing the seed value, or modifying the CFG scale. Download the final video in MP4 format for use in your projects.

Prompt Writing Tips for Every Level

Whether you are just starting out or pushing the boundaries of what Seedance 2.0 can do, these targeted tips will elevate your results.

BEGINNER: Getting Started Right

Start with clear subjects. Your prompt should always begin by identifying the main subject. "A young woman with short black hair" is far better than "someone in a scene." Be specific about age, clothing, hair, and distinguishing features so the model knows exactly what to render.
Describe one action at a time. Do not try to cram an entire story into a single prompt. Seedance 2.0 handles 5-15 second clips best when focused on a single clear action. "walks slowly toward the camera" beats "walks, then sits down, then picks up a book and reads."
Mention the environment. Always include where the scene takes place. "in a dimly lit Japanese izakaya" provides much more visual information than just "indoors." Environmental details establish mood, lighting, and context automatically.
Include a camera direction. Even a simple camera instruction dramatically improves output. "Static wide shot" or "slow dolly forward" gives the model a clear motion plan instead of randomly choosing camera behavior.

INTERMEDIATE: Refining Your Output

Use cinematic terminology. Seedance 2.0 responds well to film language. Terms like "shallow depth of field," "rack focus," "anamorphic lens flare," "push-in," and "Steadicam tracking" produce more polished, professional results than generic descriptions.
Layer your lighting descriptions. Instead of just "bright" or "dark," describe the quality of light. "Warm golden hour backlight with cool blue fill from the left, volumetric haze catching light rays" creates a specific, cinematic mood that the model can faithfully reproduce.
Leverage the @tag system. When uploading references, use @tag to bind them. This tells the model exactly which reference applies to which element: @actor_face for character identity, @background for environment style, @music for beat sync.
Iterate with seed locking. When you get a result that is close but not perfect, note the seed number. Regenerate with the same seed while making small prompt adjustments. This lets you refine the composition without starting from scratch each time.
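
A sketch of that seed-locked loop is below. generate_video is a stand-in for whichever platform or API call you actually use, not a real library function; the point is that the seed stays fixed while one wording detail changes per attempt, so any difference in output can be attributed to the prompt edit.

    def generate_video(prompt: str, seed: int, **settings) -> None:
        """Stand-in for your generation call (Dreamina UI or an API provider)."""
        print(f"seed={seed} settings={settings} prompt={prompt!r}")

    GOOD_SEED = 421337  # noted from the generation that was close but not perfect

    for tweak in [
        "slow dolly forward, soft morning haze",
        "slow dolly forward, hard noon sunlight",
        "static wide shot, soft morning haze",
    ]:
        generate_video(f"A lone hiker crosses a ridgeline at dawn. {tweak}.",
                       seed=GOOD_SEED, duration=5, resolution="720p")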

ADVANCED: Pushing the Boundaries

Choreograph multi-shot sequences. Use the multi-reference system to maintain character consistency across multiple clips. Generate shot 1, use its last frame as the starting reference for shot 2, and chain them together. This creates narrative sequences that feel like a single continuous production.
Exploit audio-driven generation. Upload a music clip as an audio reference and enable beat sync. Seedance 2.0 will analyze the BPM and rhythm structure to synchronize camera cuts, movement intensity, and scene transitions to the music. This is the fastest path to professional music video output.
Combine negative prompting with CFG control. While Seedance 2.0's negative prompting is less prominent than in image models, you can still specify what to avoid. Pair this with CFG scale adjustments — higher CFG (7-9) for strict prompt adherence, lower (3-5) for more organic, surprising results.
Build custom pipelines with the API. For production workflows, use the Seedance 2.0 API to automate generation. Chain multiple API calls with webhooks to create automated content pipelines: generate a clip, extract the last frame, feed it into the next generation, and assemble the final video programmatically.
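
A minimal sketch of such a pipeline, assuming a local ffmpeg install for frame extraction; generate_clip is a placeholder for your provider's API call, and the shot list is invented for illustration.

    import subprocess

    def last_frame(video_path: str, frame_path: str) -> str:
        # Seek to roughly one second before the end and keep overwriting a single
        # image, so the file left on disk is the clip's final decoded frame.
        subprocess.run(
            ["ffmpeg", "-y", "-sseof", "-1", "-i", video_path,
             "-update", "1", "-q:v", "1", frame_path],
            check=True,
        )
        return frame_path

    def generate_clip(prompt: str, image: str | None = None) -> str:
        """Placeholder: call your Seedance 2.0 API provider here, return the MP4 path."""
        raise NotImplementedError

    shots = [
        "A courier weaves a bicycle through a crowded night market, handheld tracking shot",
        "The courier skids to a stop at a canal bridge, camera orbits 90 degrees",
        "The courier looks up as rain begins to fall, slow push-in on the face",
    ]
    previous_frame = None
    for i, shot in enumerate(shots):
        clip = generate_clip(shot, image=previous_frame)         # I2V after the first shot
        previous_frame = last_frame(clip, f"shot_{i}_last.jpg")  # continuity for the next shot

Because each shot starts from the previous shot's final frame, lighting and character appearance carry over, which is what makes the chained clips feel like one continuous production.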

Example Prompts to Get You Started

Copy these ready-to-use prompts directly into Seedance 2.0. Each follows the proven Subject + Motion + Scene + Camera + Style formula.

CINEMATIC PORTRAIT
A weathered fisherman in his 60s with deep sun lines and a salt-crusted wool cap turns slowly to face the camera, his expression shifting from distant contemplation to a warm, knowing smile. Slow push-in from medium shot to extreme close-up, shallow depth of field with bokeh. Shot on ARRI Alexa, 85mm lens, golden hour backlight creating a warm rim light around his silhouette, desaturated teal shadows, documentary realism.
Portrait Cinematic Character Documentary
URBAN MOTION
A professional street dancer wearing an oversized vintage denim jacket and red sneakers launches into an explosive breakdance sequence on rain-slicked pavement, spinning on one hand while neon reflections streak beneath them. Low angle tracking shot circling the dancer at 180 degrees, speed ramping from normal to slight slow motion during the spin. Cyberpunk noir aesthetic, neon pink and electric blue color palette, volumetric rain, anamorphic lens flares, 24fps cinematic grain.
Action Urban Neon Dynamic
NATURE LANDSCAPE
A vast alpine meadow blanketed in wildflowers at the base of jagged snow-capped peaks sways gently in a warm breeze, clouds rolling across the mountain faces casting traveling shadows over the landscape. Ascending drone shot rising from ground level through the flower field up to reveal the full mountain panorama, smooth and continuous. National Geographic photography style, hyper-real clarity, rich saturated greens and purples, magic hour warm light, 8K resolution feel.
Landscape Nature Drone 4K
PRODUCT SHOWCASE
A luxury mechanical watch with an exposed skeleton dial and rose gold casing rotates slowly on a reflective black obsidian surface, each gear and spring moving with precise mechanical rhythm, light catching the polished metal edges. Macro orbit shot circling the watch at 45 degrees, focus pulling between the dial face and the crown detail. High-end product photography, studio lighting with three-point setup, pure black background, specular highlights, luxury brand commercial aesthetic.
Product Commercial Macro Luxury
Browse All 500+ Prompts →

Understanding Every Seedance 2.0 Setting

Each setting in Seedance 2.0 affects your output in specific ways. Here is a detailed explanation of every parameter and when to adjust it.

Setting | Options | Default | When to Change
Duration | 5s / 10s / 15s | 5s | Use 5s for quick tests and social media clips, 10s for most content, 15s for narrative scenes.
Resolution | 720p / 1080p | 720p | Use 720p for drafts and iteration; switch to 1080p for final renders and production use.
Aspect Ratio | 16:9 / 9:16 / 1:1 | 16:9 | 16:9 for YouTube/cinema, 9:16 for TikTok/Reels/Shorts, 1:1 for Instagram posts.
CFG Scale | 1 – 15 | 7 | Higher (8-12) for strict prompt adherence, lower (3-6) for artistic freedom; the default works for most prompts.
Seed | Random / Fixed number | Random | Fix the seed when iterating on a good result; change it for fresh variations.
Motion Intensity | Low / Medium / High | Medium | Low for portraits and product shots, Medium for general use, High for action and dance.
Audio Sync | Off / On | Off | Enable when you upload an audio reference and want motion synced to the beat.
Pro tip: When starting a new project, always begin with default settings and a 5-second duration. This minimizes credit usage while you iterate on your prompt. Once you have a prompt that produces the composition and style you want, switch to higher resolution and longer duration for the final render.

Common Issues and How to Fix Them

Even experienced users run into these problems. Here are the most common Seedance 2.0 issues and their solutions.

Character faces look distorted or morphing mid-clip
This usually happens when the prompt is vague about facial features. Fix: Upload a clear, high-resolution face reference image and tag it with @face. Keep the face at a consistent angle throughout the prompt. Avoid describing rapid head turns. Increase CFG scale to 8-9 for stricter adherence to your reference.
Camera movement is jittery or unpredictable
Fix: Be more specific with camera instructions. Instead of "the camera moves," write "smooth Steadicam tracking shot moving left to right at walking pace." Use professional camera terms like dolly, crane, orbit, push-in. Set motion intensity to "Low" for smoother camera paths.
Generated video ignores parts of the prompt
Fix: Prompts that are too long can cause the model to skip elements. Keep your prompt focused on the most important elements. Prioritize: Subject first, then motion, then environment, then camera, then style. If using many references, make sure each one has a clear @tag binding. Try increasing CFG scale to enforce prompt adherence.
Text or logos in the video appear garbled
Fix: Current AI video models, including Seedance 2.0, struggle with rendering legible text. If you need text in your video, use Image-to-Video mode with a reference image that contains the pre-rendered text. The model will preserve the text from the source image much more reliably than generating it from scratch. A short sketch for pre-rendering such a title card follows below, after the last issue.
Generation takes extremely long or fails
Fix: Long queue times usually indicate high server demand. Try generating during off-peak hours. If generation fails entirely, check that your uploaded files are in supported formats (JPG, PNG for images; MP3, WAV for audio). File size should be under 10MB. Reduce resolution to 720p for faster processing.
Output looks too similar across different seeds
Fix: If all your outputs look identical, your prompt may be too constrained. Try lowering CFG scale to 4-5 to give the model more creative room. Alternatively, significantly change the environment description or camera angle while keeping the subject the same. Different aspect ratios also produce meaningfully different compositions.
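
For the garbled-text issue above, one way to pre-render legible text as an Image-to-Video reference is a few lines of Pillow; the font path is an assumption and should point at any TTF installed on your system.

    from PIL import Image, ImageDraw, ImageFont

    canvas = Image.new("RGB", (1280, 720), "black")
    draw = ImageDraw.Draw(canvas)
    font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 96)
    draw.text((640, 360), "GRAND OPENING", font=font, fill="white", anchor="mm")
    canvas.save("title_card.png")  # upload this as the Image-to-Video reference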

Frequently Asked Questions

Quick answers to the most common questions about using Seedance 2.0.

Do I need to pay to use Seedance 2.0?
No, you can start using Seedance 2.0 for free. Dreamina offers daily free credits, the Little Skylark app provides generous free generations, and some third-party platforms offer free tiers. However, free credits are limited. For heavier usage, plans start at approximately $9.60/month (69 RMB) on Dreamina. See our complete free access guide for all methods.

How long can a Seedance 2.0 video be?
Seedance 2.0 supports generating videos up to 15 seconds per clip on Dreamina. The available options are 5 seconds, 10 seconds, and 15 seconds. For longer videos, you can chain multiple clips together using the multi-shot feature or a video editor. Each duration tier uses different credit amounts: approximately 30 credits for 5s, 60 for 10s, and 90 for 15s.

Can I use Seedance 2.0 videos commercially?
Commercial usage rights depend on the plan you are subscribed to. Free tier outputs typically have restrictions. Standard and Pro plans on Dreamina include commercial usage rights. Always check the latest terms of service on your specific platform, as licensing terms may vary between Dreamina, Little Skylark, and third-party API providers.

What is different between Seedance 2.0 and Seedance 1.0?
Seedance 2.0 is a major upgrade. Key improvements include: multi-reference input (up to 12 files vs. single reference), native character consistency across shots, audio-driven generation with beat sync, the @tag binding system, higher resolution output (up to 1080p), longer duration support (15s vs. 5-8s), improved physics simulation, and significantly better prompt adherence. The model architecture was rebuilt from the ground up.

Which file formats are supported for reference uploads?
Seedance 2.0 accepts JPG, JPEG, PNG, and WebP image formats for reference uploads. For best results, use high-resolution images (at least 1024x1024 pixels) with clear subjects and good lighting. Avoid heavily compressed images, as compression artifacts can carry into the generated video. Audio references accept MP3 and WAV formats, with files up to 10MB in size.

Can I access Seedance 2.0 through an API?
Yes, Seedance 2.0 is available through REST APIs on platforms like fal.ai, Replicate, and WaveSpeed. These APIs support text-to-video and image-to-video generation with full parameter control. Most offer both synchronous and asynchronous (webhook) modes. See our complete API guide for authentication, endpoints, code examples, and pricing details.

How many credits does a generation cost?
On Dreamina, credit costs vary by duration: approximately 30 credits for a 5-second video, 60 credits for 10 seconds, and 90 credits for 15 seconds. Higher resolution options may cost additional credits. Free users receive a daily allocation of credits. Standard subscribers get a monthly pool of credits included in their plan. See our pricing guide for a complete cost breakdown.

Can Seedance 2.0 keep the same character across multiple videos?
Yes, this is one of Seedance 2.0's strongest features. Upload a clear reference image of your character and use the @character tag in your prompt. The model will maintain facial features, body proportions, and clothing across generations. For best results, use a well-lit, front-facing reference photo and keep the @tag consistent across all your prompts in the sequence.

Ready to Create Your First Seedance 2 Video?

Explore our library of 500+ copy-paste prompts, use the prompt generator, or dive into advanced techniques.