Seedance 2.0 Is Live on AI Video Maker: First Workflow Guide

AI Video Team

Quick answer

Seedance 2.0 on AI Video Maker is now live for text-to-video and image-to-video creation. In the current rollout, creators can generate 4s, 5s, 6s, 8s, 10s, or 15s clips, choose 480P or 720P, and use 16:9, 9:16, or 1:1 ratios for text-to-video. Start with Text to Video when your scene begins as a prompt, or switch to Image to Video when you already have a strong first frame.

Seedance 2.0 on AI Video Maker: what is live now

ByteDance officially launched Seedance 2.0 on February 12, 2026 and positioned it as a unified multimodal model for text, image, audio, and video inputs, with an emphasis on instruction following, motion quality, and audio-video generation (ByteDance Seed launch post, ByteDance Seed model page). TechCrunch later reported on March 15, 2026 that ByteDance had delayed parts of its broader global rollout. That makes the current AI Video Maker launch practical news for creators who want a working Seedance 2.0 workflow today instead of waiting on a changing external access path (TechCrunch).

As of April 10, 2026, the current AI Video Maker rollout exposes the two workflows most creators use first, along with credit-based billing:

  • Text-to-video with prompt input, 480P or 720P output, 4s, 5s, 6s, 8s, 10s, or 15s durations, and 16:9, 9:16, or 1:1 aspect ratios
  • Image-to-video with one uploaded source image and optional prompt guidance
  • Credits billed by duration and resolution: 10 credits/second at 480P, or 20 credits/second at 720P

Entity definitions

  • Seedance 2.0: ByteDance Seed's multimodal video model, designed for creation and editing workflows that can use text, images, audio, and video as inputs.
  • Text-to-video: A workflow where a written scene prompt becomes the main instruction for generating a video clip.
  • Image-to-video: A workflow where an uploaded image becomes the visual anchor and the model animates it with optional prompt guidance.
  • Credit-per-second billing: AI Video Maker's pricing logic for advanced models where total credits scale with output duration and selected resolution.

Why Seedance 2.0 matters for creators

The official Seedance 2.0 positioning is less about novelty clips and more about controlled cinematic generation. ByteDance emphasizes reference-driven control, motion stability, audio-video generation, and director-style handling of lighting, shadow, and camera movement (ByteDance Seed model page). In practical creator terms, that makes Seedance 2.0 a strong fit for:

  • short launch trailers
  • product teaser shots
  • cinematic social clips
  • storyboarded ads
  • first-frame animation where subject consistency matters

The key difference is workflow style. If your idea depends on framing, motion language, or a locked hero image, Seedance 2.0 is usually more interesting than a pure sandbox model. If you are still brainstorming loosely, narrow the scene first before you spend credits on longer or higher-resolution runs.

Current Seedance 2.0 settings on AI Video Maker

Use this as the practical settings checklist for the current live product surface:

  • Workflows: Text-to-video and image-to-video
  • Duration options: 4s, 5s, 6s, 8s, 10s, 15s
  • Resolution: 480P and 720P
  • Text-to-video aspect ratios: 16:9, 9:16, 1:1
  • Credit cost: 10 credits/s at 480P, 20 credits/s at 720P
  • Login requirement: Sign-in required before generation
  • Current image restriction: Real-person images are not supported for Seedance 2.0 image-to-video

That last restriction matters. If your source image is a real-person photo, Seedance 2.0 in AI Video Maker currently blocks that workflow. Use a product image, character design, stylized illustration, environment frame, or non-real-person visual instead.

Text-to-video workflow with Seedance 2.0

Start in Text to Video when the idea begins as a scene description, ad concept, or storyboard note.

1. Choose the aspect ratio first

Seedance 2.0 text-to-video in AI Video Maker supports 16:9, 9:16, and 1:1. Decide that before refining the prompt because framing changes how you describe the shot.

  • Use 9:16 for TikTok, Reels, Shorts, and mobile ads
  • Use 16:9 for landing pages, YouTube, and widescreen storyboards
  • Use 1:1 for centered compositions that may need flexible social crops
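The platform-to-ratio guidance above can be sketched as a tiny lookup. This is an illustrative helper, not an AI Video Maker API; the platform names are assumptions for the example.

```python
# Hypothetical mapping from target platform to the Seedance 2.0
# text-to-video aspect ratios described in this guide.
PLATFORM_RATIOS = {
    "tiktok": "9:16",
    "reels": "9:16",
    "shorts": "9:16",
    "youtube": "16:9",
    "landing_page": "16:9",
    "square_social": "1:1",
}

def pick_ratio(platform: str) -> str:
    """Return the recommended aspect ratio, defaulting to 16:9."""
    return PLATFORM_RATIOS.get(platform.lower(), "16:9")

print(pick_ratio("TikTok"))  # → 9:16
```

Deciding the ratio before writing the prompt keeps the framing language consistent with the output canvas.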

2. Keep the first prompt compact

Seedance 2.0 does not need an overloaded prompt to show its strengths. Start with one subject, one action, one camera cue, and one mood.

Try this format:

Subject: a premium sneaker on a reflective platform
Action: the shoe slowly rotates as mist moves behind it
Camera: slow push-in with shallow depth of field
Lighting: dark studio with pink and teal edge light
Mood: cinematic, premium, calm
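If you reuse this format across many clips, it can help to assemble the fields programmatically. A minimal sketch, assuming you keep the five labels from the format above; this is not an official prompt schema.

```python
# Assemble the structured fields into a single text-to-video prompt string.
def build_prompt(subject: str, action: str, camera: str,
                 lighting: str, mood: str) -> str:
    parts = {
        "Subject": subject,
        "Action": action,
        "Camera": camera,
        "Lighting": lighting,
        "Mood": mood,
    }
    # dicts preserve insertion order, so labels stay in the intended sequence
    return "\n".join(f"{label}: {value}" for label, value in parts.items())

prompt = build_prompt(
    subject="a premium sneaker on a reflective platform",
    action="the shoe slowly rotates as mist moves behind it",
    camera="slow push-in with shallow depth of field",
    lighting="dark studio with pink and teal edge light",
    mood="cinematic, premium, calm",
)
print(prompt)
```

Keeping one field per concern makes it easy to swap a single line (say, the camera cue) between test runs without rewriting the whole prompt.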

3. Draft in 480P before moving to 720P

Because credits scale directly with resolution, the cheaper first pass is usually the right one. Run the concept in 480P first to judge motion, framing, and prompt fit. If the direction is already strong, move the same idea to 720P for the keeper pass.

Image-to-video workflow with Seedance 2.0

Use Image to Video when subject consistency matters more than open-ended exploration. A strong source frame makes it easier to hold onto product shape, character silhouette, brand colors, or composition.

1. Start with a clean source image

The best Seedance 2.0 first-frame inputs are:

  • product renders or product photos
  • stylized character art
  • fashion portraits that are not real-person source photos
  • environment stills
  • poster-style concept frames

Avoid busy collages or images with too many competing focal points. Cleaner source material gives you a cleaner motion test.

2. Tell the model what to preserve

For image-to-video, the most valuable prompt line is often the constraint line. State what must stay stable.

Example:

Animate the uploaded image as an 8-second cinematic reveal.
Keep the product shape, logo placement, and color palette consistent.
Add a slow push-in, soft reflective highlights, and subtle background motion.
Do not add extra objects, extra text, or material distortion.

3. Use Seedance 2.0 when the first frame already carries the idea

If the core value is in the starting visual, image-to-video is the faster route. If the core value is in the scene description, camera rhythm, or story beat, text-to-video is usually the better first test.

Seedance 2.0 credits on AI Video Maker

Seedance 2.0 uses credit-per-second pricing in AI Video Maker:

  • 480P: 10 credits/second (4s = 40, 8s = 80, 15s = 150)
  • 720P: 20 credits/second (4s = 80, 8s = 160, 15s = 300)

That pricing makes 480P the sensible exploration mode. Use 720P when the motion, composition, and prompt direction are already validated. If you need to plan usage before a larger batch, check the current Pricing page first.
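The credit math above is simple enough to plan in code before a batch. A small sketch of the rates documented in this guide; the function itself is illustrative, not part of any AI Video Maker SDK.

```python
# Credit-per-second rates and durations from the current rollout.
RATES = {"480P": 10, "720P": 20}
DURATIONS = {4, 5, 6, 8, 10, 15}

def clip_cost(seconds: int, resolution: str) -> int:
    """Total credits for one clip at the given duration and resolution."""
    if seconds not in DURATIONS:
        raise ValueError(f"unsupported duration: {seconds}s")
    return seconds * RATES[resolution]

# Draft-then-upgrade plan: one 480P test pass plus one 720P keeper pass.
draft = clip_cost(8, "480P")   # 80 credits
final = clip_cost(8, "720P")   # 160 credits
print(draft + final)           # 240 credits total
```

Compare that with iterating twice at 720P (320 credits for the same two 8s runs) and the 480P-first habit pays for itself quickly.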

What AI Video Maker does not expose yet

ByteDance's official Seedance 2.0 materials describe text, image, audio, and video inputs in the broader model family. The current AI Video Maker rollout is narrower and more practical: text-to-video and image-to-video first. That distinction matters because it keeps the blog accurate to the product you can use right now rather than the full upstream model surface.

For now, treat AI Video Maker's Seedance 2.0 workflow as a short-form cinematic generation option with two entry points:

  • text prompt first
  • first-frame image first

If audio-led or video-reference-led inputs arrive later, they should be documented as a separate update rather than implied here.

First Seedance 2.0 prompts to try

Product teaser

A matte black watch on a glossy pedestal, slow cinematic push-in,
subtle drifting fog, premium lighting with pink and teal reflections,
high-detail textures, luxury launch trailer mood.

Vertical creator ad

A creator at a clean desk reacts to a new product reveal on a phone,
soft daylight from the side, handheld but stable camera feel,
energetic short-form ad pacing, vertical composition.

Stylized image-to-video animation

Animate the uploaded illustration into an 8-second cinematic shot.
Keep the subject silhouette and costume details consistent.
Add slow environmental motion, slight hair movement, and a subtle camera push.
Do not add extra characters or deform the face.

Where to go next

  • Text to Video: Use this when your Seedance 2.0 workflow starts with language, not a locked first frame.
  • Image to Video: Use this when you already have the visual anchor and want controlled motion from it.
  • Pricing: Use this before running repeated 720P tests or longer batches.
  • Blog: Follow additional model launch notes, prompt guides, and workflow updates.

Frequently Asked Questions

What is Seedance 2.0 on AI Video Maker?

Seedance 2.0 on AI Video Maker is a live model option for text-to-video and image-to-video generation. In the current rollout, it supports 480P and 720P output, short-form durations up to 15s, and flexible framing for text-to-video.

Does Seedance 2.0 on AI Video Maker support both text-to-video and image-to-video?

Yes. The current product rollout supports both workflows. Text-to-video is the better choice for scene-led ideas, while image-to-video is the better choice when you want to animate an existing visual anchor.

How many credits does Seedance 2.0 use?

Seedance 2.0 uses 10 credits per second at 480P and 20 credits per second at 720P. That means an 8-second clip costs 80 credits at 480P or 160 credits at 720P.

Which aspect ratios are available for Seedance 2.0 on AI Video Maker?

For text-to-video, the current UI supports 16:9, 9:16, and 1:1. Image-to-video currently follows the source-frame workflow rather than exposing the same ratio selector in the live form.

Can I use real-person images with Seedance 2.0 image-to-video?

Not in the current AI Video Maker rollout. Seedance 2.0 currently does not support generating videos from real-person images, so use product visuals, stylized art, or other non-real-person source images instead.

Does AI Video Maker expose Seedance 2.0 audio or video inputs yet?

Not in the current rollout described here. ByteDance documents broader multimodal inputs for Seedance 2.0, but AI Video Maker currently exposes text-to-video and image-to-video only.

Should I start in 480P or 720P?

Start in 480P unless you already know the concept is strong. It is the more efficient way to test motion and prompt fit, then move to 720P only when the scene is worth the higher-credit final pass.

Sources