Wan 2.7 Is Live on AI Video Maker: First Workflow Guide

AI Video Team

Quick answer

Wan 2.7 is now live on AI Video Maker for text-to-video and image-to-video creation. In this rollout, you can generate 5-, 10-, or 15-second clips, choose 720P or 1080P output, turn on prompt extension, and set the aspect ratio for text-to-video. Start with Text to Video when the scene begins as a prompt, or Image to Video when you already have a strong first frame.

Wan 2.7 is live: what changed for AI Video Maker users

Wan 2.7 gives AI Video Maker creators a new model option for more expressive short-form video generation. Our current product rollout exposes Wan 2.7 for the two workflows most creators use first: prompt-led text-to-video and first-frame image-to-video.

As of April 6, 2026, the practical settings inside AI Video Maker are:

  • Inputs: text prompt, or one image plus a prompt
  • Durations: 5 seconds, 10 seconds, or 15 seconds
  • Resolution: 720P or 1080P
  • Prompt extension: optional
  • Text-to-video aspect ratios: 16:9, 9:16, 1:1, 4:3, and 3:4
  • Credits: 10 credits per second at 720P, or 15 credits per second at 1080P

Alibaba Cloud Model Studio listed Wan2.7 Video in its April 3, 2026 update, including Wan2.7-T2V, Wan2.7-I2V, Wan2.7-R2V, and Wan2.7-Videoedit. That broader provider surface is useful context, but this article focuses on the settings available in AI Video Maker today.

Entity definitions

  • Wan 2.7: The latest Wan model generation now available in AI Video Maker for text-to-video and image-to-video workflows.
  • Wan model family: Alibaba's video foundation model family for generation and editing tasks; the Wan technical report describes Wan as a suite of video foundation models covering multiple downstream applications (arXiv).
  • Text-to-video: A workflow where the model turns a written scene prompt into a video clip.
  • Image-to-video: A workflow where the model uses an image as the first visual anchor, then animates it according to the prompt.
  • Prompt extension: A rewrite step that expands a short prompt into a more detailed model instruction before generation.

When to use Wan 2.7

Use Wan 2.7 when your goal is a polished short clip rather than a rough prompt sandbox. The strongest first tests are scenes where expressiveness, motion, and shot rhythm matter more than raw speed.

Good starting use cases include:

  • product teaser shots where a still image needs controlled motion
  • cinematic social clips with a clear camera move
  • character or fashion visuals where expression and pose matter
  • short launch ads that need vertical and horizontal versions
  • storyboard tests where you want 5, 10, and 15 second variations from the same concept

If you are still exploring a broad idea, start with a lower-friction draft first. If you already know the subject, action, camera direction, and mood, Wan 2.7 is a better candidate for the keeper pass.

Text-to-video workflow for Wan 2.7

Start with Text to Video when the creative idea exists mainly as language: a shot description, ad concept, scene brief, or moodboard note.

1. Pick the aspect ratio before writing the final prompt

Wan 2.7 text-to-video in AI Video Maker exposes 16:9, 9:16, 1:1, 4:3, and 3:4 aspect ratios. Choose that first because framing changes how the prompt should be written.

For example:

  • Use 9:16 for TikTok, Reels, Shorts, and mobile-first ads.
  • Use 16:9 for landing page hero footage, YouTube, or widescreen storyboards.
  • Use 1:1 when you want a centered product or character composition that can crop cleanly.
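If you batch-render one concept for several destinations, the guidance above reduces to a small lookup. This is a sketch of this article's recommendations only; the platform keys are hypothetical labels, not AI Video Maker settings.

```python
# Platform -> Wan 2.7 text-to-video aspect ratio, following the guidance
# in this guide. Keys are illustrative labels, not product identifiers.
ASPECT_RATIO = {
    "tiktok": "9:16", "reels": "9:16", "shorts": "9:16",  # mobile-first ads
    "youtube": "16:9", "landing_hero": "16:9",            # widescreen footage
    "square_feed": "1:1",                                 # clean center crop
}

def ratio_for(platform: str) -> str:
    """Default to 16:9 for unknown destinations; adjust to taste."""
    return ASPECT_RATIO.get(platform, "16:9")

print(ratio_for("reels"))  # 9:16
```

Deciding the ratio up front matters because, as noted above, the framing changes how the final prompt should be written.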

2. Keep the first prompt compact

Wan 2.7 works best when the prompt gives the model a clear subject, action, camera cue, environment, and mood. Avoid stacking five scenes into one prompt on the first pass.

Try this structure:

Subject: a matte black wireless headphone on a reflective table
Action: the product slowly rotates as soft mist moves behind it
Camera: slow push-in, shallow depth of field
Lighting: dark studio, pink and teal edge lights
Mood: premium, calm, cinematic
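If you script prompts rather than type them by hand, the five-field structure above can be assembled with plain string formatting. This is a minimal sketch of that structure, not an AI Video Maker API; the field names simply mirror this guide.

```python
# Combine the five prompt fields from this guide into one compact
# Wan 2.7 text-to-video prompt. The schema is this article's structure,
# not an official one.
def build_prompt(subject: str, action: str, camera: str,
                 lighting: str, mood: str) -> str:
    return ", ".join([subject, action,
                      f"camera: {camera}",
                      f"lighting: {lighting}",
                      f"mood: {mood}"])

prompt = build_prompt(
    subject="a matte black wireless headphone on a reflective table",
    action="the product slowly rotates as soft mist moves behind it",
    camera="slow push-in, shallow depth of field",
    lighting="dark studio, pink and teal edge lights",
    mood="premium, calm, cinematic",
)
print(prompt)
```

Keeping each field to one phrase enforces the "one subject, one action, one camera cue" discipline recommended throughout this guide.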

3. Use prompt extension for cinematic polish

Prompt extension is useful when your base idea is clear but under-described. Turn it on when you want the model to expand a short creative brief into richer visual direction.

Leave it off when you are testing a precise instruction and want to isolate one variable at a time.

Image-to-video workflow for Wan 2.7

Use Image to Video when subject consistency matters. If you already have a product render, character frame, fashion look, or brand visual, the image gives Wan 2.7 a stronger visual anchor than text alone.

Alibaba's current Wan image-to-video API reference for the broader provider model documents first-frame, first-and-last-frame, and continuation modes, plus parameters such as resolution, duration, and prompt rewriting (Wan image-to-video API reference). In AI Video Maker's current launch surface, the practical path is first-frame image-to-video: upload a source image, write the motion prompt, then choose duration and resolution.
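For planning purposes, the first-frame path described above boils down to a handful of settings. The dictionary below is an illustrative checklist that mirrors this article's options; every key name is a placeholder for this sketch, not the AI Video Maker UI or the provider's API schema.

```python
# Illustrative first-frame image-to-video job settings. Key names are
# made up for this sketch; only the listed value ranges come from the
# rollout described in this article.
job = {
    "mode": "first_frame",            # the current launch-surface path
    "image": "product_render.png",    # your source first frame (assumed name)
    "prompt": ("Animate the uploaded image as a 10-second cinematic product "
               "reveal. Keep the product shape and color consistent."),
    "duration_s": 10,                 # 5, 10, or 15
    "resolution": "1080P",            # 720P or 1080P
    "prompt_extension": False,        # off when isolating one variable
}

def validate(job: dict) -> dict:
    """Check the settings against the ranges listed in this rollout."""
    assert job["duration_s"] in (5, 10, 15), "unsupported duration"
    assert job["resolution"] in ("720P", "1080P"), "unsupported resolution"
    return job

validate(job)
```

Validating duration and resolution before queueing a batch avoids wasting credits on settings the rollout does not expose.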

Use this prompt pattern:

Animate the uploaded image as a 10-second cinematic product reveal.
Keep the product shape and color consistent.
Add a slow camera push-in, subtle reflective highlights, and soft background motion.
Avoid warping the logo, changing the product material, or adding extra text.

That last line matters. For image-to-video, tell the model what not to change: face identity, product shape, brand colors, logo placement, clothing, or background geometry.

Wan 2.7 settings and credit planning

Wan 2.7 uses credits in AI Video Maker because it is an advanced model option. Credits are calculated by duration and resolution:

  • 720P: 10 credits per second (5s = 50 credits, 10s = 100 credits, 15s = 150 credits)
  • 1080P: 15 credits per second (5s = 75 credits, 10s = 150 credits, 15s = 225 credits)

Use 720P for exploration when you are still judging motion and prompt fit. Move to 1080P when the direction is already strong enough to justify the higher-quality pass. If you need to plan usage before a larger batch, check the current Pricing page first.
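The credit math above is simple enough to script when planning a larger batch. This is a sketch using only the rates stated in this article; confirm current rates on the Pricing page before committing credits.

```python
# Wan 2.7 credit math in AI Video Maker, per this article:
# 10 credits/second at 720P, 15 credits/second at 1080P.
RATES = {"720P": 10, "1080P": 15}

def clip_credits(duration_s: int, resolution: str) -> int:
    """Return the credit cost of one clip at the stated rates."""
    if duration_s not in (5, 10, 15):
        raise ValueError("Wan 2.7 clips are 5, 10, or 15 seconds")
    return duration_s * RATES[resolution]

# Example plan: three 720P drafts plus one 1080P keeper pass.
batch = [(5, "720P"), (10, "720P"), (15, "720P"), (10, "1080P")]
total = sum(clip_credits(d, r) for d, r in batch)
print(total)  # 50 + 100 + 150 + 150 = 450
```

A plan like this follows the draft-at-720P, keep-at-1080P workflow recommended above.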

How Wan 2.7 compares with Wan 2.6 in this rollout

The most important practical difference is not a vague "better model" claim. It is the control surface exposed in AI Video Maker.

  • Workflows: text-to-video and image-to-video in both rollouts
  • Duration: 5, 10, or 15 seconds in both rollouts
  • Resolution: 720P or 1080P in both rollouts
  • Prompt extension: supported in both rollouts
  • Text-to-video framing: Wan 2.6 offered single-shot or multi-shot control; Wan 2.7 offers aspect ratio control (16:9, 9:16, 1:1, 4:3, 3:4)
  • Best first test: narrative clip structure on Wan 2.6; expressive visual direction and format-specific framing on Wan 2.7

Alibaba's broader Wan2.7 documentation also includes video editing concepts. For example, the Wan2.7 video editing API reference documents reference media, 720P/1080P output, optional ratio control, and prompt rewriting for the provider's editing endpoint (Wan video editing API reference). That does not mean every provider-level capability is exposed in AI Video Maker's launch UI today, but it helps explain the direction of the Wan 2.7 family.

First prompts to try

Product teaser

A premium skincare bottle on a glossy black surface, slow camera push-in,
soft pink and teal rim lighting, subtle vapor behind the product,
cinematic macro detail, clean luxury commercial style.

Fashion portrait

A confident model in a silver jacket turns slightly toward the camera,
natural facial expression, wind moving the fabric, dramatic studio lighting,
smooth cinematic motion, high-fashion campaign mood.

Vertical social ad

A creator holds a phone showing a new app concept, quick expressive reaction,
soft desk lighting, handheld but stable camera feel, energetic short-form ad,
vertical composition with clean negative space near the top.

Image-to-video product motion

Animate the uploaded product image with a slow rotating reveal and subtle
light sweep. Keep the product label, shape, and material consistent. Do not
add extra text, extra objects, or logo distortion.

Where to go next
  • Text to Video: Use this when you want Wan 2.7 to create a full scene from a written prompt.
  • Image to Video: Use this when you have a first frame, product shot, character frame, or brand visual to preserve.
  • Pricing: Use this before running larger 1080P batches or repeated 15-second tests.
  • Blog: Follow model launch notes, workflow guides, and prompt updates as the AI Video Maker stack changes.

Frequently Asked Questions

What is Wan 2.7 on AI Video Maker?

Wan 2.7 is a new AI Video Maker model option for text-to-video and image-to-video generation. It supports 5-, 10-, and 15-second clips, 720P or 1080P output, and optional prompt extension in the current rollout.

Does Wan 2.7 support image-to-video?

Yes. Use the Image to Video workflow when you want to animate an existing image while preserving the subject, product, or visual style.

How many credits does Wan 2.7 use?

Wan 2.7 uses 10 credits per second at 720P and 15 credits per second at 1080P in AI Video Maker. That means a 10-second 720P clip uses 100 credits, while a 10-second 1080P clip uses 150 credits.

Should I use Wan 2.7 for text-to-video or image-to-video first?

Use text-to-video first when you are testing scene ideas, camera direction, or aspect ratios. Use image-to-video first when you already have a strong source image and want better subject consistency.

Is Wan 2.7 the same as Wan2.7-Videoedit?

No. Wan2.7-Videoedit is a provider-level editing API documented by Alibaba Cloud Model Studio, while AI Video Maker's current Wan 2.7 launch focuses on text-to-video and image-to-video generation. We will keep future launch notes specific to the capabilities exposed in the product UI.

How should I get the best Wan 2.7 result?

Start with one subject, one action, one camera cue, and one mood. Run a 720P draft first, improve only one prompt variable at a time, then switch to 1080P when the composition and motion are already strong.

Sources