
Wan 2.6 Is Live: What to Know Before Wan 2.7 Lands

Quick answer
Wan 2.6 is now live in AI Video Maker, and it is the version to plan around today. In our current rollout, Wan 2.6 supports text-to-video and image-to-video workflows with 720P and 1080P output, 5 to 15 second durations, single or multi-shot generation, and prompt extension. Wan 2.7 is coming soon, but as of March 29, 2026, the official public Alibaba material we reviewed documents Wan 2.6 in detail and does not yet publish a Wan 2.7 feature sheet. Treat Wan 2.7 as an upcoming roadmap item rather than a confirmed spec list.
What Wan 2.6 means in AI Video Maker
Wan 2.6 matters because it pushes the Wan family from "good short clips" toward more controllable narrative video generation. Alibaba's December 16, 2025 launch note for the Wan2.6 series says the upgrade adds intelligent multi-shot storytelling, improved audio-visual synchronization, and support for video outputs of up to 15 seconds. That lines up with what creators actually want when they move from proof-of-concept clips to ads, explainers, short scenes, and social content. Source: Alibaba Cloud Community, December 16, 2025.
Inside AI Video Maker, Wan 2.6 is the live release you can use now. Our product copy currently describes it as supporting text-to-video and image-to-video, 720P or 1080P output, 5 to 15 second durations, single or multi-shot generation, and prompt extension. If you want to test it immediately, start with the Text to Video workflow for prompt-led scenes or the Image to Video workflow when you already have a strong first frame.
Entity definitions
- Wan model family: Alibaba's video generation model line for text-to-video, image-to-video, and related creation workflows.
- Wan 2.6: The current production release in this post, positioned around higher-quality narrative control, audio-aware generation, and 720P or 1080P outputs.
- Wan 2.7: The next planned version in our roadmap; at the time of writing, it is an upcoming release rather than a publicly documented model spec.
- Multi-shot narrative: A generation mode where the model creates shot changes within one output clip instead of keeping a single continuous camera setup.
- Prompt extension: A model-assisted rewrite step that expands a short prompt into a more descriptive instruction set before generation.
Wan 2.6 vs earlier Wan releases
The fastest way to understand Wan 2.6 is to compare it with the recent Wan versions documented in Alibaba Cloud Model Studio.
| Version | Officially documented traits | Practical takeaway |
|---|---|---|
| Wan 2.1 | Text-to-video and image-to-video, mostly 3 to 5 second outputs, 480P or 720P, no audio in the listed variants | Useful for short basic motion tests, but limited for narrative work |
| Wan 2.2 | Better stability and success rate than 2.1, some 1080P options, still mostly 5-second clips, no audio in the listed text/image variants | Better reliability, but still more "clip generation" than scene construction |
| Wan 2.5 preview | Audio support and audio-video sync, 5 or 10 second durations, 480P to 1080P | A bridge toward richer video outputs with sound |
| Wan 2.6 | Multi-shot narrative, audio-video sync, text/audio and text-image-audio inputs, 720P or 1080P, 2 to 15 second duration options depending on region | The first version in the recent sequence that clearly targets longer, more cinematic, story-oriented outputs |
Source: Alibaba Cloud Model Studio video generation docs, updated March 23, 2026.
What is confirmed about Wan 2.6 today
Here is the short version of what is publicly documented and safe to plan around now:
- Alibaba said on December 16, 2025 that Wan2.6 introduced multi-shot storytelling, better audio-visual synchronization, and outputs of up to 15 seconds across the upgraded Wan2.6 family. Source: Alibaba Cloud Community launch post.
- Alibaba's video generation documentation, updated March 23, 2026, lists `wan2.6-t2v` and `wan2.6-i2v` as recommended models with audio support, multi-shot narrative support, audio-video sync, and 720P or 1080P output. Source: Video generation docs.
- The public Wan text-to-video API reference, updated March 3, 2026, says the Wan 2.6 text-to-video series supports prompts up to 1,500 characters, `prompt_extend`, and the `shot_type` control for single or multi-shot generation. Source: Wan text-to-video API reference.
- Alibaba's model release notes show `wan2.6-t2v` added on December 16, 2025 and `wan2.6-t2v-us` added on January 4, 2026, which is a useful signal that the 2.6 rollout expanded after launch. Source: Newly released models.
That combination is why Wan 2.6 is a meaningful upgrade for creators who care about short narrative clips, product scenes, UGC-style ad videos, or storyboard-like edits without jumping straight into a heavier reference-to-video workflow.
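The documented controls above can be sketched as a request payload. This is an illustrative sketch only: the parameter names (`prompt_extend`, `shot_type`) and the 1,500-character prompt limit come from the cited API reference, but the payload nesting, the `shot_type` values, and the helper function are assumptions for illustration, not a verbatim copy of the API.

```python
# Illustrative sketch of a Wan 2.6 text-to-video request payload.
# Parameter names (prompt_extend, shot_type) come from the cited API
# reference; the payload layout and value strings are assumptions.

MAX_PROMPT_CHARS = 1500  # documented prompt limit for the Wan 2.6 t2v series

def build_t2v_payload(prompt: str, *, size: str = "1920*1080",
                      multi_shot: bool = False,
                      extend_prompt: bool = True) -> dict:
    """Build a hypothetical request payload for a wan2.6-t2v generation call."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError(f"prompt exceeds {MAX_PROMPT_CHARS} characters")
    return {
        "model": "wan2.6-t2v",
        "input": {"prompt": prompt},
        "parameters": {
            "size": size,                    # 720P or 1080P output target
            "prompt_extend": extend_prompt,  # model-assisted prompt rewrite
            # per the API reference, shot_type takes effect with prompt_extend
            "shot_type": "multi" if multi_shot else "single",
        },
    }

payload = build_t2v_payload("A chef plates a dessert in warm kitchen light.",
                            multi_shot=True)
print(payload["parameters"]["shot_type"])  # multi
```

Keeping the payload construction in one place like this makes it easy to rerun identical briefs later against a newer model string.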
How to use Wan 2.6 well right now
1. Start with a short, single-shot brief
If the goal is concept validation, do not begin with a dense story prompt. Start with one camera setup, one subject, one action, and one lighting direction. This makes it easier to judge whether the model understands the subject, motion, and tone before you add cut structure.
2. Use Wan 2.6 for the right input type
Use Text to Video when your idea is driven by scene language, camera language, and pacing. Use Image to Video when subject consistency matters more and you already have a hero image, product render, or character frame to anchor the output.
3. Turn on multi-shot narrative only when the scene needs it
Wan 2.6 is most interesting when you want the model to move beyond a single loop-like shot. The official API reference says shot_type is supported on the 2.6 series and takes effect with prompt_extend, so multi-shot generation is a deliberate control choice, not just a vague prompt suggestion. Source: Wan text-to-video API reference.
4. Treat resolution as a finishing choice
Because Wan 2.6 supports 720P and 1080P, it is worth proving the motion and composition first, then deciding which version deserves your higher-quality run. If you are deciding how that fits your account tier and usage budget, review the current Pricing page before scaling up production.
5. Save examples and prompts for launch-to-launch comparison
If you already know Wan 2.7 is coming soon, the smart move is to save your best Wan 2.6 prompts and first-frame assets now. That gives you a clean A/B test set for the day Wan 2.7 lands.
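One lightweight way to build that A/B set is to archive each keeper prompt with its generation settings in a small JSON file, so the same briefs can be rerun unchanged when Wan 2.7 lands. This is a minimal sketch; the file name, field names, and default values are assumptions for illustration.

```python
# Minimal sketch: archive Wan 2.6 prompts and settings as a JSON test set
# so the same briefs can be rerun unchanged against a future model.
import json
from pathlib import Path

def save_test_case(path, name, prompt, *, model="wan2.6-t2v",
                   size="1920*1080", shot_type="single"):
    """Append one reusable test case to a JSON file and return all cases."""
    p = Path(path)
    cases = json.loads(p.read_text()) if p.exists() else []
    cases.append({"name": name, "model": model, "prompt": prompt,
                  "size": size, "shot_type": shot_type})
    p.write_text(json.dumps(cases, indent=2))
    return cases

cases = save_test_case("wan_ab_set.json", "hero-product",
                       "Slow dolly-in on a ceramic mug on a walnut table.")
print(len(cases))  # 1
```

On launch day, swapping the `model` field and rerunning the same file gives you a like-for-like comparison set.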
How to think about Wan 2.7
As of March 29, 2026, we have a clear public picture of Wan 2.6, but not a public official Wan 2.7 capability sheet from the Alibaba surfaces reviewed for this post. That means the right question is not "What exact features will Wan 2.7 have?" The right question is "What should I test first when Wan 2.7 becomes available?"
These are the checks that matter most:
- Subject consistency across cuts: Does Wan 2.7 keep faces, wardrobe, products, and props stable when shot changes occur?
- Audio and dialogue alignment: Does the next release improve sync quality or make audio-driven prompts easier to control?
- Useful duration flexibility: Does it improve the quality of longer clips, or simply allow longer durations on paper?
- Prompt reliability: Does it need less rewriting to get the same result, especially for nuanced scene direction?
- Speed to usable output: Does it reduce the number of reruns needed to get one keeper clip?
Until public release notes exist, those should be treated as evaluation criteria, not promises.
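The checklist above can be made concrete as a simple scorecard that records per-criterion ratings for each model and reports the deltas. The criterion names mirror the checks above; the 1-to-5 scale and the sample scores are assumptions for illustration, not real benchmark results.

```python
# Illustrative scorecard for comparing Wan 2.6 against a future Wan 2.7
# run on the same prompts. Scores here are placeholder values on an
# assumed 1-5 scale, not real measurements.

CRITERIA = ["subject_consistency", "audio_alignment", "duration_quality",
            "prompt_reliability", "reruns_to_keeper"]

def compare(scores_a: dict, scores_b: dict) -> dict:
    """Return per-criterion deltas (model B minus model A)."""
    return {c: scores_b[c] - scores_a[c] for c in CRITERIA}

wan26 = {"subject_consistency": 3, "audio_alignment": 4,
         "duration_quality": 3, "prompt_reliability": 3,
         "reruns_to_keeper": 2}
wan27 = {"subject_consistency": 4, "audio_alignment": 4,
         "duration_quality": 4, "prompt_reliability": 4,
         "reruns_to_keeper": 3}
print(compare(wan26, wan27))
```

Even a crude scorecard like this keeps the upgrade decision tied to your own prompts rather than launch-day marketing.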
Recommended AI Video Maker tools
- Text to Video: Best for testing scene ideas, camera language, and prompt structure with Wan 2.6.
- Image to Video: Best when you want stronger subject lock from an existing frame or design asset.
- Pricing: Useful for deciding when to move from draft exploration into repeatable production.
- Blog: Keep an eye on launch notes and model update explainers as new releases arrive.
Frequently Asked Questions
What is Wan 2.6?
Wan 2.6 is the current Wan release now live in AI Video Maker for text-to-video and image-to-video creation. In our rollout, it supports 720P or 1080P output, 5 to 15 second durations, single or multi-shot generation, and prompt extension.
Is Wan 2.7 available now?
No. Wan 2.7 is the next planned release for us, but as of March 29, 2026, the public Alibaba documentation reviewed here centers on Wan 2.6 rather than a published Wan 2.7 spec.
Does Wan 2.6 support both text-to-video and image-to-video?
Yes. AI Video Maker currently positions Wan 2.6 for both workflows, and Alibaba's public Model Studio documentation separately documents Wan 2.6 text-to-video and image-to-video variants.
How long can Wan 2.6 videos be?
That depends on the surface and region. Alibaba's public documentation shows Wan 2.6 variants supporting up to 15 seconds, with some listings showing fixed choices like 5, 10, and 15 seconds and others showing an integer range from 2 to 15 seconds.
What should I test first when Wan 2.7 lands?
Start by rerunning your best Wan 2.6 prompts and first-frame assets. Compare consistency, motion quality, audio sync, prompt adherence, and the number of reruns needed before you decide whether Wan 2.7 should replace Wan 2.6 in your main workflow.
Should I wait for Wan 2.7 or use Wan 2.6 now?
Use Wan 2.6 now if you need production output today. Waiting only makes sense if your workflow depends on a specific future capability that has not been publicly documented yet.
Sources
- Alibaba Cloud Community: Alibaba Unveils Wan2.6 Series Enabling Everyone to Star in Videos (December 16, 2025)
- Alibaba Cloud Model Studio: Video generation (Last updated March 23, 2026)
- Alibaba Cloud Model Studio: Wan text-to-video API reference (Last updated March 3, 2026)
- Alibaba Cloud Model Studio: Newly released models (reviewed March 29, 2026)
- arXiv: Wan: Open and Advanced Large-Scale Video Generative Models (submitted March 26, 2025; revised April 19, 2025)