
Midjourney Image To Video


Midjourney Image-to-Video turns a single image into an artistically rich, high-quality video using Midjourney’s creative AI. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.

Features

Midjourney — Image-to-Video Model

Midjourney Image-to-Video transforms static artworks or illustrations into cinematic video clips with motion, lighting, and depth — while maintaining Midjourney’s signature artistic fidelity. This model captures subtle camera movement, breathing effects, and dynamic perspective shifts, bringing your images vividly to life.


✨ Why It Looks Great

  • Midjourney-grade aesthetic — built on the same visual backbone as the legendary Midjourney image engine.
  • Artistic motion synthesis — intelligently infers camera movement, light flicker, and environmental animation.
  • High visual consistency — preserves texture, detail, and lighting from the input image.
  • Multi-resolution support — generate videos at 480p or 720p resolution.
  • Tunable creativity — control motion randomness and artistic deformation with parameters like chaos, stylize, and weird.

⚙️ Limits and Performance

  • Input: one reference image (JPEG / PNG / WebP).

  • Output duration: fixed short cinematic clip (~5 seconds).

  • Resolutions:

    • 480p for faster, lower-cost generation.
    • 720p for enhanced clarity and professional output.
  • Supported aspect ratios: 1:1, 16:9, 4:3, 3:4, 2:3, 9:16, etc.

  • Estimated processing time: 30–60 seconds per video (depending on prompt complexity).


💰 Pricing

| Resolution | Example Price |
| --- | --- |
| 480p | $0.15 |
| 720p | $0.48 |
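Treating the table above as example prices, a tiny helper can estimate the cost of a batch before submitting it (the price values are copied from the table and may change):

```python
# Example prices from the pricing table above (USD per video); illustrative only.
PRICE_PER_VIDEO = {"480p": 0.15, "720p": 0.48}

def batch_cost(resolution: str, count: int) -> float:
    """Estimated cost of generating `count` videos at the given resolution."""
    return round(PRICE_PER_VIDEO[resolution] * count, 2)

print(batch_cost("480p", 10))  # ten drafts at the cheaper tier
print(batch_cost("720p", 3))   # three final renders
```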

🧠 Parameters Overview

| Parameter | Description |
| --- | --- |
| resolution | Choose 480p or 720p. |
| aspect_ratio | Controls video composition (e.g., 16:9 for widescreen). |
| quality | Higher values produce more refined visuals but increase render time. |
| stylize | Adjusts how strongly Midjourney’s artistic style influences the animation. |
| chaos | Adds controlled randomness to camera and scene motion. |
| weird | Introduces surreal, experimental effects for creative exploration. |
| version | Model version selector (7 is current). |
| niji | Toggles anime-style adaptation (turn off for a realistic style). |
| seed | Fix this for reproducible motion results. |
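As a sketch of how these parameters fit together, the hypothetical builder below assembles a request payload and enforces the ranges listed in the API parameter reference further down (the function name and defaults are this document's illustration, not part of the API):

```python
# Hypothetical payload builder; ranges mirror the API parameter table
# (stylize 0-1000, chaos 0-100, weird 0-3000).
def build_payload(image: str, prompt: str, *, resolution: str = "480p",
                  aspect_ratio: str = "1:1", stylize: int = 100,
                  chaos: int = 0, weird: int = 0, seed: int = -1) -> dict:
    if resolution not in ("480p", "720p"):
        raise ValueError("resolution must be '480p' or '720p'")
    if not 0 <= stylize <= 1000:
        raise ValueError("stylize must be in 0-1000")
    if not 0 <= chaos <= 100:
        raise ValueError("chaos must be in 0-100")
    if not 0 <= weird <= 3000:
        raise ValueError("weird must be in 0-3000")
    return {"image": image, "prompt": prompt, "resolution": resolution,
            "aspect_ratio": aspect_ratio, "stylize": stylize,
            "chaos": chaos, "weird": weird, "seed": seed}
```

Validating locally like this surfaces out-of-range values before a request is billed.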

🚀 How to Use

  1. Upload your image as the base frame.

  2. Write a prompt describing the desired motion, lighting, and tone. Example: “Slow cinematic zoom on a magical fox reading under warm lamp light.”

  3. Choose resolution (480p or 720p).

  4. Adjust parameters:

    • stylize for artistry
    • chaos and weird for creativity
    • quality for detail
  5. Click Run to generate your video.

  6. Download and preview the animated result.


🧠 Understanding — stylize

The stylize parameter defines how strongly Midjourney’s internal aesthetic “rules” guide your output.

  • Low values (≈50–150) → lower visual coherence, rougher details, but higher prompt control. Ideal for original or hybrid concepts (“catragon,” “centaur,” etc.).

  • High values (≈250–750) → smoother, more realistic images, but weaker prompt adherence. MJ leans toward dataset beauty rather than creative deviation.

  • Pro tip: for both coherence and control, use strong visual prompts and mention your style at both the beginning and end, e.g.:

    “Salvador Dali painting of surreal architecture, by Salvador Dali --stylize 250”

Midjourney has confirmed: the higher the --stylize value, the closer results align with the average user’s preferred visual aesthetic.
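A practical way to see this trade-off is to render the same image and prompt at several stylize values while fixing the seed, so stylize is the only variable. The helper below is a hypothetical sketch that builds such a sweep of payloads (the sweep values come from the ranges discussed above):

```python
# Hypothetical sweep over stylize values from the low (~50-150) and
# high (~250-750) bands discussed above.
STYLIZE_SWEEP = [50, 150, 250, 750]

def sweep_payloads(image: str, prompt: str, seed: int = 42) -> list[dict]:
    # Fixing the seed isolates the effect of stylize across runs.
    return [{"image": image, "prompt": prompt, "stylize": s, "seed": seed}
            for s in STYLIZE_SWEEP]
```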


💡 Quick Presets

| Goal | Suggested Settings |
| --- | --- |
| Precise control / faithful to prompt | stylize=100, chaos=0, weird=0 |
| Visually beautiful / cohesive | stylize=300, chaos=0, weird=0 |
| Creative exploration | stylize=150, chaos=20–40, weird=100–400 |
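The presets above can be kept as ready-to-merge parameter dicts. This is a convenience sketch; the preset names are this document's own, and the ranged values (chaos 20–40, weird 100–400) use midpoints you should adjust to taste:

```python
# Presets from the table above as parameter dicts; ranged values use midpoints.
PRESETS = {
    "precise":   {"stylize": 100, "chaos": 0,  "weird": 0},
    "beautiful": {"stylize": 300, "chaos": 0,  "weird": 0},
    "creative":  {"stylize": 150, "chaos": 30, "weird": 250},
}

# Merge a preset into a request payload with dict unpacking.
payload = {"image": "art.png",
           "prompt": "slow cinematic zoom, warm lamp light",
           **PRESETS["creative"]}
```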

🧩 Tips for Best Results

  1. Keep prompts visual and descriptive (use lighting, materials, mood, and camera terms).
  2. Lower stylize to make Midjourney listen more closely to your wording.
  3. Raise stylize for a more polished or realistic look.
  4. Avoid high chaos or weird unless experimenting with concept art.
  5. Reuse the same seed to refine results iteratively.
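Tip 5 in practice: keep the image and seed constant while refining the prompt, so each run differs only in wording. A minimal sketch (filenames and seed are placeholders):

```python
# Iterate on the prompt while fixing image and seed, so motion stays comparable.
base = {"image": "fox.png", "seed": 1234}
drafts = [
    "magical fox reading under lamp light",
    "magical fox reading under warm lamp light, slow cinematic zoom",
]
payloads = [{**base, "prompt": p} for p in drafts]
```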

🧩 Notes

  • Motion style depends heavily on prompt clarity and the original image composition.
  • Extreme weird values can produce abstract deformation effects — experiment for creativity.
  • For consistent results, fix the seed parameter across multiple runs.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task (image and prompt are required; the values below are placeholders)
curl --location --request POST "https://api.wavespeed.ai/api/v3/midjourney/image-to-video" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "https://example.com/input.png",
    "prompt": "Slow cinematic zoom on a magical fox reading under warm lamp light",
    "resolution": "480p",
    "aspect_ratio": "1:1",
    "motion": "low",
    "quality": 1,
    "stylize": 0,
    "chaos": 0,
    "weird": 0,
    "seed": -1,
    "enable_base64_output": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| image | string | Yes | - | - | The image for generating the output. |
| prompt | string | Yes | - | - | The text prompt describing the video you want to generate. |
| last_image | string | No | - | - | The last image (end frame) for generating the output. |
| resolution | string | No | 480p | 480p, 720p | The resolution of the generated media. |
| aspect_ratio | string | No | 1:1 | 1:1, 4:3, 3:4, 2:3, 16:9, 1:2 | The aspect ratio of the generated media. |
| motion | string | No | low | low, high | The amount of motion in the generated media. |
| quality | number | No | 1 | 0.25, 0.5, 1, 2 | Controls image detail and processing time. |
| stylize | integer | No | - | 0 ~ 1000 | Controls the strength of the artistic style in the image (0–1000). |
| chaos | integer | No | - | 0 ~ 100 | Adds variety to your results (0–100). Higher values produce more unusual and unexpected results. |
| weird | integer | No | - | 0 ~ 3000 | Makes your images quirky and unconventional (0–3000). |
| seed | integer | No | -1 | -1 ~ 2147483647 | Seed for testing and experimentation. The same seed and prompt produce similar results. |
| enable_base64_output | boolean | No | false | - | If true, the output is returned as a base64-encoded string instead of a URL. |

Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
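A small helper can pull the video URLs out of a response shaped like the fields above. The sample response here is illustrative data for demonstration, not a real API reply:

```python
# Illustrative response shaped like the response-parameter reference above.
sample = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "abc123",
        "status": "completed",
        "outputs": ["https://example.com/video.mp4"],
        "has_nsfw_contents": [False],
        "error": "",
        "timings": {"inference": 42000},
    },
}

def video_urls(resp: dict) -> list[str]:
    """Return output URLs, raising if the task did not complete."""
    d = resp["data"]
    if d["status"] != "completed":
        raise RuntimeError(d.get("error") or f"task is {d['status']}")
    return d["outputs"]
```

Note that `data.outputs` stays empty until `data.status` reaches `completed`, so poll before extracting URLs.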

Result Request Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
© 2025 WaveSpeedAI. All rights reserved.