LTX-2 ComfyUI Quickstart: First Video in 10 Minutes (Day-0 Native Support)

Hi, I’m Dora. I didn’t plan to try LTX-2 in ComfyUI last weekend. I just hit a small snag in a client reel: I needed two short shots with the same look, and my usual tools kept drifting. I saw a note about “ltx-2 comfyui day-0 support,” took a breath, and gave it an evening. I tested across Jan 6–8, 2026, on my main workstation and a travel laptop.

What follows isn’t a review, just what it actually felt like to get LTX-2 running inside ComfyUI, where it helped, where it fought me, and the settings I’d reuse if I had to do it again tomorrow.


What is Day-0 Native Support (Jan 5–6, 2026 release)

Day-0 support here means LTX-2 ships with nodes and a reference workflow that work inside ComfyUI the same week the model lands. No shim scripts, no mystery forks. I pulled the changes on Jan 6 and saw:

  • Built-in nodes labeled for LTX-2 (loader, sampler, and a simple preview path).
  • A sample workflow that actually runs end-to-end without manual wiring. Not pretty, but functional.
  • Model download hooks that point to the official weights. I still had to place them in my models directory, but at least the links matched the docs.

I didn’t expect magic, and it isn’t. The first pass felt a little bare-bones: limited presets, a couple cryptic errors. But it was stable enough to render 4–6 second clips, and the color consistency between runs was better than I’m used to. For a day-0 ComfyUI drop, that’s rare.

If you care about reproducibility: seeds worked, and I could nudge motion strength without the scene collapsing. That was the small surprise that kept me exploring.


Minimum Requirements (GPU / VRAM / ComfyUI version)

This is from my own machines. Your setup may differ.

  • ComfyUI: a fresh pull as of Jan 6, 2026. Older forks gave me node import errors. If you keep a long-lived environment, consider a clean venv just for LTX-2. Follow the official ComfyUI documentation.
  • GPU/VRAM:
    • 12 GB VRAM: workable at 512×288 to 512×320, 4–5 s clips, conservative motion.
    • 16 GB VRAM: 512×512 at 4–6 s felt comfortable. Occasional OOM at aggressive motion or many steps.
    • 24 GB+ VRAM: 768×432 to 768×768 at 5–8 s ran smoothly. Good headroom for higher CFG.
  • CPU/RAM: nothing unusual. I used 32 GB system RAM: usage stayed modest.
  • Drivers/Runtime: CUDA 12.x, PyTorch build matching your CUDA. xformers helped on the 12 GB card but wasn’t mandatory on 24 GB.

If you’re on an 8 GB laptop GPU, I’d skip local and use a cloud runner for now.


5-Step First Run Workflow

Here’s the exact path I took on Jan 6. No extra flourishes.

  1. Pull ComfyUI updates and create a clean environment
    I cloned a fresh ComfyUI folder from GitHub and installed requirements from the included file. Mixing old nodes with new video models usually causes weird, silent failures. Fresh saved me an hour.

  2. Add the LTX-2 nodes and weights
    I used the bundled LTX-2 nodes that shipped with the Jan 5–6 update. For weights, I followed the links in the node tooltip and placed files under models/ltx (the suggested path). If your node can’t find weights, it’ll tell you; just double-check the exact filenames.

  3. Load the sample workflow
    The provided example got me a working baseline: text prompt in, video preview out. I changed only three things: seed, resolution, and motion strength. Starting simple kept errors obvious.

  4. Run a 2–4 second test at low resolution
    My first run was 512×320, 4 seconds at 16 fps, sampler steps at 20. It took ~50–70 seconds on a 24 GB 4090, ~2–3 minutes on a 12 GB 3060. The goal wasn’t speed: it was verifying the pipeline.

  5. Dial in prompt and motion, then extend length
    Once I liked the base look, I nudged CFG and motion strength, then pushed to 6–8 seconds. If the scene fell apart, I walked back: reduce motion, lock the seed, and keep the same resolution.
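Step 2’s most common failure is a filename mismatch, so a quick pre-flight check before launching saves a restart. A minimal sketch; `check_ltx_weights` is my own helper, and the expected names should come from whatever the LTX-2 loader node’s tooltip actually lists, since they may change:

```python
from pathlib import Path

def check_ltx_weights(models_dir: str, expected: list[str]) -> list[str]:
    """Return the expected weight filenames missing from models_dir.

    Pass the exact names from the loader node's tooltip; this just
    compares them against what's on disk, nothing more.
    """
    present = {p.name for p in Path(models_dir).glob("*") if p.is_file()}
    return [name for name in expected if name not in present]
```

Run it against your models/ltx folder before starting ComfyUI; an empty list means you’re good.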

Small note: I saved every test with seed and settings in the filename. When clips start to blur together at 1 a.m., that breadcrumb trail is a gift.
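That breadcrumb habit is easy to automate. A sketch of the naming scheme I used; the function and field order are my own convention, not anything ComfyUI enforces:

```python
def settings_filename(prompt_tag: str, seed: int, width: int, height: int,
                      fps: int, seconds: int, cfg: float, steps: int,
                      motion: float, ext: str = "mp4") -> str:
    """Encode every knob into the output filename so 1 a.m. clips stay traceable."""
    return (f"{prompt_tag}_s{seed}_{width}x{height}_{fps}fps_{seconds}s"
            f"_cfg{cfg:g}_st{steps}_m{motion:g}.{ext}")
```

So a test at seed 42 with my usual defaults comes out as something like `mug_s42_512x512_16fps_6s_cfg4_st22_m0.6.mp4`, which sorts and greps nicely.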


Recommended Settings (Resolution / FPS / Length / CFG / Motion)

These are starting points that behaved well for me. Adjust to taste.

  • Resolution

    • 512×320 or 512×512 for first looks. Stable, fast enough, easy on VRAM.
    • 768×432 when you want more detail without committing to square.
  • FPS

    • 12–16 fps for concept passes. You’ll see motion arcs without heavy render times.
    • 24 fps only when you’re confident in the look: it compounds time and memory.
  • Length

    • 4–6 seconds is the sweet spot for iteration. Past 8 seconds, the model starts to wander unless your prompt is very anchored.
  • CFG / Steps

    • Start CFG at 3–5. Higher adds contrast but can overcook the scene.
    • Steps at 18–24 were enough in most cases. I didn’t see big gains past 28.
  • Motion

    • Keep it conservative early. If you need big moves, build up in small bumps and keep the seed fixed as you adjust.

I also kept a “sane defaults” preset: 512×512, 16 fps, 6 s, CFG 4, steps 22, motion 0.6. It’s boring, but boring is dependable.
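Written down as data, that preset looks like this, along with the frame count it implies (fps × clip length). The dict keys are my own naming, not actual node inputs:

```python
# My "sane defaults" preset from above -- boring but dependable.
SANE_DEFAULTS = {
    "width": 512, "height": 512,
    "fps": 16, "seconds": 6,
    "cfg": 4.0, "steps": 22, "motion": 0.6,
}

def total_frames(preset: dict) -> int:
    """Frames the sampler must produce: fps times clip length in seconds."""
    return preset["fps"] * preset["seconds"]
```

At 16 fps × 6 s that’s 96 frames per run, which is a useful number to keep in mind when estimating render time and VRAM pressure.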


3 Copy-Paste Prompts That Work

These aren’t magic words, just prompts that produced steady, repeatable clips across Jan 6–8. I include a short setup note for each.

  1. Product macro with gentle parallax

    • Prompt:
      "a simple ceramic mug on a wooden desk, soft morning window light, shallow depth of field, slow parallax left to right, natural colors, minimal grain"
    • Notes: Keep background nouns simple. If you add brand-y language, it tends to hallucinate logos.
  2. Street scene with restrained motion

    • Prompt:
      "a quiet city street at dusk, neon signs reflecting on wet pavement, light drizzle, camera slowly dollying forward, subtle lens flare, cinematic but understated"
    • Notes: If your lights start flickering, lower motion and drop CFG by 1.
  3. Natural texture loop

    • Prompt:
      "close-up of ripples on a lake, soft breeze, warm sunset tones, camera locked with small micro-movements, gentle highlights, calm mood"
    • Notes: Great for backgrounds. The seed lock keeps it loop-adjacent if you trim tails.

Tip: If you want tighter style control, front-load tone words (“understated, natural light, clean composition”) before objects. It seemed to anchor the palette better.
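The front-loading tip is simple enough to script when you’re batch-testing prompts. A trivial sketch under my own naming; it just concatenates tone words before the subject:

```python
def front_load_prompt(tone_words: list[str], subject: str) -> str:
    """Put tone/style words before the subject, per the anchoring tip above."""
    return ", ".join(tone_words + [subject])
```

For example, `front_load_prompt(["understated", "natural light", "clean composition"], "a ceramic mug on a wooden desk")` yields `"understated, natural light, clean composition, a ceramic mug on a wooden desk"`.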


Common First-Run Errors + Quick Fixes

These were my actual bumps and what fixed them.

  • CUDA out of memory at start

    • Fix: drop resolution first, then steps, then motion. If you’re on 12 GB, stay at 512 width and keep clips under 6 seconds. Close browsers eating VRAM (seriously).
  • Node not found / missing class

    • Fix: update ComfyUI to the Jan 5–6 commits, remove older custom nodes that bundle legacy video loaders, then relaunch. If the error points to a specific node name, search your custom_nodes folder for duplicates.
  • Weights file not detected

    • Fix: match the exact filename the node expects. Keep paths ASCII-only; avoid symlinks on Windows if possible. Restart ComfyUI after moving files.
  • Torch / CUDA mismatch

    • Fix: ensure your PyTorch build matches your CUDA toolkit. If you don’t care to debug, use the pip wheel suggested in the ComfyUI GitHub README for your GPU.
  • FFmpeg not found for export

    • Fix: install FFmpeg and add it to PATH. As a quick fallback, export frames and assemble with any encoder you trust.
  • Flicker or style drift mid-clip

    • Fix: reduce motion, lower CFG by 1–2, and keep prompts clean. Overdescribing tends to fight temporal consistency.

Pro tip: If nothing works and you’re on a tight deadline, render 12–16 fps and slow it to 24 fps in post with frame blending. It’s not elegant, but it’s calm in a storm.

Quick Detour: The Browser Fallback

While away from my workstation I also ran LTX-2 through a hosted browser runner (WaveSpeed) to compare against local ComfyUI. Quick impressions:

What I liked

  • No dependency wrestling. I was testing a prompt in under five minutes.
  • Reasonable defaults. The first output wasn’t perfect, but it wasn’t chaos.

What got in the way

  • Less control than a full ComfyUI graph. I missed being able to wire custom post steps.
  • Queue time. My longest wait was ~6 minutes during peak hours.

Who it suits

  • If you’re exploring looks and don’t want to touch drivers, it’s easy.
  • If you need a reproducible pipeline with versions and seeds tracked across projects, local ComfyUI still wins.
  • If you just need to run LTX-2 immediately without setup, WaveSpeed lets you render in a browser and sanity-check prompts before committing to a local pipeline.

This worked for me; your mileage may vary. If you’re traveling or on an 8 GB GPU, it’s worth a look.


A small thought to end on: LTX-2 inside ComfyUI didn’t make me faster right away. It did make my choices clearer: fewer knobs that mattered, more that didn’t. That, more than speed, is what I notice a week later.