LTX-2 to LTX-2.3 Upgrade: Compatibility, LoRA Breaks & Migration (2026)
Hey, I’m Dora. I didn’t plan an upgrade week. I just wanted to rerun an old prompt from a client deck. Same seed, same settings, supposedly “the same” model, except it wasn’t. LTX-2.3 landed in my folder, and the images felt a notch cleaner, a touch more literal… and my LoRA-enhanced style was gone. That tiny mismatch sent me down the rabbit hole. Over a few days in March 2026, I tested the ltx-2 to ltx-2.3 upgrade across my usual ComfyUI pipelines and the managed API I lean on for batch work. Here’s what actually shifted, what stayed steady, and where the friction shows up when you’re doing real work, not demos.
What Actually Changed Between LTX-2 and 2.3
I’ll skip the marketing claims and stick to the parts that touched my existing workflows.
- Prompts resolve more literally. I noticed 2.3 follows positional hints (“left/right”, “foreground/background”) more consistently. Nice for product layouts; a bit rigid for arty prompts that relied on LTX-2’s looseness.
- Contrast and saturation skew higher by default. My neutral lighting presets came out punchier in 2.3. I dialed guidance down by 0.5–1.0 and found myself lowering contrast in post less often.
- Seeds aren’t 1:1 between versions. Even with the same seed, LTX-2.3 diverged after ~10–12 steps in my runs. If you need pixel-stable re-renders from old jobs, don’t bank on it.
- Aspect ratio handling is saner. 2.3 respected off-square sizes (e.g., 1024×1536) with fewer warped elements. I could drop some of the canvas hacks I used on LTX-2.
- Sampler defaults changed. The recommended scheduler in 2.3 (and its step curve) pushed me to fewer steps for the same detail. My sweet spot moved from ~28–32 steps to ~22–26. Throughput improved a bit on the same GPU.
None of this is dramatic. But it’s enough to bend a pipeline in small, sometimes welcome ways… and to break anything that leans on exact reproducibility, especially LoRAs.

Model Size Reality Check: Local Deployment Implications
I ran both versions on a 24 GB 4090 and an 8 GB laptop GPU. This is the part I wish release notes emphasized more: the practical ceiling of what your card can hold and still breathe.
VRAM and Storage Comparison (dev / fp8 / distilled across both versions)
Here’s what I observed and what mattered in practice:
- Dev/full checkpoints: On the 4090, both LTX-2 and LTX-2.3 “dev” builds loaded, but 2.3 sat a bit heavier in VRAM at runtime (roughly +0.5–1.2 GB in my runs with the same sampler/steps). If you are tight on headroom for high-res generations, that margin matters. On the 8 GB card, full dev builds weren’t realistic without offloading.
- FP8/quantized variants: The fp8 2.3 build freed ~25–35% VRAM versus full precision in my tests, at the cost of slightly more brittle fine detail when pushing very low steps. For everyday 1K outputs, I didn’t mind. If you composite or crop aggressively, you might. For the practical advantages of FP8 quantization in deployment, I referred to NVIDIA’s official guide on efficient lower-precision AI training.
- Distilled: The 2.3 distilled checkpoint behaved like a practical middle ground. Lower storage footprint, noticeably faster warm starts, small trade-off in edge micro-detail. For social-ready images and internal docs, I’d pick distilled 2.3 over full 2.0.
- Disk footprint: Expect a mild bump for 2.3 variants versus 2.0. Not huge, but I needed to prune old experimental LoRAs to keep the scratch drive clean.
A small note from the trenches: once VRAM headroom dipped under ~2 GB remaining, I saw occasional OOMs during tiled high-res passes with 2.3. Lowering the tiling overlap or using fp8 stabilized it.
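That headroom rule of thumb is easy to automate. Here is a minimal sketch of the fallback logic I describe; the ~2 GB threshold and the precision/overlap fallback order are my own numbers from these runs, not anything official, and with torch you would get free VRAM from `torch.cuda.mem_get_info()`.

```python
# Sketch: decide whether a tiled high-res pass is safe to run, based on the
# ~2 GB headroom threshold I kept hitting OOMs under. Threshold and fallback
# order are my own observations, not official guidance.

GB = 1024 ** 3

def tiling_plan(free_vram_bytes: int, headroom_gb: float = 2.0) -> dict:
    """Return a plan for the second-stage tiled pass given current free VRAM."""
    if free_vram_bytes >= headroom_gb * GB:
        return {"precision": "full", "tile_overlap": 0.25}
    # Tight on memory: drop to fp8 and shrink the tiling overlap.
    return {"precision": "fp8", "tile_overlap": 0.125}

# With torch installed: free, total = torch.cuda.mem_get_info()
print(tiling_plan(6 * GB))  # plenty of headroom
print(tiling_plan(1 * GB))  # under 2 GB free
```

Dropping overlap alone sometimes sufficed; fp8 was the more reliable fix on my 4090.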
ComfyUI Workflow Compatibility: What Still Works, What Needs Updating
I kept my ComfyUI setup mostly intact and swapped checkpoints. I mainly referred to the official ComfyUI repository to ensure workflow compatibility during my tests.
What still worked smoothly:
- Basic text-to-image graphs with conditioning → sampler → VAE decode. I could swap in the 2.3 loader and render without rebuilding the graph.
- Common samplers (e.g., DPM++ families) ran fine. I only tweaked steps and guidance to match the new curve.
- High-res workflows with latent upscalers still worked, though I shortened second-stage steps by ~20% without losing detail.
What needed updates:
- LoRA injection nodes: My LTX-2 LoRAs didn’t attach cleanly to 2.3. Even when the node let me connect, results were off: the style veered or collapsed. More below.
- Checkpoint paths and formats: The 2.3 checkpoints I tested shipped with different folder names and a slightly different config reference. I had to refresh the Checkpoint Loader node paths and confirm the VAE pairing.
- Parameter defaults: My old “house” presets (CFG 6.5, steps ~30) produced harsher contrast on 2.3. Dropping CFG to ~5.5 and steps to ~24 brought back the balance I liked.
- Negative prompts: I relied less on long negative lists. 2.3 seemed to avoid certain artifacts natively (hands improved a bit in my product poses). I trimmed negatives to cut prompt overhead.
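The preset shift above is mechanical enough to script. This is a hypothetical sketch (the dict keys are my own shorthand, not ComfyUI node parameters) of the correction that restored my 2.0 look on 2.3: roughly 80% of the old steps, CFG down by about 1.0.

```python
# Hypothetical "house" preset for LTX-2 and the rough correction I applied
# for 2.3. Field names are my own shorthand, not ComfyUI node parameters.

LTX2_PRESET = {"cfg": 6.5, "steps": 30, "sampler": "dpmpp_2m"}

def adapt_preset_for_23(preset: dict) -> dict:
    """Apply the corrections that restored my 2.0 look on 2.3:
    ~80% of the steps, CFG down by about 1.0."""
    out = dict(preset)
    out["steps"] = max(1, round(preset["steps"] * 0.8))
    out["cfg"] = round(preset["cfg"] - 1.0, 1)
    return out

print(adapt_preset_for_23(LTX2_PRESET))  # steps 30 -> 24, cfg 6.5 -> 5.5
```

Treat the multipliers as starting points, not gospel; I still eyeballed a handful of renders per preset.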

Node Changes, Checkpoint Paths, and Parameter Differences
- Node changes: I didn’t need new custom nodes for core generation, but I did update my model loader node to a newer ComfyUI build to avoid metadata mismatches. If you’re a few months behind on ComfyUI, update first; it spares you odd errors.
- Checkpoint paths: Keep 2.0 and 2.3 folders separate. I use a clear naming scheme (model_name/version/precision) so batch jobs don’t grab the wrong file.
- Parameter differences: 2.3 seemed more sensitive to CFG swings. Small changes (~0.5) had a bigger visual impact than on 2.0. Also, fewer steps gave similar detail: going beyond ~26 on 1K images returned diminishing returns in my tests.
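The model_name/version/precision scheme I mentioned is simple to encode as a helper, so batch scripts never guess at paths. A minimal sketch; the root folder and filename pattern are just my convention.

```python
from pathlib import Path

# Sketch of the model_name/version/precision layout I use so batch jobs
# never grab the wrong checkpoint. Root and filename pattern are my own
# convention, not a ComfyUI requirement.

def checkpoint_path(root: str, model: str, version: str, precision: str) -> Path:
    return Path(root) / model / version / precision / f"{model}-{version}-{precision}.safetensors"

p = checkpoint_path("models/checkpoints", "ltx", "2.3", "fp8")
print(p.as_posix())  # models/checkpoints/ltx/2.3/fp8/ltx-2.3-fp8.safetensors
```

The payoff is that a wrong version in a batch config fails loudly (missing file) instead of silently rendering with the other base.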
LoRA Compatibility: Why Existing LoRAs Won’t Transfer Directly
This was the biggest surprise, and the most expensive if you’ve built a style library on LTX-2.
My LTX-2 LoRAs didn’t meaningfully carry over. The short version: base model shifts (embedding space, attention blocks, sometimes normalization and VAE changes) mean the learned deltas don’t map cleanly. You can force it, but you’ll fight weird color casts, shape drift, or the dreaded “everything turns into beige plastic.” The practical upshot: if your look depends on LoRAs, treat LTX-2.3 as a new base model and retrain, following the official Hugging Face LoRA training guide.
What You Need to Retrain and Estimated Cost
What I retained:
- Dataset: I reused my cleaned, captioned set (around 300–800 images per style, depending on variety). Better captions helped more on 2.3 than raw volume.
- Settings: Lower learning rates than I used for 2.0 to avoid overbaking saturation. Rank/dim stayed similar, but I nudged training steps down by ~10–15%.
- Validation: I validated every few hundred steps with the new base prompts, not the legacy ones. Old prompts biased me toward the wrong targets.
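The retrain deltas above can be captured as a config transform. This is a hedged sketch; the field names are mine, not any specific trainer’s, and the 20% learning-rate cut and ~12.5% step reduction are the midpoints of what worked for me.

```python
# Hypothetical LoRA retrain config transform for a 2.3 base: same rank,
# gentler learning rate (to avoid overbaked saturation), ~10-15% fewer
# steps, validation against new-base prompts. Field names are my own,
# not a specific trainer's.

def retrain_config(ltx2_config: dict) -> dict:
    cfg = dict(ltx2_config)
    cfg["learning_rate"] = ltx2_config["learning_rate"] * 0.8
    cfg["max_steps"] = round(ltx2_config["max_steps"] * 0.875)
    cfg["validate_every"] = 250
    cfg["validation_prompts"] = "new_base_prompts.txt"  # not the legacy set
    return cfg

old = {"rank": 16, "learning_rate": 1e-4, "max_steps": 2000}
print(retrain_config(old))
```

I kept rank fixed first and only touched it after the LR/steps pair looked right; changing all three at once made failures hard to attribute.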
Costs, in rough, human terms:
- Time: About 3–5 hours per LoRA on a single 4090 for mid-sized sets, including validation and small restarts. Distilled 2.3 bases trained a hair faster.
- Cloud: If you rent, budget $0.80–$1.60/hour for a 24 GB class GPU as of March 2026. That puts a single clean retrain in the $3–$10 range, plus your time. Obviously, bigger sets and more experimentation push it up.
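As a sanity check on those ranges: the hours and hourly rates are from my own runs, and the ~25% overhead for small restarts is my assumption, but together they land on the quoted $3–$10 bracket.

```python
# Back-of-envelope check on the retrain cost range. Hours and rates are
# from my own runs; the 25% restart overhead is my assumption.

def retrain_cost(hours: float, rate_per_hour: float, restart_overhead: float = 0.25) -> float:
    return round(hours * rate_per_hour * (1 + restart_overhead), 2)

low = retrain_cost(3, 0.80)   # short run, cheap GPU
high = retrain_cost(5, 1.60)  # long run, pricier GPU
print(f"${low}-${high} per clean retrain")
```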
It didn’t save me time at first. But after two or three runs, my 2.3 LoRAs needed fewer guardrails in prompts, which reduced mental effort on future batches.

API Users: Endpoint and Parameter Differences to Watch
For managed APIs, the ltx-2.3 vs ltx-2 differences were small but consequential:
- Versioned models: 2.3 often sits behind an explicit model or version parameter. If you rely on “latest,” lock it to 2.0 until you finish testing.
- Defaults moved: Guidance, step counts, and safety levels shifted in my provider. My LTX-2 presets produced higher-contrast images on 2.3 until I lowered CFG by ~10–15%.
- Seed types: One API moved seeds from 32-bit to 64-bit ints with 2.3. Harmless, except my old wrapper typed seeds as strings. It silently ignored them.
- Negative prompts and weight syntax: Check tokenizer/weight formatting. One provider tightened parsing: my old “(keyword:1.2)” syntax needed spacing to register.
- Rate limits and batching: 2.3 ran slightly faster per request in my queues, but batch concurrency caps didn’t change. I staggered jobs to avoid short spikes.
If you’re in doubt, skim your provider’s release notes and test the same prompt/seed across versions. Expect similar composition, not identical pixels.
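Two of the gotchas above (unpinned versions and string-typed seeds) are cheap to guard against in a wrapper. A minimal sketch; the payload shape and field names are illustrative, not any specific provider’s API.

```python
import json

# Sketch of a version-pinned request builder. The endpoint payload shape
# and field names are illustrative, not a real provider's API. The seed
# type check guards against the bug that bit me: one provider silently
# ignored string-typed seeds after moving to 64-bit ints.

def build_request(prompt: str, seed: int, model_version: str = "ltx-2.3") -> dict:
    if not isinstance(seed, int):
        raise TypeError(f"seed must be an int, not {type(seed).__name__}")
    assert 0 <= seed < 2**64, "the 2.3 endpoints I tested accept 64-bit seeds"
    return {"model": model_version, "prompt": prompt, "seed": seed,
            "cfg": 5.5, "steps": 24}

payload = build_request("product on shelf, soft daylight", seed=123456789)
print(json.dumps(payload))
```

The `TypeError` is the whole point: a loud failure beats a seed that is quietly dropped and a re-render that never matches.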
When Staying on LTX-2 Still Makes Sense
I like new toys as much as anyone, but I don’t rebuild working systems without a reason. I stayed on LTX-2 for a few projects because:
- You need strict reproducibility. Same seed, same pixels, for audits, regulated workflows, or client sign-offs tied to a past model version.
- You have heavy LoRA investments. If your library is deep and diverse, the retrain cost (time, attention, not just dollars) adds up.
- Edge or low-VRAM constraints. If your 8 GB machine barely holds the 2.0 stack, 2.3’s extra headroom needs may tip you into offloading.
- Team training cost. If prompts and presets are baked into docs and tutorials, 2.3 will force tiny but cumulative changes. Death by a thousand cuts is real.
On the flip side, if you’re starting fresh or you like tighter prompt adherence out of the box, 2.3 felt nicer to steer.

Upgrade Decision Checklist (ComfyUI / managed API)
Here’s what I actually walked through before switching a pipeline.
ComfyUI
- Duplicate the graph and swap in LTX-2.3 with a clean loader node. Don’t overwrite your 2.0 path.
- Re-tune your step/CFG pair. Start at ~80% of your old steps and drop CFG by 0.5–1.0.
- Validate seeds across 5–10 prompts you care about. Accept composition similarity, not pixel identity.
- Check high-res/tiling stages for OOMs. If tight, try fp8 or lower overlap.
- Disable LoRAs, then re-enable one at a time. If it misbehaves, plan a retrain instead of hacking weights.
- Update any negative prompt templates. Trim if results look cleaner; don’t carry baggage.
Managed API
- Pin the model version explicitly during testing.
- Recreate your preset with lowered CFG and steps, then compare output/latency.
- Confirm seed handling (bit width, types) in the docs.
- Check safety flags and content filters: you may need to relax or raise thresholds.
- Run a small batch side-by-side (2.0 vs 2.3) and have a human pick winners for your use case. Trust eyeballs over metrics here.
If most boxes stay green after a day of light testing, I upgrade. If two or more need duct tape, I wait.
FAQ
Do LTX-2 LoRAs work on LTX-2.3 without retraining?
In my tests, not reliably. The base model changes enough that styles drift or collapse. You might squeeze passable results with very gentle weights, but it’s fragile. Treat 2.3 as a new base and plan a fresh LoRA pass.
Can LTX-2 and LTX-2.3 checkpoints coexist in the same ComfyUI setup?
Yes. Keep them in separate folders, update your Checkpoint Loader node paths, and version-name your presets. I also tag outputs with the model in the filename so old images don’t get mixed in. It’s mundane, but it saves future-you a headache.
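For the filename tagging, a tiny helper is enough. The naming pattern here is just my convention; adapt it to whatever your asset pipeline expects.

```python
from datetime import date

# Small helper matching my habit of tagging outputs with the model version
# so old images don't get mixed in. The pattern is my own convention.

def output_name(prompt_slug: str, model: str, seed: int, when: date) -> str:
    return f"{when.isoformat()}_{model}_{prompt_slug}_s{seed}.png"

print(output_name("shelf-shot", "ltx-2.3", 42, date(2026, 3, 14)))
# 2026-03-14_ltx-2.3_shelf-shot_s42.png
```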
I’ll end on a small note: the first 2.3 image that made me pause was a simple product-on-shelf shot. The shelf lines were finally straight. Not dramatic, just one less thing to fix later. That’s usually how good upgrades feel.
Previous Posts:
- See what actually changed in LTX-2.3 and where it improves real outputs
- Learn how to use LTX-2.3 API endpoints effectively in production workflows
- Compare WAN 2.7 vs WAN 2.6 before deciding your next model upgrade
- Understand first/last frame control for more stable video generation
- Explore WAN 2.7 API setup for building scalable video pipelines





