Pruna AI P-Image Edit LoRA
Playground
Try it on WavespeedAI! Pruna AI P-Image Edit LoRA is a fast AI image-editing model that edits and transforms images with LoRA-based customization. It offers a ready-to-use REST inference API for text-guided image editing, style changes, character consistency, product-image updates, marketing assets, and custom AI editing workflows, with simple integration, no cold starts, and affordable pricing.
Features
Pruna AI P-Image Edit LoRA
Pruna AI P-Image Edit LoRA edits one or more input images using a natural-language instruction, with optional LoRA guidance for stronger style or edit control. It is designed for workflows where you want prompt-based image editing together with a LoRA trained specifically for the Pruna p-image-edit-lora pipeline.
Why Choose This?
- LoRA-guided image editing: Edit images with natural-language instructions while steering the result with a compatible LoRA.
- Multi-image reference support: Use one to five input images to guide appearance, structure, composition, or scene transformation.
- Edit-specific LoRA control: Apply `lora_weights` and tune `lora_scale` for stronger stylistic or transformation control.
- Flexible aspect-ratio handling: Use `match_input_image` to follow the first input image by default, or select a preset aspect ratio when needed.
- Private LoRA support: Use `hf_api_token` when accessing a private or gated Hugging Face LoRA repository.
- Simple fixed pricing: Each run uses a flat per-image price.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text instruction describing the desired edit. |
| images | Yes | One to five reference images used for the edit. When using multiple images, describe their roles clearly in the prompt. |
| lora_weights | No | Optional Hugging Face LoRA path, such as huggingface.co/PrunaAI/p-image-edit-next-scene-lora/weights.safetensors. The LoRA should be trained for p-image-edit-lora. |
| lora_scale | No | LoRA strength. Default: 0.5. Official range: -1 to 3. |
| hf_api_token | No | Optional Hugging Face token for private or gated LoRA repositories. |
| aspect_ratio | No | Output aspect ratio. Default: match_input_image, which follows the first input image. Other supported values: 1:1, 16:9, 9:16, 4:3, 3:4, 3:2, and 2:3. |
| output_format | No | Output image format: png, jpeg, or webp. |
| seed | No | Random seed. Use -1 for random generation. |
How to Use
- Upload your reference images — provide one to five images you want to use for the edit.
- Write your prompt — describe what should change and what should remain unchanged.
- Add a LoRA (optional) — provide `lora_weights` if you want LoRA-guided editing.
- Adjust LoRA strength (optional) — set `lora_scale` to control how strongly the LoRA affects the result.
- Choose aspect ratio — keep `match_input_image` to follow the first input image, or select a preset ratio if needed.
- Add a Hugging Face token (optional) — use `hf_api_token` if your LoRA is private or gated.
- Choose output format — select `png`, `jpeg`, or `webp`.
- Set a seed (optional) — use `-1` for random output, or a fixed value for more reproducible edits.
- Submit — run the model and download the edited image.
Example Prompt
Make this scene look like the next scene style.
Pricing
Just $0.01 per generated image.
Best Use Cases
- Scene-to-scene style transfer — Rework one scene to match the visual style of another.
- Character-consistent editing — Preserve identity while changing mood, styling, or composition.
- Reference-guided transformations — Use multiple images and a LoRA for tighter edit control.
- Creative adaptation — Turn existing images into new variants for storytelling, design, or campaign work.
- Edit-specific LoRA workflows — Apply LoRAs trained for editing rather than text-to-image generation.
Pro Tips
- Use a LoRA trained specifically for p-image-edit-lora for best compatibility.
- Text-to-image LoRAs should be used with p-image-lora, not this model.
- When using multiple images, explain each image’s role clearly in the prompt.
- Use `match_input_image` when you want to preserve the framing of the first input image.
- Adjust `lora_scale` gradually to balance prompt influence and LoRA influence.
- If your LoRA is private or gated on Hugging Face, provide `hf_api_token`.
- Reuse the same `seed` when you want more consistent edit iterations.
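One practical way to apply the `lora_scale` and `seed` tips together is to sweep the scale with a fixed seed, so successive runs differ only in LoRA strength. A small sketch (the prompt, image URL, and scale values are illustrative placeholders):

```python
# Sweep lora_scale with a fixed seed so runs differ only in LoRA strength.
base = {
    "prompt": "Make this scene look like the next scene style.",
    "images": ["https://example.com/input.png"],  # placeholder URL
    "lora_weights": "huggingface.co/PrunaAI/p-image-edit-next-scene-lora/weights.safetensors",
    "seed": 42,  # fixed seed keeps iterations comparable
}

# One payload per candidate scale, all within the documented -1..3 range.
sweep = [{**base, "lora_scale": s} for s in (0.3, 0.5, 0.8, 1.2)]
```

Submitting each payload in `sweep` and comparing the outputs side by side makes it easy to pick the scale where the LoRA is strong enough without overpowering the prompt.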
Notes
- Both `prompt` and `images` are required.
- `images` supports one to five input images.
- `lora_weights` is optional.
- LoRAs for this model should be trained for p-image-edit-lora.
- Text-to-image LoRAs should be used with p-image-lora instead.
- `aspect_ratio` defaults to `match_input_image`, which follows the first input image.
- `seed` uses `-1` for random generation.
- The backend sends `turbo=false` and disables the safety checker by default in the internal mapping; these are not user-facing controls.
- Pricing is fixed at $0.01 per generated image.
Related Models
- Pruna AI P-Image Text-to-Image — Generate images directly from prompts.
- Pruna AI P-Image Text-to-Image LoRA — Generate images from prompts with LoRA style control.
- Pruna AI P-Image Edit — Edit images with natural-language instructions without a LoRA.
- Pruna AI P-Image Upscale — Enhance and enlarge existing images.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/pruna-ai/p-image/edit-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Make this scene look like the next scene style.",
    "images": ["https://example.com/input.png"],
    "lora_scale": 1,
    "aspect_ratio": "match_input_image",
    "output_format": "png",
    "seed": -1,
    "enable_sync_mode": false,
    "enable_base64_output": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | Edit instruction describing the desired edit. |
| images | array | Yes | [] | 1 ~ 5 items | Reference image URLs. Upload 1 to 5 images. |
| lora_weights | string | No | - | - | LoRA weights in Hugging Face format, for example huggingface.co/PrunaAI/p-image-edit-next-scene-lora/weights.safetensors. Use LoRAs trained for p-image-edit-lora. |
| lora_scale | number | No | 1 | -1.00 ~ 3.00 | Scale of the LoRA weights. |
| hf_api_token | string | No | - | - | Optional Hugging Face API token used to access private or gated LoRA weights. |
| aspect_ratio | string | No | match_input_image | match_input_image, 1:1, 16:9, 9:16, 4:3, 3:4, 3:2, 2:3 | Aspect ratio of the generated image. match_input_image follows the first input image. |
| output_format | string | No | png | png, jpeg, webp | Output image format. |
| seed | integer | No | -1 | -1 ~ 2147483647 | Random seed. -1 means random. |
| enable_sync_mode | boolean | No | false | - | If set to true, the request waits until the result is generated and uploaded before returning, so the result is available directly in the response. This property is only available through the API. |
| enable_base64_output | boolean | No | false | - | If enabled, the output is returned as a Base64-encoded string instead of a URL. This property is only available through the API. |
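The constraints in the table above can be checked client-side before submitting, which surfaces mistakes without spending an API call. A minimal validation sketch; the ranges and enums come straight from the table, and the function itself is illustrative, not an official SDK:

```python
# Allowed values taken from the request-parameter table.
ASPECT_RATIOS = {"match_input_image", "1:1", "16:9", "9:16", "4:3", "3:4", "3:2", "2:3"}
OUTPUT_FORMATS = {"png", "jpeg", "webp"}


def validate_payload(p: dict) -> list[str]:
    """Return a list of violations of the parameter table (empty if valid)."""
    errors = []
    if not p.get("prompt"):
        errors.append("prompt is required")
    images = p.get("images") or []
    if not 1 <= len(images) <= 5:
        errors.append("images must contain 1 to 5 URLs")
    if not -1.0 <= p.get("lora_scale", 1) <= 3.0:
        errors.append("lora_scale must be between -1 and 3")
    if p.get("aspect_ratio", "match_input_image") not in ASPECT_RATIOS:
        errors.append("unsupported aspect_ratio")
    if p.get("output_format", "png") not in OUTPUT_FORMATS:
        errors.append("unsupported output_format")
    if not -1 <= p.get("seed", -1) <= 2147483647:
        errors.append("seed out of range")
    return errors
```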
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID used to request the result) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |