Pruna AI P-Image Edit LoRA

Pruna AI P-Image Edit LoRA is a fast AI image-editing model that edits and transforms images with LoRA-based customization. It exposes a ready-to-use REST inference API for text-guided image editing, style changes, character consistency, product image updates, marketing assets, and custom editing workflows, with simple integration, no cold starts, and flat per-image pricing.

Features

Pruna AI P-Image Edit LoRA edits one or more input images using a natural-language instruction, with optional LoRA guidance for stronger style or edit control. It is designed for workflows where you want prompt-based image editing together with a LoRA trained specifically for the Pruna p-image-edit-lora pipeline.


Why Choose This?

  • LoRA-guided image editing: Edit images with natural-language instructions while steering the result with a compatible LoRA.

  • Multi-image reference support: Use one to five input images to guide appearance, structure, composition, or scene transformation.

  • Edit-specific LoRA control: Apply lora_weights and tune lora_scale for stronger stylistic or transformation control.

  • Flexible aspect ratio handling: Use match_input_image to follow the first input image by default, or select a preset aspect ratio when needed.

  • Private LoRA support: Use hf_api_token when accessing a private or gated Hugging Face LoRA repository.

  • Simple fixed pricing: Each run costs a flat per-image price.


Parameters

| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text instruction describing the desired edit. |
| images | Yes | One to five reference images used for the edit. When using multiple images, describe their roles clearly in the prompt. |
| lora_weights | No | Optional Hugging Face LoRA path, such as huggingface.co/PrunaAI/p-image-edit-next-scene-lora/weights.safetensors. The LoRA should be trained for p-image-edit-lora. |
| lora_scale | No | LoRA strength. Default: 1. Range: -1 to 3. |
| hf_api_token | No | Optional Hugging Face token for private or gated LoRA repositories. |
| aspect_ratio | No | Output aspect ratio. Default: match_input_image, which follows the first input image. Other supported values: 1:1, 16:9, 9:16, 4:3, 3:4, 3:2, and 2:3. |
| output_format | No | Output image format: png, jpeg, or webp. |
| seed | No | Random seed. Use -1 for random generation. |
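Putting the parameters together, a request body looks like the sketch below. The image URL is a placeholder, not a real resource; the LoRA path is the example from the table, and the checks simply mirror the documented constraints.

```python
# Sketch of a request body built from the parameters above. The image URL is a
# placeholder, not a real resource; the LoRA path is the example from the table.
payload = {
    "prompt": "Make this scene look like the next scene style.",
    "images": ["https://example.com/input.jpg"],  # 1 to 5 reference images
    "lora_weights": "huggingface.co/PrunaAI/p-image-edit-next-scene-lora/weights.safetensors",
    "lora_scale": 1,                      # documented range: -1 to 3
    "aspect_ratio": "match_input_image",  # follow the first input image
    "output_format": "png",
    "seed": -1,                           # -1 = random
}

# Client-side checks mirroring the documented constraints.
assert payload["prompt"], "prompt is required"
assert 1 <= len(payload["images"]) <= 5, "images takes 1 to 5 entries"
assert -1 <= payload["lora_scale"] <= 3, "lora_scale outside documented range"
```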

How to Use

  1. Upload your reference images — provide one to five images you want to use for the edit.
  2. Write your prompt — describe what should change and what should remain unchanged.
  3. Add a LoRA (optional) — provide lora_weights if you want LoRA-guided editing.
  4. Adjust LoRA strength (optional) — set lora_scale to control how strongly the LoRA affects the result.
  5. Choose aspect ratio — keep match_input_image to follow the first input image, or select a preset ratio if needed.
  6. Add a Hugging Face token (optional) — use hf_api_token if your LoRA is private or gated.
  7. Choose output format — select png, jpeg, or webp.
  8. Set a seed (optional) — use -1 for random output, or a fixed value for more reproducible edits.
  9. Submit — run the model and download the edited image.
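The steps above can be sketched in Python. The helper below only assembles the request pieces; sending them with your HTTP client of choice is left out, and the function name and defaults are illustrative, not part of the API.

```python
import os

API_URL = "https://api.wavespeed.ai/api/v3/pruna-ai/p-image/edit-lora"

def build_edit_request(prompt, images, lora_weights=None, lora_scale=1,
                       hf_api_token=None, aspect_ratio="match_input_image",
                       output_format="png", seed=-1):
    """Assemble URL, headers, and JSON body for a submission (steps 1-8)."""
    if not prompt:
        raise ValueError("prompt is required")
    if not 1 <= len(images) <= 5:
        raise ValueError("images takes 1 to 5 entries")
    body = {
        "prompt": prompt,
        "images": list(images),
        "lora_scale": lora_scale,
        "aspect_ratio": aspect_ratio,
        "output_format": output_format,
        "seed": seed,
    }
    if lora_weights:                  # step 3: optional LoRA
        body["lora_weights"] = lora_weights
    if hf_api_token:                  # step 6: private or gated LoRA
        body["hf_api_token"] = hf_api_token
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('WAVESPEED_API_KEY', '')}",
    }
    return API_URL, headers, body
```

POST the body as JSON with the returned headers (step 9), then poll the result endpoint with the returned task ID.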

Example Prompt

Make this scene look like the next scene style.


Pricing

Just $0.01 per generated image.


Best Use Cases

  • Scene-to-scene style transfer — Rework one scene to match the visual style of another.
  • Character-consistent editing — Preserve identity while changing mood, styling, or composition.
  • Reference-guided transformations — Use multiple images and a LoRA for tighter edit control.
  • Creative adaptation — Turn existing images into new variants for storytelling, design, or campaign work.
  • Edit-specific LoRA workflows — Apply LoRAs trained for editing rather than text-to-image generation.

Pro Tips

  • Use a LoRA trained specifically for p-image-edit-lora for best compatibility.
  • Text-to-image LoRAs should be used with p-image-lora, not this model.
  • When using multiple images, explain each image’s role clearly in the prompt.
  • Use match_input_image when you want to preserve the framing of the first input image.
  • Adjust lora_scale gradually to balance prompt influence and LoRA influence.
  • If your LoRA is private or gated on Hugging Face, provide hf_api_token.
  • Reuse the same seed when you want more consistent edit iterations.
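One way to apply the tips on lora_scale and seed reuse is to sweep lora_scale across a few values while holding the seed fixed, so the runs differ only in LoRA strength. The values, image URL, and dict layout below are illustrative:

```python
# Fixed-seed base request; only lora_scale varies across the sweep.
base = {
    "prompt": "Make this scene look like the next scene style.",
    "images": ["https://example.com/input.jpg"],  # placeholder URL
    "lora_weights": "huggingface.co/PrunaAI/p-image-edit-next-scene-lora/weights.safetensors",
    "seed": 1234,  # fixed seed so iterations stay comparable
}
sweep = [{**base, "lora_scale": s} for s in (0.25, 0.5, 1.0, 1.5, 2.0)]
```

Submitting each payload in sweep and comparing the outputs side by side makes it easy to pick the scale where prompt and LoRA influence balance.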

Notes

  • Both prompt and images are required.
  • images supports one to five input images.
  • lora_weights is optional.
  • LoRAs for this model should be trained for p-image-edit-lora.
  • Text-to-image LoRAs should be used with p-image-lora instead.
  • aspect_ratio defaults to match_input_image, which follows the first input image.
  • seed uses -1 for random generation.
  • The backend sends turbo=false and disables the safety checker by default in the internal mapping; these are not user-facing controls.
  • Pricing is fixed at $0.01 per generated image.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task (prompt and images are required; the image URL below is a placeholder)
curl --location --request POST "https://api.wavespeed.ai/api/v3/pruna-ai/p-image/edit-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Make this scene look like the next scene style.",
    "images": ["https://example.com/input.jpg"],
    "lora_scale": 1,
    "aspect_ratio": "match_input_image",
    "output_format": "png",
    "seed": -1,
    "enable_sync_mode": false,
    "enable_base64_output": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
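The two calls above can be combined into a polling loop. The sketch below keeps the loop network-free by injecting fetch, any callable that GETs a URL and returns the parsed JSON body in the documented response shape; the function name and timeout defaults are illustrative.

```python
import time

RESULT_URL = "https://api.wavespeed.ai/api/v3/predictions/{request_id}/result"

def wait_for_result(fetch, request_id, timeout=120.0, interval=1.0):
    """Poll the result endpoint until the task completes, fails, or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = fetch(RESULT_URL.format(request_id=request_id))["data"]
        if data["status"] == "completed":
            return data["outputs"]          # URLs of the generated images
        if data["status"] == "failed":
            raise RuntimeError(data.get("error") or "task failed")
        time.sleep(interval)                # still created or processing
    raise TimeoutError(f"task {request_id} did not finish in {timeout}s")
```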

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | Edit instruction. |
| images | array | Yes | [] | 1 ~ 5 items | Reference image URLs. Upload 1 to 5 images. |
| lora_weights | string | No | - | - | LoRA weights in Hugging Face format, for example huggingface.co/PrunaAI/p-image-edit-next-scene-lora/weights.safetensors. Use LoRAs trained for p-image-edit-lora. |
| lora_scale | number | No | 1 | -1.00 ~ 3.00 | Scale of the LoRA weights. |
| hf_api_token | string | No | - | - | Optional Hugging Face API token used to access private or gated LoRA weights. |
| aspect_ratio | string | No | match_input_image | match_input_image, 1:1, 16:9, 9:16, 4:3, 3:4, 3:2, 2:3 | Aspect ratio of the generated image. match_input_image follows the first input image. |
| output_format | string | No | png | png, jpeg, webp | Output image format. |
| seed | integer | No | -1 | -1 ~ 2147483647 | Random seed. -1 means random. |
| enable_sync_mode | boolean | No | false | - | If true, the request waits for the result to be generated and uploaded before returning, so the output is included directly in the response. Only available through the API. |
| enable_base64_output | boolean | No | false | - | If enabled, the output is returned as a Base64-encoded string instead of a URL. Only available through the API. |

Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | URLs of the generated content (empty until the task completes) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Task status: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
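For reference, a submission response in the documented shape can be unpacked like this. The values below are hand-written examples, not real API output:

```python
# Hand-written example response matching the documented fields.
resp = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "task-123",                  # illustrative task ID
        "model": "pruna-ai/p-image/edit-lora",
        "outputs": [],                     # empty until status is completed
        "urls": {"get": "https://api.wavespeed.ai/api/v3/predictions/task-123/result"},
        "status": "created",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
    },
}

task_id = resp["data"]["id"]            # pass this to the result endpoint
poll_url = resp["data"]["urls"]["get"]  # or GET this URL directly
```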

Result Request Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | URLs of the generated content |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Task status: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
© 2025 WaveSpeedAI. All rights reserved.