OpenAI Whisper

Playground

Try it on WavespeedAI!

Whisper Large v3 speech-to-text: instant, accurate multilingual transcripts with automatic language detection and punctuation. Upload audio to get transcripts through a ready-to-use REST API with no cold starts and affordable pricing.

Features

OpenAI Whisper (Large-v3)

OpenAI Whisper (Large-v3) is a production-ready speech recognition model that transcribes or translates audio into clean, readable text. With support for dozens of languages, optional word-level timestamps, and flexible task modes, it is well suited to subtitling, transcription, and multilingual workflows.


Why It Stands Out

  • Transcribe or translate: Choose between same-language transcription or translation to English.
  • Multilingual support: Transcribe audio in dozens of languages with automatic language detection.
  • Word-level timestamps: Generate precise timing data for subtitle alignment and editing workflows.
  • Prompt-guided output: Steer formatting, terminology, or punctuation with custom prompts.
  • Prompt Enhancer: Built-in AI-powered prompt optimization for better transcription guidance.
  • Flexible input: Supports direct audio upload or public URL.

Parameters

Parameter | Required | Description
audio | Yes | Upload or link to an audio file (MP3 / WAV / M4A, etc.).
language | No | Language code for transcription; use “auto” for automatic detection.
task | No | Choose “transcribe” for same-language output or “translate” for English output.
enable_timestamps | No | Generate word-level timestamps (may increase processing time).
prompt | No | Short guidance text to steer transcription style or terminology.
enable_sync_mode | No | Wait for the result before returning the response (API only).

How to Use

  1. Upload your audio — drag and drop a file or paste a public URL.
  2. Select language — choose a specific language or use “auto” for detection.
  3. Choose task — select “transcribe” for same-language output or “translate” for English.
  4. Enable timestamps (optional) — turn on for word-level timing data.
  5. Add a prompt (optional) — guide formatting, punctuation, or specific terminology.
  6. Click Run and wait for transcription to complete.
  7. Copy or download the transcribed text.

Best Use Cases

  • Subtitle Generation — Create accurate, timed transcripts for video subtitling.
  • Meeting Transcription — Convert recorded meetings, interviews, and calls into searchable text.
  • Translation Workflows — Translate foreign-language audio directly to English text.
  • Content Repurposing — Turn podcasts, webinars, and lectures into written content.
  • Accessibility — Generate transcripts to make audio content accessible.

Pricing

Mode | Price
Standard (no timestamps) | $0.001 / s
With timestamps enabled | $0.002 / s

Total cost = duration of audio (in seconds) × price per second

Examples

  • 60s audio (standard) → 60 × $0.001 = $0.06
  • 60s audio (with timestamps) → 60 × $0.002 = $0.12
  • 10 min (600s) audio (standard) → 600 × $0.001 = $0.60
  • 10 min (600s) audio (with timestamps) → 600 × $0.002 = $1.20
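The arithmetic above is easy to script. A minimal sketch (the helper name is ours; the per-second rates are taken from the pricing table above):

```shell
# Estimate transcription cost from audio duration, using the rates above.
# $1 = duration in seconds, $2 = "yes" to include word-level timestamps
estimate_cost() {
  if [ "$2" = "yes" ]; then rate=0.002; else rate=0.001; fi
  awk -v d="$1" -v r="$rate" 'BEGIN { printf "%.2f\n", d * r }'
}

estimate_cost 600 no    # 10-minute audio, standard: 0.60
estimate_cost 600 yes   # same audio with timestamps: 1.20
```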

Pro Tips for Best Quality

  • Use clear audio with minimal background noise for optimal accuracy.
  • Specify the language manually if auto-detection is inconsistent.
  • Enable timestamps only when needed for subtitles or alignment — it doubles the cost.
  • Add a prompt to guide transcription — include names, jargon, or formatting preferences.
  • Use “translate” task for non-English audio when you need English output.

Notes

  • Ensure uploaded audio URLs are publicly accessible.
  • Timestamps are best for subtitles and editing, but may take longer to process.
  • Processing time varies based on audio duration and current queue load.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/openai-whisper" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "audio": "https://example.com/audio.mp3",
    "language": "auto",
    "task": "transcribe",
    "enable_timestamps": false,
    "enable_sync_mode": true
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

Parameter | Type | Required | Default | Range | Description
audio | string | Yes | - | - | Audio file to transcribe. Provide an HTTPS URL or upload a file (MP3, WAV, FLAC; up to 60 minutes).
language | string | No | auto | auto, af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh, yue | Language spoken in the audio. Set to 'auto' for automatic language detection (default).
task | string | No | transcribe | transcribe, translate | The task to perform: 'transcribe' keeps the source language; 'translate' outputs English.
enable_timestamps | boolean | No | false | - | Generate word-level timestamps for the transcription. Note: this may increase processing time.
prompt | string | No | - | - | Optional text to guide the model's style or continue a previous audio segment. The prompt should be in the same language as the audio.
enable_sync_mode | boolean | No | true | - | If true, the call waits for the result to be generated and uploaded before returning, so the result arrives directly in the response. Available through the API only.
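When the audio URL or prompt comes from a variable, quoting the JSON body by hand gets error-prone. A small sketch of building the payload with printf (the helper name and placeholder values are ours; the field names match the table above):

```shell
# Build the request body for a translate-with-timestamps call.
# $1 = audio URL, $2 = language code, $3 = prompt text
build_body() {
  printf '{"audio":"%s","language":"%s","task":"translate","enable_timestamps":true,"prompt":"%s"}' \
    "$1" "$2" "$3"
}

build_body "https://example.com/talk.mp3" "fr" "Speaker: Dr. Dupont"
```

Pass the result to curl with `--data-raw "$(build_body ...)"`. Note that printf does not escape embedded quotes, so for arbitrary prompt text a proper JSON encoder (e.g., jq) is safer.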

Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data.id | string | Unique identifier for the prediction (task ID)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when the status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.has_nsfw_contents | array | Array of booleans indicating NSFW detection for each output
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
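Without jq on the box, the flat string fields above can be pulled out with sed. A brittle-but-dependency-free sketch (fine for these simple fields, not general JSON; the helper name and sample response are ours):

```shell
# Extract a string field ("name":"value") from a single-line JSON blob.
# $1 = JSON string, $2 = field name
get_field() {
  printf '%s' "$1" | sed -n "s/.*\"$2\":\"\([^\"]*\)\".*/\1/p"
}

resp='{"code":200,"message":"success","data":{"id":"abc123","status":"created"}}'
get_field "$resp" id       # abc123
get_field "$resp" status   # created
```

In async mode you would then GET the URL in data.urls.get until data.status reaches completed or failed.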

Result Request Parameters

Parameter | Type | Required | Default | Description
id | string | Yes | - | Task ID

Result Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data | object | The prediction data object containing all details
data.id | string | Unique identifier for the prediction (the ID supplied in the request)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when the status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
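With enable_sync_mode set to false, the submit call returns immediately and you poll the result endpoint until data.status is completed or failed. A sketch of that loop, with the network call stubbed out so the control flow is self-contained (in practice the stubbed line would be the curl GET shown earlier, plus a short sleep between polls):

```shell
# Poll until the task reaches a terminal status.
# The stub pretends the task finishes on the third poll.
polls=0
status="processing"
while [ "$status" != "completed" ] && [ "$status" != "failed" ]; do
  polls=$((polls + 1))
  # real code: status=$(curl -s .../predictions/$requestId/result | parse data.status); sleep 1
  if [ "$polls" -ge 3 ]; then status="completed"; fi
done
echo "finished after $polls polls with status: $status"
```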
© 2025 WaveSpeedAI. All rights reserved.