OpenAI Whisper
Try it on WavespeedAI! Whisper Large v3 speech-to-text: instant, accurate multilingual transcripts with automatic language detection and punctuation. Upload audio to get transcripts through a ready-to-use REST API, with no cold starts and affordable pricing.
Features
OpenAI Whisper (Large-v3)
OpenAI Whisper (Large-v3) is a production-ready speech recognition model that transcribes or translates audio into clean, readable text. With support for dozens of languages, optional word-level timestamps, and flexible task modes, it is well suited to subtitling, transcription, and multilingual workflows.
Why It Stands Out
- Transcribe or translate: Choose between same-language transcription or translation to English.
- Multilingual support: Transcribe audio in dozens of languages with automatic language detection.
- Word-level timestamps: Generate precise timing data for subtitle alignment and editing workflows.
- Prompt-guided output: Steer formatting, terminology, or punctuation with custom prompts.
- Prompt Enhancer: Built-in AI-powered prompt optimization for better transcription guidance.
- Flexible input: Supports direct audio upload or public URL.
Parameters
| Parameter | Required | Description |
|---|---|---|
| audio | Yes | Upload or link to an audio file (MP3 / WAV / M4A, etc.). |
| language | No | Language code for transcription; use “auto” for automatic detection. |
| task | No | Choose “transcribe” for same-language or “translate” for English output. |
| enable_timestamps | No | Generate word-level timestamps (may increase processing time). |
| prompt | No | Short guidance text to steer transcription style or terminology. |
| enable_sync_mode | No | Wait for result before returning response (API only). |
How to Use
- Upload your audio — drag and drop a file or paste a public URL.
- Select language — choose a specific language or use “auto” for detection.
- Choose task — select “transcribe” for same-language output or “translate” for English.
- Enable timestamps (optional) — turn on for word-level timing data.
- Add a prompt (optional) — guide formatting, punctuation, or specific terminology.
- Click Run and wait for transcription to complete.
- Copy or download the transcribed text.
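The steps above map directly onto the API. Here is a minimal Python sketch of a synchronous request using only the standard library; the endpoint and field names come from the API reference below, while the helper names (`build_payload`, `transcribe`) and the placeholder audio URL are our own.

```python
import json
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v3"

def build_payload(audio_url, language="auto", task="transcribe",
                  enable_timestamps=False, prompt=None):
    """Assemble the JSON body for a transcription request."""
    payload = {
        "audio": audio_url,
        "language": language,
        "task": task,
        "enable_timestamps": enable_timestamps,
        "enable_sync_mode": True,  # wait for the transcript in the same call
    }
    if prompt:
        payload["prompt"] = prompt
    return payload

def transcribe(audio_url, api_key, **opts):
    """Submit audio and return the prediction data from the response."""
    req = urllib.request.Request(
        f"{API_BASE}/wavespeed-ai/openai-whisper",
        data=json.dumps(build_payload(audio_url, **opts)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["data"]
```

With `enable_sync_mode` left at its default of `true`, the transcript is returned in this single call; set it to `false` and poll the result endpoint instead for long files.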
Best Use Cases
- Subtitle Generation — Create accurate, timed transcripts for video subtitling.
- Meeting Transcription — Convert recorded meetings, interviews, and calls into searchable text.
- Translation Workflows — Translate foreign-language audio directly to English text.
- Content Repurposing — Turn podcasts, webinars, and lectures into written content.
- Accessibility — Generate transcripts to make audio content accessible.
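For the subtitle use case, word-level timestamps need to be folded into cue format. A hedged sketch, assuming the timestamps arrive as a list of `{"word", "start", "end"}` dicts with times in seconds (the actual response shape is not specified on this page, so adapt the field names to what you receive):

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def words_to_srt(words, max_words=7):
    """Group word-level timestamps into numbered SRT cues.

    `words` is assumed to be a list of {"word", "start", "end"} dicts.
    """
    cues = []
    for i in range(0, len(words), max_words):
        chunk = words[i:i + max_words]
        text = " ".join(w["word"] for w in chunk)
        cues.append(f"{len(cues) + 1}\n"
                    f"{srt_timestamp(chunk[0]['start'])} --> "
                    f"{srt_timestamp(chunk[-1]['end'])}\n"
                    f"{text}\n")
    return "\n".join(cues)
```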
Pricing
| Mode | Price |
|---|---|
| Standard (no timestamps) | $0.001 / s |
| With timestamps enabled | $0.002 / s |
Total cost = duration of audio (in seconds) × price per second
Examples
- 60s audio (standard) → 60 × $0.001 = $0.06
- 60s audio (with timestamps) → 60 × $0.002 = $0.12
- 10 min (600s) audio (standard) → 600 × $0.001 = $0.60
- 10 min (600s) audio (with timestamps) → 600 × $0.002 = $1.20
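The pricing rule is simple enough to estimate ahead of time. A small helper using the per-second rates from the table above:

```python
def transcription_cost(duration_s, enable_timestamps=False):
    """Cost in USD: audio duration in seconds times the per-second rate.

    Rates from the pricing table: $0.001/s standard, $0.002/s with timestamps.
    """
    rate = 0.002 if enable_timestamps else 0.001
    return round(duration_s * rate, 4)
```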
Pro Tips for Best Quality
- Use clear audio with minimal background noise for optimal accuracy.
- Specify the language manually if auto-detection is inconsistent.
- Enable timestamps only when needed for subtitles or alignment — it doubles the cost.
- Add a prompt to guide transcription — include names, jargon, or formatting preferences.
- Use “translate” task for non-English audio when you need English output.
Notes
- Ensure uploaded audio URLs are publicly accessible.
- Timestamps are best for subtitles and editing, but may take longer to process.
- Processing time varies based on audio duration and current queue load.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/openai-whisper" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "audio": "https://example.com/sample.mp3",
    "language": "auto",
    "task": "transcribe",
    "enable_timestamps": false,
    "enable_sync_mode": true
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
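When `enable_sync_mode` is `false`, the submit call returns immediately with a task ID and you poll the result endpoint until the status reaches a terminal state. A standard-library Python sketch of that loop (the polling interval and timeout are our own choices, not API requirements):

```python
import json
import time
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v3"
TERMINAL_STATUSES = {"completed", "failed"}  # terminal values of data.status

def is_terminal(status):
    """True once polling can stop: the task has completed or failed."""
    return status in TERMINAL_STATUSES

def poll_result(request_id, api_key, interval_s=2.0, timeout_s=300.0):
    """Poll GET /predictions/{id}/result until the task finishes."""
    url = f"{API_BASE}/predictions/{request_id}/result"
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {api_key}"})
        with urllib.request.urlopen(req) as resp:
            data = json.loads(resp.read())["data"]
        if is_terminal(data["status"]):
            return data
        time.sleep(interval_s)
    raise TimeoutError(f"task {request_id} did not finish in {timeout_s}s")
```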
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| audio | string | Yes | - | - | Audio file to transcribe. Provide an HTTPS URL or upload a file (MP3, WAV, FLAC up to 60 minutes). |
| language | string | No | auto | auto, af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh, yue | Language spoken in the audio. Set to 'auto' for automatic language detection (default). |
| task | string | No | transcribe | transcribe, translate | The task to perform: 'transcribe' outputs text in the source language; 'translate' outputs English. |
| enable_timestamps | boolean | No | false | - | Enable to generate word-level timestamps for the transcription. Note: This may increase processing time. |
| prompt | string | No | - | - | An optional text to provide as a prompt to guide the model's style or continue a previous audio segment. The prompt should be in the same language as the audio. |
| enable_sync_mode | boolean | No | true | - | If true, the request waits for the result to be generated and uploaded before returning, so the transcript is available directly in the response. Only available through the API. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID passed in the request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |