
OpenAI Whisper Turbo

Accurate speech-to-text with OpenAI Whisper Large v3 Turbo: multilingual transcripts with automatic language detection and punctuation. Ready-to-use REST inference API with fast performance, no cold starts, and affordable pricing.

Features

OpenAI Whisper Turbo

OpenAI Whisper Turbo is a fast, accurate speech-to-text transcription model powered by OpenAI’s Whisper architecture. It converts audio into clean, readable text with support for multiple languages — ideal for transcription, subtitling, and voice-driven workflows.


Why It Stands Out

  • High-speed transcription: Optimized for fast processing without sacrificing accuracy.
  • Multilingual support: Transcribe audio in dozens of languages with automatic language detection.
  • Prompt-guided transcription: Steer output formatting, terminology, or punctuation with custom prompts.
  • Prompt Enhancer: Built-in AI-powered prompt optimization for better transcription guidance.
  • Flexible input: Supports direct audio upload or public URL.

Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| audio | Yes | Upload or link to an audio file (MP3 / WAV / M4A, etc.). |
| language | No | Language code for transcription; leave empty for auto-detection. |
| prompt | No | Short guidance text to steer transcription style or terminology. |
| enable_sync_mode | No | Wait for the result before returning the response (API only). |
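
To see these parameters in an actual request, here is a minimal sketch that sends only the required audio field and relies on the defaults for everything else. The audio URL is a placeholder; the endpoint and authentication are documented in the API Endpoints section below.

# Minimal submission sketch: only the required "audio" parameter (placeholder URL).
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/openai-whisper" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "audio": "https://example.com/recording.mp3"
}'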

How to Use

  1. Upload your audio — drag and drop a file or paste a public URL.
  2. Select language (optional) — choose a specific language or let the model auto-detect.
  3. Add a prompt (optional) — guide formatting, punctuation, or specific terminology.
  4. Click Run and wait for transcription to complete.
  5. Copy or download the transcribed text.

Best Use Cases

  • Meeting Transcription — Convert recorded meetings, interviews, and calls into searchable text.
  • Subtitle Generation — Create accurate transcripts for video subtitling workflows.
  • Content Repurposing — Turn podcasts, webinars, and lectures into written content.
  • Voice Notes — Quickly transcribe voice memos and audio notes.
  • Accessibility — Generate transcripts to make audio content accessible.

Pricing

| Metric | Price |
| --- | --- |
| Per second of audio | $0.0007 / s |

Total cost = duration of audio (in seconds) × $0.0007

Examples

  • 60s audio → 60 × $0.0007 = $0.042
  • 5 min (300s) audio → 300 × $0.0007 = $0.21
  • 30 min (1800s) audio → 1800 × $0.0007 = $1.26
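
To estimate the cost of a local file before uploading, you can read its duration and multiply by the per-second rate. The sketch below is illustrative only; it assumes ffprobe (part of FFmpeg) and bc are installed, and the file name is a placeholder.

# Estimate transcription cost from a local file's duration (assumes ffprobe and bc are installed).
DURATION=$(ffprobe -v error -show_entries format=duration -of csv=p=0 recording.mp3)
echo "Estimated cost: \$$(echo "$DURATION * 0.0007" | bc -l)"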

Pro Tips for Best Quality

  • Use clear audio with minimal background noise for optimal accuracy.
  • Specify the language if auto-detection is inconsistent.
  • Add a prompt to guide transcription — include names, jargon, or formatting preferences.
  • For long recordings, consider splitting into smaller segments for faster processing.
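
For the last tip, a common general-purpose approach (not specific to WaveSpeedAI) is FFmpeg's segment muxer. The sketch below splits a recording into 10-minute chunks without re-encoding; it assumes ffmpeg is installed, and the file names are placeholders.

# Split a long recording into 10-minute chunks without re-encoding (assumes ffmpeg is installed).
ffmpeg -i long_recording.mp3 -f segment -segment_time 600 -c copy chunk_%03d.mp3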

Notes

  • Ensure uploaded audio URLs are publicly accessible.
  • Processing time varies based on audio duration and current queue load.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/openai-whisper" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "audio": "<publicly accessible audio URL>",
    "language": "auto",
    "enable_sync_mode": true
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| audio | string | Yes | - | - | Audio file to transcribe. Provide an HTTPS URL or upload a file (MP3, WAV, FLAC; up to 60 minutes). |
| language | string | No | auto | auto, af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh, yue | Language spoken in the audio. Set to 'auto' for automatic language detection (default). |
| prompt | string | No | - | - | An optional text to provide as a prompt to guide the model's style or continue a previous audio segment. The prompt should be in the same language as the audio. |
| enable_sync_mode | boolean | No | true | - | If set to true, the request will wait for the result to be generated and uploaded before returning the response, so you get the result directly in the response. This property is only available through the API. |
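
As an example of steering the output (a specific language plus a prompt with names and jargon, as described above), a request might look like the sketch below; the audio URL and prompt text are placeholders.

# Pin the language and guide terminology with a prompt (audio URL and prompt text are placeholders).
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/openai-whisper" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "audio": "https://example.com/standup.mp3",
    "language": "en",
    "prompt": "Daily stand-up covering Kubernetes, OAuth2, and the WaveSpeedAI API. Use full punctuation.",
    "enable_sync_mode": true
}'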

Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when the status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |

Result Request Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID that was queried) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when the status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
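
Once data.status is completed, the fields above can be read directly from the result response. A small sketch, assuming jq is installed and using the field paths documented in the table:

# Pull status, output URLs, error, and inference time from a finished task (assumes jq is installed).
curl --silent --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
| jq '{status: .data.status, outputs: .data.outputs, error: .data.error, inference_ms: .data.timings.inference}'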