speech-to-text

OpenAI Whisper Large v3

wavespeed-ai/openai-whisper-with-video

OpenAI Whisper Large v3 (Video-to-Text) delivers high-accuracy multilingual transcription directly from video files, with automatic language detection and optional timestamped, subtitle-ready segments. Built for stable production use with a ready-to-use REST API, fast response, no cold starts, and predictable pricing.


enable_timestamps: Enable to generate word-level timestamps for the transcription. Note: this may increase processing time.
enable_sync_mode: If set to true, the request waits for the result to be generated and uploaded before returning, so you get the result directly in the response. This option is only available through the API.


{ "text": "Distinguished guests and dear friends, good evening. Time flies, and here we are tonight, gathered together for this wonderful celebration. Thank you all for being here. Your presence truly makes this evening shine. Tonight, let's set aside our daily routines and embrace the joy and warmth around us. May the laughter, music, and memories we create become treasures we carry in our hearts. A magical and unforgettable night ahead." }



README

OpenAI Whisper (Large-v3) — Video-to-Text

OpenAI Whisper — Video-to-Text is a production-ready speech recognition endpoint powered by Whisper large-v3. It transcribes or translates speech directly from video files by extracting audio and returning clean, readable text, with optional word-level timestamps for subtitle and alignment workflows.

Built for stable production use with a ready-to-use REST API, no cold starts, and predictable pay-per-second pricing.

Key capabilities

  • Video input support (audio is extracted automatically)
  • Two tasks: transcribe and translate
  • Language selection: auto detection or manual language code
  • Optional word-level timestamps via enable_timestamps
  • Optional sync response via enable_sync_mode (API only)

Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| video | Yes | Input video (upload or public URL). |
| language | No | Language code or auto (default). |
| task | No | transcribe or translate. |
| enable_timestamps | No | Generate word-level timestamps (may increase processing time). |
| prompt | No | Short guidance text to steer transcription/translation style. |
| enable_sync_mode | No | API only: wait for the result and return it directly in the response. |
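The parameters above map directly onto a JSON request body. A minimal sketch of assembling that body in Python; the exact endpoint URL and header names are not shown on this page, so only the payload shape (taken from the table) is illustrated here:

```python
import json

def build_payload(video_url, language="auto", task="transcribe",
                  enable_timestamps=False, prompt=None,
                  enable_sync_mode=False):
    """Assemble the JSON body from the parameters in the table above."""
    payload = {
        "video": video_url,            # upload reference or public URL
        "language": language,          # "auto" or a specific language code
        "task": task,                  # "transcribe" or "translate"
        "enable_timestamps": enable_timestamps,
        "enable_sync_mode": enable_sync_mode,  # API only
    }
    if prompt:                         # optional style/terminology guidance
        payload["prompt"] = prompt
    return payload

body = build_payload("https://example.com/talk.mp4", enable_timestamps=True)
print(json.dumps(body, indent=2))
```

You would POST this body to the model's REST endpoint with your API key; consult the WaveSpeedAI API reference for the concrete URL and authentication header.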

How to use

  1. Upload a video (or paste a public video URL).
  2. Set language:
     • Use auto for most cases.
     • Choose a specific language code if detection is unstable.
  3. Choose task:
     • transcribe for same-language transcription
     • translate for translated output
  4. (Optional) Enable enable_timestamps if you need subtitle timing/alignment.
  5. (Optional) Add a prompt to guide formatting or terminology (names, jargon, punctuation).
  6. Run and read the transcript output.

API note: enable_sync_mode is not shown as a normal UI option; it’s only available through the API.
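When enable_sync_mode is off (the default for async APIs of this kind), the request typically returns immediately and you poll for the finished transcript. A minimal polling sketch; the status field names ("status", "completed") and the polling flow are assumptions, not confirmed by this page, so check the API reference before relying on them:

```python
import time

def poll_result(fetch, interval=1.0, max_attempts=30):
    """Call `fetch()` until the job reports completion, then return it.

    `fetch` is any callable returning the parsed status JSON, e.g. a
    function that GETs the result URL from the initial response.
    """
    for _ in range(max_attempts):
        result = fetch()
        if result.get("status") == "completed":
            return result
        time.sleep(interval)
    raise TimeoutError("transcription did not finish in time")

# Usage with a stub fetcher (a real client would issue HTTP GETs):
responses = iter([{"status": "processing"},
                  {"status": "completed", "text": "Distinguished guests..."}])
done = poll_result(lambda: next(responses), interval=0.0)
print(done["text"])
```

With enable_sync_mode set to true, this loop is unnecessary: the initial request blocks until the transcript is ready and returns it directly.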

Pricing

| Mode | enable_timestamps | Price per second |
| --- | --- | --- |
| Standard | false | $0.001 / s |
| Timestamped | true | $0.002 / s |

Examples

| Video length | Standard | Timestamped |
| --- | --- | --- |
| 60 s | $0.06 | $0.12 |
| 600 s (10 min) | $0.60 | $1.20 |
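These figures are plain per-second arithmetic, which you can sanity-check (or use for budgeting) with a one-liner. The rates come from the pricing table above; everything else here is illustrative:

```python
# Per-second rates from the pricing table above (USD).
RATES = {"standard": 0.001, "timestamped": 0.002}

def estimate_cost(duration_s, mode="standard"):
    """Estimated cost in USD for a video of `duration_s` seconds."""
    return round(duration_s * RATES[mode], 4)

print(estimate_cost(60))                  # 0.06
print(estimate_cost(600, "timestamped"))  # 1.2
```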

Notes

  • If you use a URL, it must be publicly accessible; a preview thumbnail appearing in the UI is a good sanity check.
  • Timestamps are best for subtitles and editing, but may take longer to process.
  • For best accuracy, use clear speech and minimize background music/noise.

More Models to Try

  • OpenAI Whisper Turbo on WaveSpeedAI — Faster, cost-efficient speech-to-text for real-time or high-volume transcription pipelines while keeping strong multilingual recognition quality.