
ElevenLabs Eleven V3 Timing


ElevenLabs Eleven V3 Timing converts text to natural speech and returns alignment metadata (per-character and per-word timestamps in JSON) for precise subtitles, karaoke effects, and lip-sync. It supports voice_id, similarity, stability, and optional Speaker Boost, and is priced at $0.10 per 1,000 characters. Ready-to-use REST inference API with no cold starts and affordable pricing.

Features

ElevenLabs — Eleven V3 TTS with Alignment

Eleven V3 (Alignment) turns text into natural speech and, at the same time, returns precise timing data for every character and word. You get an audio file plus alignment metadata, so you can drop the voice straight onto a timeline for subtitles, karaoke, lip-sync and fine-grained editing.


🎧 What this model does

  • Generates high-quality speech with natural pronunciation, pacing and intonation.
  • Returns alignment metadata with per-character / per-word timestamps (start–end in seconds).
  • Makes it easy to build auto-subtitles, word-highlighting, reading trainers, talking avatars and precise dubbing tools.

Compared with a normal TTS model (audio only), this version also outputs an alignment object containing, for example:

  • a list of characters or words
  • corresponding end times in seconds (and, where available, start times)

so the text and audio are tightly locked to each other.
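
As a sketch, the alignment arrays can be zipped back together into per-character spans. This assumes the documented character_end_times_seconds array plus a parallel characters array (exact field names may vary in practice) and derives each start time from the previous character's end time when explicit start times are absent:

```python
# Sketch: pair each character with a (start, end) span in seconds.
# Field names follow the example in this page; adjust to the actual
# alignment payload your response contains.

def char_timings(alignment):
    """Return (char, start_s, end_s) tuples from an alignment object."""
    chars = alignment["characters"]
    ends = alignment["character_end_times_seconds"]
    timings = []
    prev_end = 0.0
    for ch, end in zip(chars, ends):
        timings.append((ch, prev_end, end))
        prev_end = end  # next character starts where this one ended
    return timings

example = {
    "characters": ["H", "i", "!"],
    "character_end_times_seconds": [0.08, 0.19, 0.30],
}
print(char_timings(example))
```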


🧩 Typical uses

  • Automatic subtitle generation with timecodes (SRT / VTT).
  • Word-by-word highlighting or karaoke-style lyrics.
  • Lip-sync for digital humans or 2D/3D characters driven by word/phoneme timing.
  • Precise VO replacement and edit point detection inside existing videos.
  • Language-learning apps: follow-along reading, shadowing and pronunciation practice.

🔧 Input parameters

  • text (required) – Script to be spoken. Recommended up to 5,000 characters per call.

  • voice_id (required) – Which Eleven voice to use (for example: Alice, George, Lily).

  • similarity (0–1) – How closely the output should match the base voice’s timbre and style.

  • stability (0–1) – Higher values give more consistent delivery; lower values allow more expressive variation.

  • use_speaker_boost (bool) – Enables English text normalisation, which improves reading of numbers, dates and measurements.


📤 Output format

Each run returns:

  • audio – URL of the generated audio file (MP3).

  • alignment – JSON metadata including:

    • list of characters and/or words in order
    • corresponding timing arrays (for example character_end_times_seconds) with values in seconds

You can parse this metadata to:

  • build subtitles with exact in/out times
  • drive on-screen highlighting in sync with the audio
  • control animation, lip-sync or visual effects based on specific words
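
For example, a minimal SRT builder over timings derived from the alignment data (the timing values below are hypothetical):

```python
# Sketch: format alignment timings as an SRT subtitle entry.

def srt_time(seconds):
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_entry(index, start_s, end_s, text):
    """One numbered SRT block: index, in/out times, subtitle text."""
    return f"{index}\n{srt_time(start_s)} --> {srt_time(end_s)}\n{text}\n"

print(srt_entry(1, 0.0, 1.84, "Welcome to our advanced text-to-speech system!"))
```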

💰 Pricing

Billing is based on the length of the input text.

  • Base rate: 0.10 USD per 1,000 characters.
  • The character count is rounded up to the next 1,000 characters; anything below 1,000 characters is billed as one full 1,000-character block.
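
The billing rule above can be sketched in a few lines (illustrative only; the service's invoice is authoritative):

```python
import math

# Sketch of the documented billing rule: $0.10 per 1,000 characters,
# with the count rounded up to the next full 1,000-character block.

def cost_usd(text):
    blocks = max(1, math.ceil(len(text) / 1000))
    return round(blocks * 0.10, 2)

print(cost_usd("hello"))     # short input still bills one block
print(cost_usd("x" * 2500))  # 2,500 characters bill as three blocks
```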


🚀 How to use

  1. Fill text with your script.
  2. Select a voice_id from the supported ElevenLabs voices.
  3. Optionally tune similarity, stability, and enable use_speaker_boost for English number-heavy content.
  4. Run the request.
  5. Download the audio file and read the alignment JSON to build subtitles, highlights or animation timing.
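
Those steps can be sketched with only the Python standard library. The endpoint paths mirror the curl examples in the API Endpoints section; the speak helper, the one-second polling interval, and reading the key from the WAVESPEED_API_KEY environment variable are illustrative choices, not part of an official SDK:

```python
import json
import os
import time
import urllib.request

API = "https://api.wavespeed.ai/api/v3"

def result_url(request_id):
    """URL for polling a submitted task's result."""
    return f"{API}/predictions/{request_id}/result"

def _call(url, payload=None):
    """POST payload as JSON (or GET if payload is None); return data."""
    headers = {
        "Authorization": f"Bearer {os.environ['WAVESPEED_API_KEY']}",
        "Content-Type": "application/json",
    }
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(url, data=data, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

def speak(text, voice_id="Alice"):
    """Submit a TTS task, then poll until it completes or fails."""
    task = _call(f"{API}/elevenlabs/eleven-v3/timing",
                 {"text": text, "voice_id": voice_id})
    while True:
        result = _call(result_url(task["id"]))
        if result["status"] in ("completed", "failed"):
            return result  # outputs holds the audio URL when completed
        time.sleep(1)
```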

📝 Notes and tips

  • Punctuation matters: clear sentence boundaries improve rhythm and alignment accuracy.
  • For very long scripts, split them into several calls if you need section-level control on the timeline.
  • If you see errors like “invalid voice id”, cross-check with the most recent voice list and update the parameter value.
  • When building players or editors, always treat the alignment timings (in seconds) as the single source of truth for where each character or word should appear.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/elevenlabs/eleven-v3/timing" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "text": "Welcome to our advanced text-to-speech system! Experience high-quality voice synthesis with natural pronunciation and clear articulation.",
    "voice_id": "Alice",
    "similarity": 1,
    "stability": 0.5,
    "use_speaker_boost": true
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| text | string | Yes | Welcome to our advanced text-to-speech system! Experience high-quality voice synthesis with natural pronunciation and clear articulation. | - | Text to convert to speech. Every character is 1 token. Maximum 10,000 characters. Use `<#x#>` between words to control pause duration (0.01–99.99 s). |
| voice_id | string | Yes | Alice | Aria, Roger, Sarah, Laura, Charlie, George, Callum, River, Liam, Charlotte, Alice, Matilda, Will, Jessica, Eric, Chris, Brian, Daniel, Lily, Bill | The voice to use for speech generation |
| similarity | number | No | 1 | 0.00 ~ 1.00 | Higher values boost overall voice clarity and target-speaker similarity. Very high values can cause artifacts, so tune this setting to find the optimal value. |
| stability | number | No | 0.5 | 0.00 ~ 1.00 | Voice stability: higher values give more consistent delivery, lower values allow more expressive variation |
| use_speaker_boost | boolean | No | true | - | Enables English text normalization, which improves performance in number-reading scenarios |
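
As an illustration of the pause markup documented for text, a small helper can splice a `<#x#>` tag between two sentences (the two-decimal formatting is an assumption; any value in the 0.01–99.99 s range should work):

```python
# Sketch: insert the documented <#x#> pause markup between two phrases.
# "<#0.50#>" requests a 0.5-second pause.

def with_pause(before, after, seconds):
    return f"{before} <#{seconds:.2f}#> {after}"

print(with_pause("Welcome back.", "Let's begin.", 0.5))
```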

Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | URLs of the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |

Result Request Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | URLs of the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
© 2025 WaveSpeedAI. All rights reserved.