
text-to-audio

wavespeed-ai/dia-tts

Dia generates realistic dialogue directly from transcripts. Audio conditioning enables emotion control, and the model produces natural nonverbals such as laughter and throat clearing. Costs $0.04 per 1,000 characters.


Your request will cost $0.04 per run.

For $1 you can run this model approximately 25 times.
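As a quick check on the figures above, a few lines of Python can estimate the cost of a transcript from its length, assuming the $0.04 per 1,000 characters rate quoted in the model description (the snippet is purely illustrative):

```python
# Rough cost estimate at $0.04 per 1,000 characters (rate from the model description).
RATE_PER_1000_CHARS = 0.04

def estimate_cost(transcript: str) -> float:
    """Approximate USD cost to synthesize one transcript."""
    return len(transcript) / 1000 * RATE_PER_1000_CHARS

example = "[S1] Welcome back to the show. [S2] Thanks for having me. (laughs)"
print(f"{len(example)} characters -> ${estimate_cost(example):.4f}")
```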

README

DIA-TTS is a 1.6-billion-parameter TTS model that excels at producing lifelike dialogue from textual input. Unlike traditional TTS systems, which often sound monotonous or robotic, DIA-TTS captures the nuances of human speech, including emotional tone and non-verbal cues. This capability is achieved through advanced audio conditioning: users can guide the model's output by providing reference audio clips that influence the speaker's emotional delivery and intonation.

Key Features

  • Emotion and Tone Control: DIA-TTS supports audio conditioning, enabling precise control over the emotion and tone of the generated speech via a reference audio clip.
  • Nonverbal Communication Generation: The model interprets inline tags such as (laughs) and (coughs) and reproduces them as natural sounds, enhancing the realism of the synthesized speech.
  • Multi-Speaker Dialogue Synthesis: DIA-TTS supports speaker tagging with [S1], [S2], etc., enabling the generation of multi-character conversations from a single transcript (see the request sketch after this list). This is particularly useful for applications like audiobooks and interactive storytelling.
  • Open-Source and Accessible: DIA-TTS is available under an open-source Apache 2.0 license, with pretrained model checkpoints and inference code accessible via Hugging Face and GitHub, facilitating community-driven innovation and transparency.
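The speaker tags and nonverbal markers above are written inline in the input text. Below is a minimal sketch of submitting such a transcript to the hosted model; the endpoint URL, request fields (including the reference-audio parameter for emotion control), and response handling are assumptions for illustration, not the documented WaveSpeedAI API.

```python
# Hypothetical request sketch: the endpoint path, field names, and response
# handling here are assumptions, not the documented WaveSpeedAI API.
import requests

# Inline transcript format from the feature list above:
# [S1]/[S2] switch speakers; parenthesized tags such as (laughs) or
# (clears throat) are rendered as natural nonverbal sounds.
transcript = (
    "[S1] Did you hear the news about the launch? "
    "[S2] (laughs) I did, I could not believe it. "
    "[S1] (clears throat) Alright, let's get into the details."
)

resp = requests.post(
    "https://api.wavespeed.ai/v1/wavespeed-ai/dia-tts",  # illustrative URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "text": transcript,
        # Optional reference clip for emotion/tone conditioning (assumed field name).
        "reference_audio": None,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())  # actual response schema is not specified here
```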

Use Cases

  • Virtual Assistants and Chatbots: Enhance user interactions by providing emotionally expressive and natural-sounding responses.
  • Audiobook and Podcast Production: Generate engaging and dynamic narration with appropriate emotional tone and non-verbal cues.
  • Interactive Storytelling and Gaming: Create immersive experiences by synthesizing multi-character dialogues with distinct voices and emotions.
  • Accessibility Tools: Develop assistive technologies that offer more natural and expressive speech output for users with visual or reading impairments.

Accelerated Inference

Our accelerated inference approach leverages advanced optimization technology from WavespeedAI. This fusion technique significantly reduces computational overhead and latency, enabling rapid audio generation without compromising quality. The system is designed to handle large-scale inference tasks efficiently while ensuring that real-time applications achieve an optimal balance between speed and accuracy. For further details, please refer to the blog post.