moonshot

moonshotai/kimi-k2.5

262,144 context · $0.60/M input tokens · $3.00/M output tokens

Kimi K2.5 is Moonshot AI's native multimodal model, delivering state-of-the-art visual coding capability and a self-directed agent swarm paradigm. Built on Kimi K2 with continued pretraining over approximately 15T mixed visual and text tokens, it delivers strong performance in general reasoning, visual coding, and agentic tool-calling.

Pricing

Pay-per-use

No upfront cost; pay only for what you use.

Input: $0.60 / M tokens
Output: $3.00 / M tokens

Try the model



Model Introduction

Moonshotai kimi-k2.5


Kimi K2.5 is Moonshot AI's native multimodal model, delivering state-of-the-art visual coding capability and a self-directed agent swarm paradigm. Built on Kimi K2 with continued pretraining over approximately 15T mixed visual and text tokens, it delivers strong performance in general reasoning, visual coding, and agentic tool-calling.


Why It Stands Out

  • Large Language Model architecture for efficient processing
  • 262,144-token context window for long-document handling
  • Competitive pricing at $0.60 input / $3.00 output per million tokens

Key Features

  • Context Window: 262,144 tokens
  • Max Output: 65,535 tokens
  • Vision: Supported
  • Function Calling: Supported
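Since function calling is listed as supported and the endpoint is OpenAI-compatible, a tool-calling request can be sketched with the standard OpenAI `tools` schema. This is a sketch, not an official example: `get_weather` is a hypothetical tool, and the `WAVESPEED_API_KEY` environment-variable guard is our own convention so the script is safe to run without a key.

```python
import os

# Hypothetical tool definition in the standard OpenAI tools schema;
# get_weather is an illustrative name, not a real WaveSpeedAI tool.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Only call the API when a key is configured in the environment.
if os.environ.get("WAVESPEED_API_KEY"):
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["WAVESPEED_API_KEY"],
                    base_url="https://llm.wavespeed.ai/v1")
    response = client.chat.completions.create(
        model="moonshotai/kimi-k2.5",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    # If the model chose to call the tool, inspect the call arguments here.
    for call in response.choices[0].message.tool_calls or []:
        print(call.function.name, call.function.arguments)
```

Your application executes the requested function itself and sends the result back in a follow-up `tool` message, per the usual OpenAI tool-calling loop.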

Specifications

Specification       Value
Provider            Moonshot AI
Model Type          Large Language Model (LLM)
Architecture        N/A
Context Window      262,144 tokens
Max Output          65,535 tokens
Input               Text, Image
Output              Text
Vision              Supported
Function Calling    Supported

Pricing

Token Type    Cost per Million Tokens
Input         $0.60
Output        $3.00

How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: moonshotai/kimi-k2.5


API Usage

Python SDK

from openai import OpenAI

# Point the official OpenAI SDK at the WaveSpeedAI endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2.5",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

# The assistant's reply text.
print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "moonshotai/kimi-k2.5",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
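Because the endpoint is OpenAI-compatible, streaming should also work through the SDK's standard `stream=True` flag. A minimal sketch under that assumption; the `WAVESPEED_API_KEY` environment-variable guard is our own convention so the script is safe to run without a key:

```python
import os

# The request payload; streaming only changes how the response is delivered.
messages = [{"role": "user", "content": "Write a haiku about the sea."}]

# Only call the API when a key is configured in the environment.
if os.environ.get("WAVESPEED_API_KEY"):
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["WAVESPEED_API_KEY"],
                    base_url="https://llm.wavespeed.ai/v1")
    stream = client.chat.completions.create(
        model="moonshotai/kimi-k2.5",
        messages=messages,
        stream=True,  # tokens arrive incrementally as chunks
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()
```

Streaming is useful for chat UIs, where printing each delta as it arrives keeps latency low even on long completions.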

Notes

  • Model: moonshotai/kimi-k2.5
  • Provider: Moonshotai

Info

Provider: moonshot
Type: llm

Supported Features

Input: Text, Image
Output: Text
Context: 262,144
Max output: 65,535
Vision: ✓ Supported
Function Calling: ✓ Supported
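Since input supports both text and images, an image prompt can be sketched with the OpenAI-style `image_url` content part, assuming the endpoint follows that message format. The image URL is a placeholder, and the `WAVESPEED_API_KEY` environment-variable guard is our own convention so the script is safe to run without a key:

```python
import os

# OpenAI-style multimodal message: one text part plus one image_url part.
# The image URL is a placeholder; replace it with a reachable image.
messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/photo.jpg"}},
    ],
}]

# Only call the API when a key is configured in the environment.
if os.environ.get("WAVESPEED_API_KEY"):
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["WAVESPEED_API_KEY"],
                    base_url="https://llm.wavespeed.ai/v1")
    response = client.chat.completions.create(
        model="moonshotai/kimi-k2.5",
        messages=messages,
    )
    print(response.choices[0].message.content)
```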

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: moonshotai/kimi-k2.5

Kimi K2.5 API

moonshotai/kimi-k2.5

Kimi K2.5 is Moonshot AI's native multimodal model, delivering state-of-the-art visual coding capability and a self-directed agent swarm paradigm. Built on Kimi K2 with continued pretraining over approximately 15T mixed visual and text tokens, it delivers strong performance in general reasoning, visual coding, and agentic tool-calling.

Input: $0.60 /M
Output: $3.00 /M
Context: 262K
Max output: 66K
Vision: Supported
Tool use: Supported

Try Kimi K2.5 on WaveSpeedAI

Access Kimi K2.5 through our unified API: OpenAI-compatible, no cold starts, transparent pricing.

Kimi K2.5 FAQ

How much does Kimi K2.5 cost via the API?

Pricing on WaveSpeedAI: $0.60 per million input tokens and $3.00 per million output tokens. Prompt caching and batch processing are billed separately and lower the effective cost for long, repetitive workloads.

What is Kimi K2.5's context window?

Kimi K2.5 supports up to 262K tokens of context and up to 66K output tokens per request.

Is Kimi K2.5 OpenAI-compatible?

Yes. WaveSpeedAI exposes Kimi K2.5 through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are needed.

How do I get started with Kimi K2.5?

Sign in to WaveSpeedAI, create an API key under Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits for testing Kimi K2.5.

Related LLM APIs