mistral
mistralai/devstral-small

131,072 context · $0.10/M input tokens · $0.30/M output tokens

Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI.

Pricing

Pay-as-you-go

No upfront fees; pay only for what you actually use.

Input: $0.10 / M tokens
Output: $0.30 / M tokens



Model Introduction

mistralai/devstral-small

Devstral Small 1.1

Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI. Finetuned from Mistral Small 3.1 and released under the Apache 2.0 license, it features a 128k token context window and supports both Mistral-style function calling and XML output formats.

Designed for agentic coding workflows, Devstral Small 1.1 is optimized for tasks such as codebase exploration, multi-file edits, and integration into autonomous development agents like OpenHands and Cline. It achieves 53.6% on SWE-Bench Verified, surpassing all other open models on this benchmark, while remaining lightweight enough to run on a single 4090 GPU or Apple silicon machine. The model uses a Tekken tokenizer with a 131k vocabulary and is deployable via vLLM, Transformers, Ollama, LM Studio, and other OpenAI-compatible runtimes.
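The Mistral-style function calling noted above is exposed through the standard OpenAI `tools` parameter on OpenAI-compatible runtimes. A minimal sketch against the WaveSpeedAI endpoint; the `get_file_contents` tool and its schema are illustrative inventions for a coding-agent setting, not part of the model's API:

```python
import json

# Illustrative tool schema in the OpenAI function-calling format.
# "get_file_contents" is a hypothetical tool for a coding agent.
FILE_TOOL = {
    "type": "function",
    "function": {
        "name": "get_file_contents",
        "description": "Return the contents of a file in the repository.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}

def build_request(user_prompt: str) -> dict:
    """Assemble a chat.completions payload with the tool attached."""
    return {
        "model": "mistralai/devstral-small",
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [FILE_TOOL],
    }

if __name__ == "__main__":
    from openai import OpenAI  # imported here so the helpers above stay dependency-free

    client = OpenAI(api_key="YOUR_API_KEY",
                    base_url="https://llm.wavespeed.ai/v1")
    resp = client.chat.completions.create(**build_request(
        "What does setup.py import? Read the file first."))
    call = resp.choices[0].message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```

An agent would execute the returned tool call locally and append the result as a `tool` message before asking the model to continue.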


Why It Stands Out

  • 24B-parameter model, light enough to run on a single RTX 4090 or an Apple silicon machine
  • 131,072-token context window for long document handling
  • Competitive pricing at $0.10/$0.30 per million input/output tokens

Key Features

  • Context Window: 131,072 tokens
  • Max Output: N/A
  • Vision: Not supported
  • Function Calling: Supported

Specifications

Specification      Value
Provider           Mistral AI
Model Type         Large Language Model (LLM)
Architecture       N/A
Context Window     131,072 tokens
Max Output         N/A
Input              Text
Output             Text
Vision             Not supported
Function Calling   Supported

Pricing

Token Type   Cost per Million Tokens
Input        $0.10
Output       $0.30

How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.
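The two steps above can be sketched with the OpenAI SDK. The system prompt and the unified-diff output format are illustrative choices, not requirements of the API:

```python
# Step 1: write the prompt - task, context, and desired output format.
def build_messages(task: str, context: str, output_format: str) -> list:
    """Pack the task, its context, and the output-format request into chat messages."""
    return [
        {"role": "system",
         "content": f"You are a software engineering assistant. Respond in {output_format}."},
        {"role": "user", "content": f"{task}\n\nContext:\n{context}"},
    ]

# Step 2: submit - send the messages and read back the response.
if __name__ == "__main__":
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_API_KEY",
                    base_url="https://llm.wavespeed.ai/v1")
    messages = build_messages(
        task="Find the bug in this loop.",
        context="for i in range(1, len(items)): print(items[i])",
        output_format="a short unified diff",
    )
    resp = client.chat.completions.create(
        model="mistralai/devstral-small", messages=messages)
    print(resp.choices[0].message.content)
```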

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: mistralai/devstral-small


API Usage

Python SDK

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="mistralai/devstral-small",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "mistralai/devstral-small",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
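Assuming the endpoint also honors the standard OpenAI `stream=True` flag (not stated on this page), responses can be consumed token by token:

```python
def collect_deltas(deltas) -> str:
    """Join incremental content deltas, skipping empty keep-alive chunks."""
    return "".join(d for d in deltas if d)

if __name__ == "__main__":
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_API_KEY",
                    base_url="https://llm.wavespeed.ai/v1")
    # stream=True asks the server to send chunks as tokens are generated.
    stream = client.chat.completions.create(
        model="mistralai/devstral-small",
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,
    )
    print(collect_deltas(chunk.choices[0].delta.content for chunk in stream))
```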

Notes

  • Model: mistralai/devstral-small
  • Provider: Mistral AI

Info

Provider: mistral
Type: llm

Supported Features

Input: Text
Output: Text
Context: 131,072
Max Output: -
Vision: -
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: mistralai/devstral-small

Devstral Small API

mistralai/devstral-small

Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI.

Input: $0.10 /M
Output: $0.30 /M
Context: 131K
Tool Calling: Supported

Try Devstral Small on WaveSpeedAI

Access Devstral Small through our unified API: OpenAI-compatible, no cold starts, and transparent billing.

Frequently Asked Questions about Devstral Small

How much does the Devstral Small API cost?

WaveSpeedAI pricing: $0.10 per million input tokens and $0.30 per million output tokens. Prompt caching and batch processing are billed separately and can significantly reduce the effective cost of long-context, highly repetitive tasks.

How large is Devstral Small's context window?

Devstral Small supports up to 131K context tokens per request, with output of up to — tokens.

Is Devstral Small OpenAI-compatible?

Yes. WaveSpeedAI serves Devstral Small through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK's base URL at that address and use your WaveSpeedAI API key; no other code changes are required.

How do I get started with Devstral Small?

Sign in to WaveSpeedAI, create an API key under Access Keys, and send requests to https://llm.wavespeed.ai/v1/chat/completions using the model ID shown above. New accounts receive free credits for trying out Devstral Small.
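In practice, the key created under Access Keys is usually read from an environment variable rather than hard-coded. A minimal sketch; the `WAVESPEED_API_KEY` variable name is an assumption, not an official convention:

```python
import os

def api_key_from_env(var: str = "WAVESPEED_API_KEY") -> str:
    """Read the API key from the environment, failing loudly if it is unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} to your WaveSpeedAI API key")
    return key

if __name__ == "__main__":
    from openai import OpenAI

    client = OpenAI(api_key=api_key_from_env(),
                    base_url="https://llm.wavespeed.ai/v1")
    resp = client.chat.completions.create(
        model="mistralai/devstral-small",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(resp.choices[0].message.content)
```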

Related LLM APIs