meta-llama/llama-3-8b-instruct
8,192 context · $0.03/M input tokens · $0.04/M output tokens
Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high-quality dialogue use cases. It has demonstrated strong performance compared to leading closed-source models in human evaluations.
Pay-as-you-go
No upfront cost; pay only for what you use
Use the code examples below to integrate with our API:
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="meta-llama/llama-3-8b-instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high-quality dialogue use cases.
It has demonstrated strong performance compared to leading closed-source models in human evaluations.
See Meta's release announcement to read more about the model. Usage of this model is subject to Meta's Acceptable Use Policy.
| Specification | Value |
|---|---|
| Provider | Meta |
| Model Type | Large Language Model (LLM) |
| Architecture | Transformer (decoder-only) |
| Context Window | 8,192 tokens |
| Max Output | 16,384 tokens |
| Input | Text |
| Output | Text |
| Vision | Not supported |
| Function Calling | Supported |
| Token Type | Cost per Million Tokens |
|---|---|
| Input | $0.03 |
| Output | $0.04 |
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: meta-llama/llama-3-8b-instruct
curl https://llm.wavespeed.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "meta-llama/llama-3-8b-instruct",
"messages": [{"role": "user", "content": "Hello!"}]
}'
meta-llama/llama-3-8b-instruct
| Specification | Value |
|---|---|
| Input | $0.03 /M |
| Output | $0.04 /M |
| Context | 8K |
| Max Output | 16K |
| Tool Use | Supported |
Access Llama 3 8b Instruct through our unified, OpenAI-compatible API: no cold starts, transparent pricing.
Pricing on WaveSpeedAI: $0.03 per million input tokens and $0.04 per million output tokens. Prompt caching and batch processing are billed separately and reduce the effective cost for long, repetitive workloads.
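At these rates, per-request cost is easy to estimate from token counts. A small helper with the rates above hard-coded (check the pricing page for current values, and note it ignores any caching or batch discounts):

```python
INPUT_RATE = 0.03   # USD per million input tokens
OUTPUT_RATE = 0.04  # USD per million output tokens

def estimate_cost(input_tokens, output_tokens):
    """Estimated USD cost of one request, before any discounts."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. a full 8K-token prompt with a 1K-token reply:
# estimate_cost(8192, 1024) -> about $0.00029
```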
Llama 3 8b Instruct supports up to 8K tokens of context and up to 16K output tokens per request.
Yes. WaveSpeedAI exposes Llama 3 8b Instruct through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are required.
Sign in to WaveSpeedAI, create an API key under Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to evaluate Llama 3 8b Instruct.