meta-llama/llama-3-70b-instruct
8,192 context · $0.51/M input tokens · $0.74/M output tokens
Pay as you go
No upfront fees; pay only for what you use
Use the following code example to integrate with our API:
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1",
)

response = client.chat.completions.create(
    model="meta-llama/llama-3-70b-instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
```
Meta's latest class of model (Llama 3) launched with a variety of sizes and flavors. This 70B instruct-tuned version was optimized for high-quality dialogue use cases.
It has demonstrated strong performance compared to leading closed-source models in human evaluations.
To read more about the model release, click here. Usage of this model is subject to Meta's Acceptable Use Policy.
| Specification | Value |
|---|---|
| Provider | Meta |
| Model Type | Large Language Model (LLM) |
| Architecture | Decoder-only transformer |
| Context Window | 8,192 tokens |
| Max Output | 8,000 tokens |
| Input | Text |
| Output | Text |
| Vision | Not supported |
| Function Calling | Supported |
| Token Type | Cost per Million Tokens |
|---|---|
| Input | $0.51 |
| Output | $0.74 |
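At these rates, per-request cost is simple arithmetic. A minimal sketch using the listed per-million-token prices (the helper name is ours):

```python
# Listed per-million-token prices for this model, converted to per-token rates.
INPUT_RATE = 0.51 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.74 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 2,000-token prompt with a 500-token reply:
print(f"${estimate_cost(2000, 500):.6f}")
```

The real counts come back on the response itself: `response.usage.prompt_tokens` and `response.usage.completion_tokens` in the OpenAI SDK.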
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: meta-llama/llama-3-70b-instruct
```shell
curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "meta-llama/llama-3-70b-instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
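API calls can fail transiently (rate limits, timeouts), so wrapping the request in an exponential-backoff retry is a common pattern. The sketch below is ours; the delay schedule is an illustrative assumption, not documented WaveSpeedAI behavior.

```python
import time

def backoff_delays(retries: int = 4, base: float = 1.0, cap: float = 30.0) -> list[float]:
    """Exponential backoff schedule: base * 2**attempt seconds, capped at `cap`."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]

def call_with_retries(make_request, retries: int = 4, base: float = 1.0):
    """Call `make_request` (e.g. a lambda wrapping client.chat.completions.create),
    retrying on exceptions with exponential backoff."""
    last_err = None
    for delay in backoff_delays(retries, base):
        try:
            return make_request()
        except Exception as err:  # in practice, catch the SDK's rate-limit/timeout errors
            last_err = err
            time.sleep(delay)
    raise last_err
```

Usage would look like `call_with_retries(lambda: client.chat.completions.create(...))`; catching only the SDK's retryable error types is preferable to the blanket `Exception` shown here.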
Access Llama 3 70b Instruct through our unified, OpenAI-compatible API, with no cold starts and transparent pricing.
Pricing on WaveSpeedAI: $0.51 per million input tokens and $0.74 per million output tokens. Prompt caching and batch processing are billed separately and reduce the effective cost on long, repetitive workloads.
Llama 3 70b Instruct supports a context window of up to 8K tokens, with up to 8K output tokens per request.
Yes. WaveSpeedAI serves Llama 3 70b Instruct through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are required.
Sign in to WaveSpeedAI, create an API key under Access Keys, then send requests to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to test Llama 3 70b Instruct.