meta-llama/llama-3.2-11b-vision-instruct

131,072-token context · $0.24/M input tokens · $0.24/M output tokens

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels in tasks such as image captioning and...

Pricing

Pay as you go

No upfront fees; pay only for what you use.

Input: $0.24 / M Tokens
Output: $0.24 / M Tokens


Model Introduction

Meta-Llama llama-3.2-11b-vision-instruct


Why It Stands Out

  • Large Language Model architecture for efficient processing
  • 131,072-token context window for long-document handling
  • Competitive pricing at $0.24/$0.24 per million input/output tokens

Key Features

  • Context Window: 131,072 tokens
  • Max Output: 16,384 tokens
  • Vision: Supported
  • Function Calling: Supported
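Since function calling is listed as supported, tool definitions can presumably be passed in the standard OpenAI function-calling format alongside the messages. The `get_weather` schema below is a hypothetical illustration for this purpose, not part of the WaveSpeedAI API:

```python
import json

# Hypothetical tool definition in the OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"],
        },
    },
}]

# Passed as the `tools` argument, e.g.:
# client.chat.completions.create(model=..., messages=..., tools=tools)
print(json.dumps(tools, indent=2))
```

If the model decides to call the tool, the response's `choices[0].message.tool_calls` carries the function name and JSON-encoded arguments for your code to execute.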

Specifications

Specification     Value
Provider          Meta-Llama
Model Type        Large Language Model (LLM)
Architecture      N/A
Context Window    131,072 tokens
Max Output        16,384 tokens
Input             Text, Image
Output            Text
Vision            Supported
Function Calling  Supported

Pricing

Token Type  Cost per Million Tokens
Input       $0.24
Output      $0.24

How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: meta-llama/llama-3.2-11b-vision-instruct


API Usage

Python SDK

from openai import OpenAI

# Point the official OpenAI SDK at the WaveSpeedAI-compatible endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-11b-vision-instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "meta-llama/llama-3.2-11b-vision-instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Notes

  • Model: meta-llama/llama-3.2-11b-vision-instruct
  • Provider: Meta-Llama

Info

Provider: meta
Type: llm

Supported Features

Input: Text, Image
Output: Text
Context: 131,072
Max Output: 16,384
Vision: ✓ Supported
Function Calling: Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: meta-llama/llama-3.2-11b-vision-instruct

Llama 3.2 11b Vision Instruct API

meta-llama/llama-3.2-11b-vision-instruct

Input: $0.24 /M
Output: $0.24 /M
Context: 131K
Max Output: 16K
Vision: Supported

Try Llama 3.2 11b Vision Instruct on WaveSpeedAI

Access Llama 3.2 11b Vision Instruct through our unified API: OpenAI-compatible, no cold starts, transparent pricing.

Open Playground

Frequently Asked Questions about Llama 3.2 11b Vision Instruct

How much does Llama 3.2 11b Vision Instruct cost through the API?

Pricing on WaveSpeedAI: $0.24 per million input tokens and $0.24 per million output tokens. Prompt caching and batch processing are billed separately and reduce the effective cost on long, repetitive workloads.
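At these rates, the cost of a single request is simple arithmetic (the token counts below are illustrative):

```python
INPUT_PRICE = 0.24   # USD per million input tokens
OUTPUT_PRICE = 0.24  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed per-token rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE

# A 100,000-token prompt with a 10,000-token completion:
print(round(estimate_cost(100_000, 10_000), 4))  # 0.0264
```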

What is the context window of Llama 3.2 11b Vision Instruct?

Llama 3.2 11b Vision Instruct supports up to 131K tokens of context, with up to 16K output tokens per request.

Is Llama 3.2 11b Vision Instruct OpenAI-compatible?

Yes. WaveSpeedAI serves Llama 3.2 11b Vision Instruct through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are needed.

How do I get started with Llama 3.2 11b Vision Instruct?

Sign in to WaveSpeedAI, create an API key under Access Keys, then send requests to https://llm.wavespeed.ai/v1/chat/completions with the model id shown above. New accounts receive free credits to test Llama 3.2 11b Vision Instruct.
