google/gemma-3-27b-it

131,072 context · $0.08/M input tokens · $0.16/M output tokens

Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling.

Pricing

Pay-per-use

No upfront cost; pay only for what you use.

Input: $0.08 / M tokens
Output: $0.16 / M tokens

Model Introduction

Google gemma-3-27b-it

Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Gemma 3 27B is Google's latest open model and the successor to Gemma 2.


Why It Looks Great

  • Large Language Model architecture for efficient processing
  • 131,072-token context window for long document handling
  • Competitive pricing at $0.08/$0.16 per million input/output tokens

Key Features

  • Context Window: 131,072 tokens
  • Max Output: 16,384 tokens
  • Vision: Supported
  • Function Calling: Supported
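Since function calling is listed as supported, it may help to see the shape of the OpenAI-style tool-calling flow. The sketch below is offline and hypothetical: the `get_weather` tool, its handler, and the simulated tool call are invented for illustration; in a real request, `tools` is passed to `client.chat.completions.create(...)` and the call comes back in `response.choices[0].message.tool_calls`.

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" schema;
# this list would be passed as the `tools` parameter of the request.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# Hypothetical local implementation the tool call dispatches to.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Simulated tool call, shaped like the model would return it in
# message.tool_calls[i].function: a name plus JSON-encoded arguments.
simulated_call = {"name": "get_weather", "arguments": json.dumps({"city": "Rome"})}

# Dispatch: decode the arguments and invoke the matching handler.
handlers = {"get_weather": get_weather}
args = json.loads(simulated_call["arguments"])
result = handlers[simulated_call["name"]](**args)
print(result)  # in a real flow, this result is sent back as a "tool" role message
```

The decoded result is then appended to `messages` with role `"tool"` so the model can produce its final answer.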

Specifications

Specification | Value
Provider | Google
Model Type | Large Language Model (LLM)
Architecture | N/A
Context Window | 131,072 tokens
Max Output | 16,384 tokens
Input | Text, Image
Output | Text
Vision | Supported
Function Calling | Supported

Pricing

Token Type | Cost per Million Tokens
Input | $0.08
Output | $0.16

How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.
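Because the model also accepts image input, a vision prompt uses the OpenAI-compatible multipart content format. A minimal sketch of how such a `messages` payload is typically shaped (the image bytes here are a placeholder, not a real picture; in practice you would read a file, e.g. `open("photo.png", "rb").read()`):

```python
import base64

# Placeholder bytes standing in for real image data.
image_bytes = b"not-a-real-image"
b64 = base64.b64encode(image_bytes).decode("ascii")

# OpenAI-compatible multimodal message: a text part plus an image part
# embedded as a base64 data URL.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }
]

# This list would be passed as `messages` to
# client.chat.completions.create(model="google/gemma-3-27b-it", messages=messages).
print(messages[0]["content"][1]["image_url"]["url"][:22])
```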

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: google/gemma-3-27b-it


API Usage

Python SDK

from openai import OpenAI

# Point the OpenAI SDK at the WaveSpeedAI OpenAI-compatible endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Send a chat completion request to Gemma 3 27B.
response = client.chat.completions.create(
    model="google/gemma-3-27b-it",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "google/gemma-3-27b-it",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
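The cURL request above is plain HTTP, so the same body can be built with Python's standard library if you prefer not to use the SDK. A sketch (the network call is commented out so the snippet stays offline; it requires a valid key to actually run):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"

# Same JSON body as the cURL example's -d payload.
body = {
    "model": "google/gemma-3-27b-it",
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    "https://llm.wavespeed.ai/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(req.get_method(), req.full_url)
```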

Notes

  • Model: google/gemma-3-27b-it
  • Provider: Google

Info

Provider: google
Type: llm

Supported Features

Input: Text, Image
Output: Text
Context: 131,072
Max output: 16,384
Vision: ✓ Supported
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: google/gemma-3-27b-it

Gemma 3 27b It API

google/gemma-3-27b-it

Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling.

Input: $0.08 /M
Output: $0.16 /M
Context: 131K
Max output: 16K
Vision: Supported
Tool use: Supported

Try Gemma 3 27b It on WaveSpeedAI

Access Gemma 3 27b It through our unified API: OpenAI-compatible, no cold starts, transparent pricing.

Frequently Asked Questions about Gemma 3 27b It

How much does Gemma 3 27b It cost via API?

Pricing on WaveSpeedAI: $0.08 per million input tokens and $0.16 per million output tokens. Prompt caching and batch processing are billed separately and reduce the effective cost of long, repetitive workloads.
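At these rates, per-request cost is simple arithmetic. A sketch, assuming the list prices above and ignoring any prompt-caching or batch discounts (the token counts are made-up example numbers):

```python
# WaveSpeedAI list prices for google/gemma-3-27b-it, per million tokens.
INPUT_PER_M = 0.08
OUTPUT_PER_M = 0.16

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the list prices above."""
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# Example: a 4,000-token prompt with a 1,000-token reply.
cost = request_cost(4_000, 1_000)
print(f"${cost:.6f}")  # $0.000480
```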

What is the context window of Gemma 3 27b It?

Gemma 3 27b It supports up to 131K tokens of context and up to 16K output tokens per request.
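A quick way to sanity-check that a prompt fits: exact counts depend on the tokenizer, but a rough rule of thumb of about 4 characters per token (an assumption, not an exact count; use a real tokenizer for precision) gives an estimate against the 131,072-token window with room reserved for up to 16,384 output tokens:

```python
CONTEXT_WINDOW = 131_072   # total tokens the model can attend to
MAX_OUTPUT = 16_384        # tokens reserved for the reply

def fits(prompt: str, max_output: int = MAX_OUTPUT) -> bool:
    # Rough heuristic: ~4 characters per token on average English text.
    est_tokens = len(prompt) // 4 + 1
    return est_tokens + max_output <= CONTEXT_WINDOW

print(fits("Hello!"))        # True
print(fits("x" * 600_000))   # False: ~150k estimated tokens exceeds the window
```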

Is Gemma 3 27b It OpenAI-compatible?

Yes. WaveSpeedAI exposes Gemma 3 27b It through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are required.

How do I get started with Gemma 3 27b It?

Sign in to WaveSpeedAI, create an API key under Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to try Gemma 3 27b It.

Related LLM APIs