google/gemma-3n-e4b-it
32,768 context · $0.06/M input tokens · $0.12/M output tokens
Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enabling diverse tasks...
Pay per use
No upfront costs; you pay only for what you use
Use the code examples below to integrate with our API:
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1",
)

response = client.chat.completions.create(
    model="google/gemma-3n-e4b-it",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
```

google/gemma-3n-e4b-it
| Specification | Value |
|---|---|
| Provider | Google |
| Model Type | Large Language Model (LLM) |
| Architecture | N/A |
| Context Window | 32768 tokens |
| Max Output | 4096 tokens |
| Input | Text |
| Output | Text |
| Vision | Supported |
| Function Calling | Supported |
| Token Type | Cost per Million Tokens |
|---|---|
| Input | $0.06 |
| Output | $0.12 |
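The specification table above lists vision input as supported. A minimal sketch of a multimodal request payload, assuming the endpoint accepts OpenAI-style `image_url` content parts (the image URL below is a placeholder, not from this page):

```python
import json

# Placeholder image URL -- replace with a real, publicly reachable image.
IMAGE_URL = "https://example.com/photo.jpg"

# In the OpenAI-compatible chat format, a multimodal user message uses a
# list of content parts, mixing "text" and "image_url" entries.
payload = {
    "model": "google/gemma-3n-e4b-it",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": IMAGE_URL}},
            ],
        }
    ],
}

print(json.dumps(payload, indent=2))
```

This payload can be sent to `https://llm.wavespeed.ai/v1/chat/completions` with the same `Authorization: Bearer YOUR_API_KEY` header shown in the curl example.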
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: google/gemma-3n-e4b-it
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1",
)

response = client.chat.completions.create(
    model="google/gemma-3n-e4b-it",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
```
```shell
curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "google/gemma-3n-e4b-it",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
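The specification table also lists function calling as supported. A minimal sketch of a request payload with a `tools` array in the OpenAI function-calling schema; the `get_weather` tool is a hypothetical example, not part of this API:

```python
import json

# A single tool definition in the OpenAI function-calling schema.
# "get_weather" is a hypothetical tool used only for illustration.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": "google/gemma-3n-e4b-it",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}

print(json.dumps(payload, indent=2))
```

If the model decides to call the tool, the response's `choices[0].message.tool_calls` carries the function name and JSON-encoded arguments; your code executes the function and sends the result back in a follow-up `tool` message.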
Access google/gemma-3n-e4b-it through our unified API — OpenAI-compatible, no cold starts, transparent pricing.