liquid/lfm-2-24b-a2b
32,768-token context · $0.03/M input tokens · $0.12/M output tokens
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
Pay-as-you-go
No upfront fees; pay only for what you actually use.
Connect to our API with the following code example:
```python
from openai import OpenAI

# Point the official OpenAI SDK at the WaveSpeedAI endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="liquid/lfm-2-24b-a2b",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```
| Specification | Value |
|---|---|
| Provider | Liquid |
| Model Type | Large Language Model (LLM) |
| Architecture | Hybrid Mixture-of-Experts (24B total parameters, 2B active per token) |
| Context Window | 32,768 tokens |
| Max Output | — tokens |
| Input | Text |
| Output | Text |
| Vision | Supported |
| Function Calling | Supported |
| Token Type | Cost per Million Tokens |
|---|---|
| Input | $0.03 |
| Output | $0.12 |
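As a quick sanity check on the rates above, a request's cost can be estimated from its token counts. The helper name and example counts below are illustrative, not part of the API; in practice the counts come from the response's usage field.

```python
# Rough cost estimator for liquid/lfm-2-24b-a2b at the listed rates.
# Rates are USD per million tokens.
INPUT_RATE = 0.03
OUTPUT_RATE = 0.12

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_RATE + \
           (output_tokens / 1_000_000) * OUTPUT_RATE

# Example: a 10,000-token prompt with a 1,000-token reply.
print(f"${estimate_cost(10_000, 1_000):.5f}")  # → $0.00042
```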
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: liquid/lfm-2-24b-a2b
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="liquid/lfm-2-24b-a2b",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```
```shell
curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "liquid/lfm-2-24b-a2b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
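Production callers typically wrap requests like the ones above with retries for transient failures. The sketch below is a generic exponential-backoff pattern; the attempt count and delay values are assumptions, not a documented WaveSpeedAI policy.

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0):
    """Invoke call() and retry on exceptions with exponential backoff.

    Delays grow as base_delay * 2**attempt (1s, 2s, ...); these values
    are illustrative defaults, not a recommendation from the provider.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            time.sleep(base_delay * 2 ** attempt)

# Usage (assumes `client` from the Python example above):
# reply = with_retries(lambda: client.chat.completions.create(
#     model="liquid/lfm-2-24b-a2b",
#     messages=[{"role": "user", "content": "Hello!"}],
# ))
```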
Input
$0.03 /M
Output
$0.12 /M
Context
33K
Access LFM2-24B-A2B through our unified API: OpenAI-compatible, no cold starts, transparent billing.
WaveSpeedAI pricing: $0.03 per million input tokens, $0.12 per million output tokens. Prompt caching and batch processing are billed separately and can significantly reduce effective costs for long-context, highly repetitive workloads.
LFM2-24B-A2B supports up to 33K context tokens per request; maximum output is — tokens.
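To stay under the 32,768-token context window, a rough pre-flight check can estimate prompt size before sending. The ~4-characters-per-token heuristic below is a common approximation for English text, not this model's actual tokenizer, and the reserve value is an assumption.

```python
CONTEXT_LIMIT = 32_768  # tokens, per the model's spec

def rough_token_count(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English."""
    return max(1, len(text) // 4)

def fits_in_context(messages, reserve_for_output=1_024):
    """Check that the estimated prompt size leaves room for the reply."""
    total = sum(rough_token_count(m["content"]) for m in messages)
    return total + reserve_for_output <= CONTEXT_LIMIT

msgs = [{"role": "user", "content": "Hello!"}]
print(fits_in_context(msgs))  # → True
```

For exact counts, tokenize with the model's own tokenizer instead of a heuristic.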
Yes. WaveSpeedAI serves LFM2-24B-A2B through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK's base URL at that address and use a WaveSpeedAI API key; no other code changes are needed.
Log in to WaveSpeedAI, generate an API key under Access Keys, and send requests to https://llm.wavespeed.ai/v1/chat/completions using the model ID shown above. New accounts receive free credits for trying out LFM2-24B-A2B.