
liquid/lfm-2-24b-a2b

32,768 context · $0.03/M input tokens · $0.12/M output tokens

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.

Pricing

Pay-as-you-go

No upfront fees; you pay only for what you actually use.

Input: $0.03 / M tokens
Output: $0.12 / M tokens


Model Overview

Liquid lfm-2-24b-a2b

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.


Why It Stands Out

  • Mixture-of-Experts architecture with only 2B active parameters per token for efficient processing
  • 32,768-token context window for long-document handling
  • Competitive pricing at $0.03/$0.12 per million input/output tokens

Key Features

  • Context Window: 32,768 tokens
  • Max Output: not specified
  • Vision: not supported (input and output are text only)
  • Function Calling: supported

Specifications

Provider: Liquid
Model Type: Large Language Model (LLM)
Architecture: Hybrid Mixture-of-Experts (24B total parameters, 2B active per token)
Context Window: 32,768 tokens
Max Output: not specified
Input: Text
Output: Text
Vision: not supported
Function Calling: supported

Pricing

Input: $0.03 per million tokens
Output: $0.12 per million tokens

How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.
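The first step amounts to assembling a chat/completions request body. A minimal sketch of a well-structured prompt, using only the standard library; the task, context, and output-format strings are illustrative placeholders:

```python
import json

# Build a chat/completions request body for liquid/lfm-2-24b-a2b.
# The prompt layout (task, context, output format) follows the steps above;
# all strings here are illustrative placeholders.
def build_request(task: str, context: str, output_format: str) -> dict:
    prompt = (
        f"{task}\n\n"
        f"Context:\n{context}\n\n"
        f"Desired output format: {output_format}"
    )
    return {
        "model": "liquid/lfm-2-24b-a2b",
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request(
    task="Summarize the release notes below in three bullet points.",
    context="v2.1 adds streaming support and fixes a tokenizer bug.",
    output_format="a Markdown bullet list",
)
print(json.dumps(body, indent=2))
```

The resulting dictionary is exactly the JSON payload the API examples in the next section send.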

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: liquid/lfm-2-24b-a2b


API Usage

Python SDK

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="liquid/lfm-2-24b-a2b",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "liquid/lfm-2-24b-a2b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
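If you prefer not to install the OpenAI SDK, the same call can be made with Python's standard library alone. A sketch under one assumption: the `WAVESPEED_API_KEY` environment variable name is hypothetical, any variable works.

```python
import json
import os
import urllib.request

# Same request as the cURL example above, using only the standard library.
# WAVESPEED_API_KEY is an assumed variable name, not an official convention.
api_key = os.environ.get("WAVESPEED_API_KEY", "YOUR_API_KEY")

payload = json.dumps({
    "model": "liquid/lfm-2-24b-a2b",
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode("utf-8")

req = urllib.request.Request(
    "https://llm.wavespeed.ai/v1/chat/completions",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```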

Notes

  • Model: liquid/lfm-2-24b-a2b
  • Provider: Liquid

Information

Provider: liquid
Type: llm

Supported Features

Input: Text
Output: Text
Context: 32,768
Max Output: -
Vision: -
Function Calling: -

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: liquid/lfm-2-24b-a2b

LFM2-24B-A2B API

liquid/lfm-2-24b-a2b


Input

$0.03 /M

Output

$0.12 /M

Context

32,768

Try LFM2-24B-A2B on WaveSpeedAI

Access LFM2-24B-A2B through our unified API: OpenAI-compatible, with no cold starts and transparent billing.

Frequently Asked Questions about LFM2-24B-A2B

How much does the LFM2-24B-A2B API cost?

WaveSpeedAI pricing: $0.03 per million input tokens and $0.12 per million output tokens. Prompt caching and batch processing are billed separately and can significantly reduce the effective cost of long-context, highly repetitive workloads.
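At the listed rates, per-request cost is simple arithmetic; a quick sketch (prompt caching and batch processing, which are billed separately, are not modeled here):

```python
# Cost at the listed rates: $0.03 per 1M input tokens, $0.12 per 1M output tokens.
INPUT_RATE = 0.03 / 1_000_000
OUTPUT_RATE = 0.12 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request, ignoring caching and batch discounts."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 20,000-token prompt with a 1,000-token reply.
print(f"${estimate_cost(20_000, 1_000):.6f}")  # → $0.000720
```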

What is the context window of LFM2-24B-A2B?

LFM2-24B-A2B supports up to 32,768 context tokens per request; the maximum output length is not specified.

Is LFM2-24B-A2B OpenAI-compatible?

Yes. WaveSpeedAI serves LFM2-24B-A2B through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK's base URL at this address and use your WaveSpeedAI API key; no other code changes are required.

How do I get started with LFM2-24B-A2B?

Sign in to WaveSpeedAI, generate an API key under Access Keys, and send requests to https://llm.wavespeed.ai/v1/chat/completions using the model ID shown above. New accounts receive free credits for trying LFM2-24B-A2B.

Related LLM APIs