# Meta-Models
Don’t pick a provider. Pick a strategy. FreeLLM exposes three meta-models that route across providers based on what you care about.
| Model | Strategy | Best For |
|---|---|---|
| `free` | Round-robin across all available providers | Maximum uptime |
| `free-fast` | Latency priority: Groq → Cerebras → Gemini → NIM → Mistral | Real-time chatbots, low-latency UIs |
| `free-smart` | Capability priority: Gemini → NIM → Groq → Mistral → Cerebras | Complex reasoning, longer context |
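The three strategies in the table can be sketched as selection policies over the same provider pool. The sketch below is illustrative only: the provider names and orderings come from the table, but the routing logic itself is an assumption, not FreeLLM's actual implementation.

```python
from itertools import cycle

# Orderings taken from the table above.
LATENCY_ORDER = ["groq", "cerebras", "gemini", "nim", "mistral"]      # free-fast
CAPABILITY_ORDER = ["gemini", "nim", "groq", "mistral", "cerebras"]   # free-smart

# "free": rotate across all providers for maximum uptime.
_round_robin = cycle(LATENCY_ORDER)

def pick_provider(meta_model: str, available: set[str]) -> str:
    """Return the first available provider under the given strategy."""
    if meta_model == "free":
        # Round-robin: keep advancing until we land on an available provider.
        for _ in range(len(LATENCY_ORDER)):
            candidate = next(_round_robin)
            if candidate in available:
                return candidate
        raise RuntimeError("no providers available")
    order = LATENCY_ORDER if meta_model == "free-fast" else CAPABILITY_ORDER
    for candidate in order:
        if candidate in available:
            return candidate
    raise RuntimeError("no providers available")
```

Note how `free-fast` and `free-smart` degrade gracefully: if the top-priority provider is down, the request falls through to the next one in the list.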
## Direct provider targeting
You can also target a specific provider model directly:
- `groq/llama-3.3-70b-versatile`
- `gemini/gemini-2.5-flash`
- `mistral/mistral-small-latest`
- `cerebras/llama3.1-8b`
- `nim/meta/llama-3.3-70b-instruct`
- `nim/nvidia/llama-3.1-nemotron-70b-instruct`
- `nim/deepseek-ai/deepseek-r1`

When you target a specific provider model, FreeLLM still applies multi-key rotation, circuit breakers, and rate-limit tracking. You just lose the cross-provider failover.
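The ids above follow a `provider/model` shape, where NIM ids carry an extra slash inside the model part. A minimal parser, splitting at the first slash, could look like this; the parsing rule is inferred from the id formats listed above and is not FreeLLM's documented behavior:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a provider-prefixed model id at the FIRST slash.

    NIM ids such as "nim/meta/llama-3.3-70b-instruct" keep their
    remaining slashes in the model part. This rule is an assumption
    inferred from the id formats shown in the list above.
    """
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model
```

Splitting on the first slash only is what keeps the namespaced NIM ids intact, e.g. `split_model_id("nim/deepseek-ai/deepseek-r1")` yields the provider `nim` and the model `deepseek-ai/deepseek-r1`.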