Model Providers
eqho-eval gives you access to 160+ LLMs from 24+ providers through a single OpenAI-compatible endpoint. No per-provider API keys needed — the backend proxy handles authentication and routing.
How it works
All models are accessed using the openai:chat: prefix in your config. The proxy examines the model name and routes to the correct provider:
- OpenAI models (gpt-4.1, o4-mini, etc.) — direct passthrough to api.openai.com with full tool support
- Everything else (anthropic/..., google/..., xai/..., etc.) — routed through the Vercel AI Gateway
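Both routes use the same provider syntax in your config; only the model name changes. A minimal sketch, using model IDs from the lists below:

providers:
  # Direct passthrough to api.openai.com
  - id: openai:chat:gpt-4.1
  # Routed through the Vercel AI Gateway
  - id: openai:chat:anthropic/claude-sonnet-4-20250514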
Discovering available models
From the CLI
eqho-eval providers list
Filter by provider or capability:
eqho-eval providers list --provider anthropic
eqho-eval providers list --tools-only
From the API
GET https://evals.eqho-solutions.dev/api/v1/models
Returns all available models with capabilities, context windows, and provider grouping.
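Any HTTP client works; for a quick look from the command line (assuming the endpoint is reachable without extra headers for your deployment):

curl https://evals.eqho-solutions.dev/api/v1/models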
Popular providers and models
OpenAI
- id: openai:chat:gpt-4.1
- id: openai:chat:gpt-4.1-mini
- id: openai:chat:gpt-4.1-nano
- id: openai:chat:o4-mini
- id: openai:chat:o3-mini
Full tool calling, streaming, and response format support.
Anthropic
- id: openai:chat:anthropic/claude-sonnet-4-20250514
- id: openai:chat:anthropic/claude-4-opus-20250514
- id: openai:chat:anthropic/claude-3.5-haiku-20241022
Google (Gemini)
- id: openai:chat:google/gemini-2.5-pro
- id: openai:chat:google/gemini-2.5-flash
xAI
- id: openai:chat:xai/grok-3
- id: openai:chat:xai/grok-3-mini
DeepSeek
- id: openai:chat:deepseek/deepseek-v3
- id: openai:chat:deepseek/deepseek-r1
Meta (Llama)
- id: openai:chat:meta/llama-4-maverick
- id: openai:chat:meta/llama-4-scout
Mistral
- id: openai:chat:mistral/mistral-large-latest
- id: openai:chat:mistral/codestral-latest
Cohere
- id: openai:chat:cohere/command-a-03-2025
Amazon (Nova)
- id: openai:chat:amazon/nova-pro
Perplexity
- id: openai:chat:perplexity/sonar-pro
And more...
Alibaba (Qwen), Nvidia (Nemotron), Groq, Moonshot, Minimax, AI21 (Jamba), Microsoft (Phi), and others are all available. Run eqho-eval providers list for the complete, up-to-date list.
Using models in your config
Single model
providers:
  - id: openai:chat:gpt-4.1
    label: GPT-4.1
    config:
      temperature: 0.7
      tools: file://tools/agent.json
Multiple models for comparison
providers:
  - id: openai:chat:gpt-4.1
    label: GPT-4.1
    config:
      temperature: 0.7
      tools: file://tools/agent.json
  - id: openai:chat:anthropic/claude-sonnet-4-20250514
    label: Claude Sonnet
  - id: openai:chat:google/gemini-2.5-pro
    label: Gemini 2.5
  - id: openai:chat:xai/grok-3
    label: Grok 3
The first provider's config block (temperature, tools, etc.) is inherited by subsequent providers unless overridden.
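For example, to change only the sampling temperature for one model while keeping the shared setup from the first provider. This is a sketch; it assumes non-overridden keys such as tools are merged per-key rather than replaced wholesale:

providers:
  - id: openai:chat:gpt-4.1
    label: GPT-4.1
    config:
      temperature: 0.7
      tools: file://tools/agent.json
  - id: openai:chat:xai/grok-3
    label: Grok 3
    config:
      temperature: 0.2   # overrides the inherited temperature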
Provider-specific settings
providers:
  - id: openai:chat:gpt-4.1
    label: GPT-4.1
    config:
      temperature: 0.7
      max_tokens: 1024
  - id: openai:chat:anthropic/claude-sonnet-4-20250514
    label: Claude Sonnet
    config:
      temperature: 0.5
      max_tokens: 2048
Tool calling support
Not all models support tool calling. For non-OpenAI providers, the proxy translates the OpenAI tool-call format via the Vercel AI SDK.
Check which models support tools:
eqho-eval providers list --tools-only
For models without tool calling support, text-based evaluations still work — you just can't use is-valid-openai-tools-call or tool-call-f1 assertions.
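A sketch of gating a test on valid tool calls, assuming the promptfoo-style tests/assert syntax implied by promptfooconfig.yaml; the query variable is a made-up example:

tests:
  - vars:
      query: "What's the weather in Paris today?"
    assert:
      # Passes only if the model emitted a well-formed OpenAI tool call
      - type: is-valid-openai-tools-call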
Pre-flight connectivity checks
Before running evals, eqho-eval eval runs a provider connectivity check to verify all configured models are reachable. Skip this with --skip-preflight.
The eqho-eval doctor command also includes provider checks when a promptfooconfig.yaml exists in the current directory.
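Skipping the check on an eval run, and running the doctor checks separately:

eqho-eval eval --skip-preflight
eqho-eval doctor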