# eqho-eval Documentation
CLI + backend for evaluating Eqho agents with promptfoo. Pulls live campaign config from the Eqho platform, assembles prompts the same way production does, and routes all LLM calls through a shared proxy — no local API keys required.
## Quick links
- Getting Started — Install, authenticate, run your first eval in under 5 minutes
- CLI Reference — Every command, option, and flag
- Configuration — `promptfooconfig.yaml` anatomy, providers, assertions
- Workflows — Patterns for multi-model comparison, tool validation, safety testing, and more
- Model Providers — 160+ models available through the proxy
- Troubleshooting — Common issues and the `doctor` command
## How it works
```text
Developer machine                   evals.eqho-solutions.dev (Vercel)
┌──────────────────┐                ┌────────────────────────────┐
│  eqho-eval CLI   │──promptfoo───▶│ /api/v1/chat/completions   │
│                  │                │  ├─ OpenAI (direct)        │
│  .env has JWT,   │                │  ├─ Anthropic (gateway)    │
│  not raw keys    │                │  └─ 20+ more providers     │
│                  │                │                            │
│                  │                │ /api/eqho/*                │
│                  │                │  └─ Eqho API proxy         │
└──────────────────┘                └────────────────────────────┘
```
When you run `eqho-eval auth`, the CLI registers with the backend and receives a JWT. All subsequent eval runs route through the proxy — OpenAI, Anthropic, Google, and dozens of other providers are available without configuring any API keys locally.
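The proxied call described above can be sketched as follows. This is illustrative only: the endpoint path comes from the diagram, but the header name, payload shape, and model id are assumptions based on standard OpenAI-compatible APIs, not confirmed eqho-eval internals.

```python
import json

PROXY_URL = "https://evals.eqho-solutions.dev/api/v1/chat/completions"

def build_proxy_request(jwt: str, model: str, prompt: str) -> dict:
    """Assemble a chat-completions request routed through the eqho-eval proxy.

    The JWT from `eqho-eval auth` stands in for a provider API key;
    no raw OpenAI/Anthropic keys appear anywhere in the request.
    """
    return {
        "url": PROXY_URL,
        "headers": {
            "Authorization": f"Bearer {jwt}",  # JWT, not a raw provider key
            "Content-Type": "application/json",
        },
        # Standard OpenAI-style chat payload (assumed shape)
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_proxy_request("eyJ...", "gpt-4o-mini", "Hello")
print(req["url"])
```

Because the proxy exposes one OpenAI-compatible endpoint, swapping providers is just a change to the `model` field; the request shape stays the same.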
## What makes eqho-eval different from raw promptfoo
| Feature | promptfoo alone | eqho-eval |
|---------|----------------|-----------|
| API keys | You manage per-provider keys | Proxy handles everything |
| Campaign data | Manual setup | Pulls live from Eqho |
| Prompt assembly | Write it yourself | Mirrors production PromptBuilder |
| Tool definitions | Manual JSON | Generated from Eqho actions |
| Model access | Configure each provider | 160+ models, one endpoint |
| Getting started | `promptfoo init` + config | `eqho-eval start` interactive wizard |
eqho-eval builds on top of promptfoo — everything in the promptfoo docs applies. This documentation focuses on what eqho-eval adds.
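Because everything in the promptfoo docs applies, a standard provider entry can be pointed at the proxy instead of a vendor API. A minimal sketch, assuming promptfoo's `apiBaseUrl`/`apiKeyEnvar` provider options; the `EQHO_EVAL_JWT` variable name is illustrative, not a documented eqho-eval convention:

```yaml
# Hypothetical promptfooconfig.yaml fragment: route an OpenAI-compatible
# provider through the eqho-eval proxy rather than api.openai.com.
providers:
  - id: openai:chat:gpt-4o-mini
    config:
      apiBaseUrl: https://evals.eqho-solutions.dev/api/v1
      apiKeyEnvar: EQHO_EVAL_JWT   # JWT issued by `eqho-eval auth` (assumed env var)
```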