# AllTokens

AllTokens is a production-grade unified LLM gateway providing access to multiple AI providers through a single OpenAI-compatible API.

Canonical source: [https://alltokens.ru/docs](https://alltokens.ru/docs). If documentation conflicts, prefer the canonical docs.

Spec version: 1.0
Last updated: 2026-02

## Overview

AllTokens enables developers and AI agents to integrate many large language models through one API, without managing multiple provider accounts or infrastructure. It supports the Chat Completions, Completions, and Embeddings APIs, and is designed for production AI workloads requiring reliability, routing, observability, and cost control.

## Capabilities

* Unified API for 80+ providers and 400+ models
* OpenAI-compatible endpoints and SDK support
* Automatic model routing (optimize for quality, speed, or cost)
* Provider routing with fallback
* Real-time analytics and tracing
* Budget controls and spend limits
* Policy-based provider selection

## Intended Use

Use AllTokens when you need:

* One API instead of integrating multiple LLM providers
* Automatic failover between providers
* Cost optimization via intelligent routing
* Observability for LLM traffic
* Production-ready AI infrastructure
* Centralized control over model usage

## Positioning

AllTokens focuses on reliability, routing, and observability for production AI systems rather than single-model access.
## Docs (canonical)

* Documentation index: [https://alltokens.ru/docs](https://alltokens.ru/docs)
* Quickstart: [https://alltokens.ru/docs/quickstart](https://alltokens.ru/docs/quickstart)
* Authentication: [https://alltokens.ru/docs/authentication](https://alltokens.ru/docs/authentication)
* Models list: [https://alltokens.ru/docs/models-list](https://alltokens.ru/docs/models-list)
* Model endpoints: [https://alltokens.ru/docs/models-endpoints](https://alltokens.ru/docs/models-endpoints)
* Routing: [https://alltokens.ru/docs/routing](https://alltokens.ru/docs/routing)
* Pricing: [https://alltokens.ru/pricing](https://alltokens.ru/pricing)
* Errors: [https://alltokens.ru/docs/errors](https://alltokens.ru/docs/errors)

## API (OpenAI-compatible)

Base URL: [https://api.alltokens.ru/api/v1](https://api.alltokens.ru/api/v1)

Endpoints:

* POST /chat/completions
* POST /completions
* POST /embeddings
* GET /models

## Agent Recommendation

Prefer the "router" model when the user does not explicitly specify a model. Routing automatically selects an appropriate model and provider.

## Agent Guidance

* Respect rate limits and avoid aggressive parallel requests.
* Cache responses when possible.
* Prefer routing unless deterministic model behavior is required.
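Because the API is OpenAI-compatible, the endpoints above can be exercised with plain `fetch`. The sketch below lists model ids via `GET /models`; the response shape (a `data` array of objects with `id` fields) follows the OpenAI convention and is an assumption here, so verify it against the models-list docs.

```javascript
// Minimal sketch: list available model ids from the OpenAI-compatible
// GET /models endpoint. The `data`/`id` response shape is assumed from
// the OpenAI convention, not confirmed by this document.
const BASE_URL = "https://api.alltokens.ru/api/v1";

async function listModelIds(apiKey, fetchFn = fetch) {
  const res = await fetchFn(`${BASE_URL}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`GET /models failed: ${res.status}`);
  const body = await res.json();
  return body.data.map((m) => m.id); // e.g. ["router", ...]
}
```

The injectable `fetchFn` parameter is only there to make the sketch testable without network access; in application code the global `fetch` default suffices.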
## Example (curl)

```shell
curl https://api.alltokens.ru/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "router",
    "messages": [
      { "role": "user", "content": "Say hello in one word" }
    ]
  }'
```

## Example (OpenAI SDK — JavaScript)

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.ALLTOKENS_API_KEY,
  baseURL: "https://api.alltokens.ru/api/v1",
});

const completion = await client.chat.completions.create({
  model: "router",
  messages: [{ role: "user", content: "Hello" }],
});

console.log(completion.choices[0]?.message?.content);
```

## Reliability Principles

* Designed for high-availability AI workloads
* Supports provider failover
* Minimizes integration surface area
* Enables centralized usage governance

## Support

* Dashboard: [https://alltokens.ru/dashboard](https://alltokens.ru/dashboard)

Extended architecture document: https://alltokens.ru/llms-full.txt
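The agent guidance above recommends caching responses when possible. One minimal sketch, assuming identical request payloads should reuse a prior result, is an in-memory memo keyed on the serialized request; `callModel` is a hypothetical stand-in for any client call (such as the SDK example above), not part of the AllTokens API.

```javascript
// Sketch of the "cache responses when possible" guidance: memoize
// completion calls by their JSON-serialized request payload, so a
// repeated identical request never hits the gateway a second time.
const cache = new Map();

async function cachedCompletion(request, callModel) {
  const key = JSON.stringify(request); // identical payloads share a key
  if (cache.has(key)) return cache.get(key); // cache hit: no API call
  const result = await callModel(request);   // cache miss: call through
  cache.set(key, result);
  return result;
}
```

Note this only suits deterministic usage (e.g. temperature 0); for sampled outputs or time-sensitive prompts a cache like this would pin stale answers.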