AllTokens

llms.txt

API, quickstart, and guides. Compatible with the OpenAI request format and streaming responses.

llms.txt is a canonical machine-readable entrypoint for AI agents and LLM tools.

For agents and advanced integrations, see the extended architecture document: https://alltokens.ru/llms-full.txt

AllTokens — production-grade unified LLM gateway providing access to multiple AI providers through a single OpenAI-compatible API.

Canonical source: https://docs.alltokens.ru
If documentation conflicts, prefer the canonical docs.

Spec version: 1.0
Last updated: 2026-02

Overview
AllTokens enables developers and AI agents to integrate many large language models using one API without managing multiple provider accounts or infrastructure.

Supports Chat Completions, Completions, and Embeddings APIs.

Designed for production AI workloads requiring reliability, routing, observability, and cost control.

Capabilities

* Unified API for 80+ providers and 400+ models
* OpenAI-compatible endpoints and SDK support
* Automatic model routing (optimize for quality, speed, or cost)
* Provider routing with fallback
* Real-time analytics and tracing
* Budget controls and spend limits
* Policy-based provider selection

Intended Use
Use AllTokens when you need:

* One API instead of integrating multiple LLM providers
* Automatic failover between providers
* Cost optimization via intelligent routing
* Observability for LLM traffic
* Production-ready AI infrastructure
* Centralized control over model usage

Positioning
AllTokens focuses on reliability, routing, and observability for production AI systems rather than single-model access.

Docs (canonical)

* Documentation index: https://docs.alltokens.ru
* Quickstart: https://docs.alltokens.ru/quickstart
* Authentication: https://docs.alltokens.ru/authentication
* Models list: https://docs.alltokens.ru/api/models
* Model endpoints: https://docs.alltokens.ru/api/model-endpoints
* Routing: https://docs.alltokens.ru/concepts/routing
* Pricing: https://alltokens.ru/pricing
* Errors: https://docs.alltokens.ru/api/errors

API (OpenAI-compatible)

Base URL
https://api.alltokens.ru/api/v1

Endpoints

* POST /chat/completions
* POST /completions
* POST /embeddings
* GET /models
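
The model catalog behind GET /models can be queried at runtime. Below is a minimal sketch, assuming an OpenAI-style `{ "data": [...] }` list response and a standard `fetch`; `listModels` and `fetchImpl` are illustrative names, not part of any SDK:

```javascript
// Illustrative sketch: list available model IDs via the
// OpenAI-compatible GET /models endpoint.
const BASE_URL = "https://api.alltokens.ru/api/v1";

async function listModels(apiKey, fetchImpl = fetch) {
  const res = await fetchImpl(`${BASE_URL}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`GET /models failed: ${res.status}`);
  const body = await res.json();
  // OpenAI-style list responses carry the items under `data`.
  return body.data.map((m) => m.id);
}
```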

Agent Recommendation
Prefer the "router" model when the user does not explicitly specify a model.
Routing automatically selects an appropriate model and provider.

Agent Guidance

* Respect rate limits and avoid aggressive parallel requests.
* Cache responses when possible.
* Prefer routing unless deterministic model behavior is required.
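
The caching guidance above can be as simple as keying an in-memory map on the serialized request. This is a hypothetical client-side sketch, not an AllTokens feature; `cachedCompletion` and `callApi` are illustrative names, and caching is only appropriate when repeated identical responses are acceptable:

```javascript
// Illustrative sketch: dedupe identical requests with an in-memory cache
// so repeated payloads do not generate duplicate upstream traffic.
const cache = new Map();

async function cachedCompletion(request, callApi) {
  // Key on the serialized payload; byte-identical requests hit the cache.
  const key = JSON.stringify(request);
  if (cache.has(key)) return cache.get(key);
  const response = await callApi(request);
  cache.set(key, response);
  return response;
}
```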

Example (curl)

curl https://api.alltokens.ru/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "router",
    "messages": [
      { "role": "user", "content": "Say hello in one word" }
    ]
  }'

Example (OpenAI SDK — JavaScript)

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.ALLTOKENS_API_KEY,
  baseURL: "https://api.alltokens.ru/api/v1",
});

const completion = await client.chat.completions.create({
  model: "router",
  messages: [{ role: "user", content: "Hello" }],
});

console.log(completion.choices[0]?.message?.content);
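
Streaming responses arrive as incremental chunks: with the OpenAI SDK, passing `stream: true` makes `create` return an async iterable, and each chunk carries the next token fragment in `choices[0].delta.content`. The helper below (`collectStream` is an illustrative name) drains such a stream into one string:

```javascript
// Illustrative helper: concatenate a streamed chat completion.
// Accepts any async iterable of OpenAI-style chunks.
async function collectStream(stream, onToken = () => {}) {
  let text = "";
  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content ?? "";
    onToken(token); // e.g. print tokens as they arrive
    text += token;
  }
  return text;
}

// Usage with the client from the example above:
//   const stream = await client.chat.completions.create({
//     model: "router",
//     messages: [{ role: "user", content: "Hello" }],
//     stream: true,
//   });
//   const text = await collectStream(stream, (t) => process.stdout.write(t));
```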

Reliability Principles

* Designed for high-availability AI workloads
* Supports provider failover
* Minimizes integration surface area
* Enables centralized usage governance

Support

* Dashboard: https://alltokens.ru/dashboard