# OpenRouter Proxy
This Apify Actor creates a proxy for the OpenRouter API, allowing you to access multiple AI models through a unified OpenAI-compatible interface. All requests are charged to your account on the Apify platform on a pay-per-event basis.
## What this Actor does
- Proxy access: Routes your API requests to OpenRouter's extensive collection of AI models
- OpenAI compatibility: Works seamlessly with the OpenAI SDK and any OpenAI-compatible client
- Transparent billing: Charges are applied to your Apify account at the same rates as OpenRouter
- Full feature support: Supports chat completions, embeddings, streaming, and image generation
- Multiple API formats: Supports OpenAI (`/chat/completions`), Anthropic (`/messages`), and OpenAI Responses (`/responses`) formats
- No API key management: Uses your Apify token for authentication, so there is no need to manage separate OpenRouter API keys
- Standby mode: Runs in Standby mode with a static URL, like a standard web server
## Supported endpoints
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/chat/completions | Chat completions (OpenAI format) |
| POST | /api/v1/messages | Messages (Anthropic format) |
| POST | /api/v1/responses | Responses (OpenAI Responses API) |
| POST | /api/v1/embeddings | Text embeddings |
| GET | /api/v1/models | List available models |
| GET | /api/v1/models/count | Model count |
| GET | /api/v1/models/user | User model preferences |
| GET | /api/v1/models/{author}/{slug}/endpoints | Model endpoints |
| GET | /api/v1/embeddings/models | Embedding models |
| GET | /api/v1/providers | Available providers |
| GET | /api/v1/endpoints/zdr | Zero-data-retention endpoints |
| GET | /api/v1/generation | Generation details |
For full API documentation, see the OpenRouter API docs.
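Under the authentication scheme described in the Quick start below, a call to any of these endpoints can be sketched as a small request builder. The `proxyRequest` helper and its return shape are illustrative, not part of the Actor's API:

```javascript
// Builds the URL and headers for a call to the proxy. The base URL and
// Authorization scheme match the rest of this README; the helper itself
// is an illustrative sketch.
const BASE_URL = 'https://openrouter.apify.actor/api/v1';

function proxyRequest(endpoint, token) {
  return {
    url: `${BASE_URL}${endpoint}`,
    headers: {
      Authorization: `Bearer ${token}`, // Your Apify token
      'Content-Type': 'application/json',
    },
  };
}

// Usage:
// const { url, headers } = proxyRequest('/models', process.env.APIFY_TOKEN);
// const models = await (await fetch(url, { headers })).json();
```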
## Pricing
This Actor uses a pay-per-event pricing model on the Apify platform: you pay for the tokens consumed through the OpenRouter API. Free-tier users pay 10x more than paying users and are limited to 2,048 tokens per response.
### Pricing structure
- Event: `openrouter-api-usage`
- Paying users: Pay the exact OpenRouter cost, rounded up to the nearest $0.00001
### Pricing examples
| OpenRouter cost | Calculation | Charged events | You pay | Markup factor |
|---|---|---|---|---|
| $0.00001212 | 0.00001212 / 0.00001 | 2 | $0.00002 | 1.65x |
| $0.0001 | 0.0001 / 0.00001 | 10 | $0.0001 | 1x (exact) |
| $0.001 | 0.001 / 0.00001 | 100 | $0.001 | 1x (exact) |
| $0.01 | 0.01 / 0.00001 | 1,000 | $0.01 | 1x (exact) |
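The rounding in the table can be sketched as a small helper. The $0.00001 billing unit and the free-tier 10x multiplier come from this section; the function names, and the point in the calculation at which the multiplier applies, are our assumptions:

```javascript
// Sketch of the pay-per-event rounding described above.
const EVENT_UNIT = 0.00001; // One billing event = $0.00001

function chargedEvents(openRouterCost) {
  // Round the ratio to 6 decimals first to absorb floating-point noise,
  // then round up to the next whole billing event.
  const ratio = Math.round((openRouterCost / EVENT_UNIT) * 1e6) / 1e6;
  return Math.ceil(ratio);
}

function amountCharged(openRouterCost, { freeTier = false } = {}) {
  const multiplier = freeTier ? 10 : 1; // Free-tier users pay 10x (assumed applied here)
  return chargedEvents(openRouterCost) * EVENT_UNIT * multiplier;
}

console.log(chargedEvents(0.00001212)); // 2 events, as in the first table row
```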
## Quick start
1. Install the OpenAI package
```shell
npm install openai
```
2. Basic usage
```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.apify.actor/api/v1',
  apiKey: 'no-key-required-but-must-not-be-empty', // Any non-empty string works; do NOT use a real API key.
  defaultHeaders: {
    Authorization: `Bearer ${process.env.APIFY_TOKEN}`, // Apify token is loaded automatically at runtime.
  },
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: 'openrouter/auto',
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  });

  console.log(completion.choices[0].message);
}

await main();
```
3. Streaming responses
```javascript
const stream = await openai.chat.completions.create({
  model: 'openrouter/auto',
  messages: [
    {
      role: 'user',
      content: 'Write a short story about a robot.',
    },
  ],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
4. Image generation
OpenRouter supports image generation through compatible models. Use the chat completions API with the `modalities` parameter:
```javascript
const response = await openai.chat.completions.create({
  model: 'google/gemini-2.5-flash-image', // Image-generation-capable model
  messages: [
    {
      role: 'user',
      content: 'Generate an image of a cute baby sea otter',
    },
  ],
  modalities: ['text', 'image'], // Enable image generation
});

// Access the generated image from the response
console.log(response.choices[0].message);
```
Note: OpenRouter doesn't support the traditional `/images/generations` endpoint. Image generation is done through compatible models using the chat completions API. Check available models with image capabilities on OpenRouter's models page.
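Pulling the generated images out of the response above can be sketched as follows. The `message.images[].image_url.url` shape is an assumption based on OpenRouter's image-output format, and the helper name is ours; verify against a real response before relying on it:

```javascript
// Extracts image URLs (typically data: URLs) from a chat completion.
// The response shape is assumed, not confirmed by this README.
function extractImageUrls(completion) {
  const message = completion?.choices?.[0]?.message;
  return (message?.images ?? [])
    .map((img) => img?.image_url?.url)
    .filter(Boolean);
}

// Usage with the image-generation example above:
// const urls = extractImageUrls(response); // e.g. ['data:image/png;base64,...']
```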
## Available models
This proxy supports all models available through OpenRouter, including models from OpenAI, Anthropic, Google, Meta, Mistral, and many more.
For a complete list, visit OpenRouter's models page or call GET /api/v1/models.
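Narrowing that list to one vendor can be sketched as a small filter over the `GET /api/v1/models` payload. The `{ data: [{ id, ... }] }` shape is an assumption about the list format, and the helper name is ours; check a live response before relying on it:

```javascript
// Filters a models payload down to ids published under one author prefix,
// e.g. 'anthropic/...'. Payload shape is assumed, not confirmed here.
function modelIdsByAuthor(payload, author) {
  return (payload?.data ?? [])
    .map((m) => m.id)
    .filter((id) => id.startsWith(`${author}/`));
}

// Example with a hand-written payload (not real API output):
const sample = { data: [{ id: 'openai/gpt-4o' }, { id: 'anthropic/claude-3.5-sonnet' }] };
console.log(modelIdsByAuthor(sample, 'anthropic')); // ['anthropic/claude-3.5-sonnet']
```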
## Authentication
The Actor uses your Apify token for authentication. In Actor environments on the Apify platform, APIFY_TOKEN is automatically available. For local development, you can:
- Set the environment variable: `export APIFY_TOKEN=your_token_here`
- Or pass it directly in the `Authorization` header
- Find your token in Apify Console
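For local development, resolving the token explicitly and failing fast when it is missing can be sketched as follows; the helper name is ours:

```javascript
// Reads APIFY_TOKEN from the environment and builds the Authorization
// header the proxy expects. Throws early if the token is not set.
function apifyAuthHeader(env = process.env) {
  const token = env.APIFY_TOKEN;
  if (!token) {
    throw new Error('APIFY_TOKEN is not set; export it or pass it explicitly.');
  }
  return { Authorization: `Bearer ${token}` };
}

// Usage:
// const res = await fetch('https://openrouter.apify.actor/api/v1/models', {
//   headers: apifyAuthHeader(),
// });
```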
## Support
For issues related to this Actor, please open an issue or contact the Actor developer on Apify Store.