# LLM Usage & Cost Monitor
Track and monitor your AI costs in real-time. Log token usage, calculate costs, and get daily/monthly reports for OpenAI, Anthropic, Gemini & Mistral. First 1,000 events/month FREE.
Apify Actor for tracking LLM usage, cost, and performance metrics from n8n, Dify, and Sim.ai workflows.
## Pricing
Free Tier + Pay Per Event
| Usage | Price |
|---|---|
| First 1,000 events/month | FREE |
| Additional events | $0.001 each |
### Examples
| Monthly Events | Cost |
|---|---|
| 500 | $0 (free tier) |
| 1,000 | $0 (free tier) |
| 2,000 | $1.00 |
| 5,000 | $4.00 |
| 10,000 | $9.00 |
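As a sanity check, the billing math above is just a max over the free tier; an illustrative sketch (not the Actor's actual code):

```typescript
// First 1,000 events per month are free; each additional event costs $0.001.
const FREE_EVENTS = 1_000;
const PRICE_PER_EVENT_USD = 0.001;

function monthlyCost(events: number): number {
  return Math.max(0, events - FREE_EVENTS) * PRICE_PER_EVENT_USD;
}

console.log(monthlyCost(500));    // 0 (free tier)
console.log(monthlyCost(2_000));  // 1
console.log(monthlyCost(10_000)); // 9
```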
## Usage Transparency
Every response includes:
- monthly_events_used - Your current month's event count
- free_events_remaining - Events left in free tier
- charged_for_this_event - Whether this event was billed
No surprises. No hidden fees.
## Features
- Dynamic pricing: Fetches latest prices from LiteLLM (2400+ models)
- Tracks token usage and cost across OpenAI, Anthropic, Gemini, and Mistral
- Stores events in Apify Dataset
- Aggregates daily and monthly stats per project
- 24-hour price cache to minimize API calls (see the sketch after this list)
- Falls back to static pricing if the fetch fails
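The fetch-and-cache behavior works roughly as below; this is a minimal sketch assuming the Apify SDK's key-value store and LiteLLM's published pricing file, with a hypothetical cache key and record shape:

```typescript
import { Actor } from 'apify';

// Hypothetical names; the Actor's real cache key and record shape may differ.
const CACHE_KEY = 'litellm-prices';
const TTL_MS = 24 * 60 * 60 * 1000; // 24-hour cache window

interface PriceCache {
  fetchedAt: number;
  prices: Record<string, unknown>;
}

// Trimmed stand-in for the static fallback table described in "How Pricing Works".
const STATIC_FALLBACK_PRICES: Record<string, unknown> = {};

async function getPrices(): Promise<Record<string, unknown>> {
  const store = await Actor.openKeyValueStore();
  const cached = await store.getValue<PriceCache>(CACHE_KEY);
  if (cached && Date.now() - cached.fetchedAt < TTL_MS) {
    return cached.prices; // cache hit: no network call
  }
  try {
    // LiteLLM's community-maintained pricing file on GitHub.
    const res = await fetch(
      'https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json'
    );
    const prices = (await res.json()) as Record<string, unknown>;
    await store.setValue(CACHE_KEY, { fetchedAt: Date.now(), prices });
    return prices;
  } catch {
    return STATIC_FALLBACK_PRICES; // fetch failed: static fallback
  }
}
```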
## Input
```json
{
  "project_id": "customer-support-bot",
  "environment": "production",
  "agent_name": "Support Agent v2",
  "workflow_id": "wf_abc123",
  "step_name": "classify_intent",
  "provider": "openai",
  "model": "gpt-4o",
  "input_tokens": 450,
  "output_tokens": 120,
  "latency_ms": 1850,
  "status": "success",
  "error": null,
  "metadata": {
    "user_id": "usr_789",
    "session_id": "sess_456"
  },
  "timestamp": "2025-01-11T14:32:10.000Z"
}
```
## Output
```json
{
  "logged": true,
  "event_cost_usd": 0.001325,
  "total_project_cost_today": 4.582,
  "pricing_source": "dynamic (LiteLLM)",
  "monthly_events_used": 42,
  "free_events_remaining": 958,
  "charged_for_this_event": false
}
```
| Field | Description |
|---|---|
| pricing_source | Whether dynamic pricing was used or static fallback |
| monthly_events_used | Total events logged this month |
| free_events_remaining | Events remaining in free tier (resets monthly) |
| charged_for_this_event | true if this event incurred a $0.001 charge |
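You can also exercise the Actor directly over HTTP before wiring it into a workflow. A minimal sketch using Apify's synchronous run-sync endpoint, which waits for the run and returns the OUTPUT record in the shape shown above (YOUR_USERNAME and YOUR_TOKEN are placeholders):

```typescript
// Illustrative direct call from Node 18+ (global fetch).
const event = {
  project_id: 'customer-support-bot',
  environment: 'production',
  provider: 'openai',
  model: 'gpt-4o',
  input_tokens: 450,
  output_tokens: 120,
  status: 'success',
  timestamp: new Date().toISOString(),
};

const res = await fetch(
  'https://api.apify.com/v2/acts/YOUR_USERNAME~llm-usage-cost-monitor/run-sync?token=YOUR_TOKEN',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(event),
  }
);
console.log(await res.json()); // { logged: true, event_cost_usd: ..., ... }
```

The workflow integrations below use the asynchronous /runs endpoint instead, which returns immediately without waiting for the output and is the better fit inside a pipeline.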
## n8n Integration
HTTP Request node:
- Method: POST
- URL:
https://api.apify.com/v2/acts/YOUR_USERNAME~llm-usage-cost-monitor/runs?token=YOUR_TOKEN
Example using n8n expressions:
{"project_id": "{{ $workflow.name }}","environment": "production","agent_name": "{{ $node['AI Agent'].json.name }}","workflow_id": "{{ $execution.id }}","step_name": "{{ $node.name }}","provider": "openai","model": "{{ $node['OpenAI'].json.model }}","input_tokens": {{ $node['OpenAI'].json.usage.prompt_tokens }},"output_tokens": {{ $node['OpenAI'].json.usage.completion_tokens }},"latency_ms": {{ $node['OpenAI'].json.latency }},"status": "success","error": null,"metadata": {},"timestamp": "{{ $now.toISO() }}"}
## Dify Integration
Add an HTTP Request node after your LLM node in your Dify workflow:
[LLM Node] → [HTTP Request] → [Continue...]
HTTP Request configuration:
- Method: POST
- URL:
https://api.apify.com/v2/acts/YOUR_USERNAME~llm-usage-cost-monitor/runs?token=YOUR_TOKEN
- Body Type: JSON
Example body using Dify variables:
{"project_id": "{{sys.app_id}}","environment": "production","agent_name": "{{sys.workflow_id}}","workflow_id": "{{sys.conversation_id}}","step_name": "llm_call","provider": "openai","model": "{{llm.model}}","input_tokens": {{llm.usage.prompt_tokens}},"output_tokens": {{llm.usage.completion_tokens}},"latency_ms": {{llm.elapsed_time}},"status": "success","metadata": {"user_id": "{{sys.user_id}}"},"timestamp": "{{sys.datetime}}"}
Available Dify system variables:
| Variable | Description |
|---|---|
| {{sys.app_id}} | App identifier |
| {{sys.conversation_id}} | Conversation ID |
| {{sys.workflow_id}} | Workflow ID |
| {{sys.user_id}} | User identifier |
| {{sys.datetime}} | Current timestamp |
Check your LLM node's output panel to find exact variable paths for token usage.
## Sim.ai Integration
Add an API block after your Agent block in your Sim.ai workflow:
[Agent Block] → [API Block] → [Continue...]
API block configuration:
- Method: POST
- URL:
https://api.apify.com/v2/acts/YOUR_USERNAME~llm-usage-cost-monitor/runs?token=YOUR_TOKEN
- Headers:
Content-Type: application/json
Example body using Sim.ai variables:
{"project_id": "<workflow.id>","environment": "production","agent_name": "<agent.name>","workflow_id": "<execution.id>","step_name": "agent_call","provider": "openai","model": "<agent.model>","input_tokens": <agent.tokens.prompt>,"output_tokens": <agent.tokens.completion>,"latency_ms": <agent.latency>,"status": "success","metadata": {"total_tokens": <agent.tokens.total>,"cost": <agent.cost>},"timestamp": "<execution.timestamp>"}
Available Sim.ai output variables from Agent block:
| Variable | Description |
|---|---|
| <agent.tokens.prompt> | Input/prompt tokens |
| <agent.tokens.completion> | Output/completion tokens |
| <agent.tokens.total> | Total tokens used |
| <agent.cost> | Estimated API cost |
| <agent.model> | Model used |
| <execution.id> | Workflow execution ID |
Check your Agent block's output panel to verify exact variable paths.
## Storage
Dataset: Default dataset (all events)
Key-Value Store:
- daily-stats-{project_id}
- monthly-stats-{project_id}
- OUTPUT
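To read the aggregates back out programmatically, here is a sketch using apify-client; the store ID and record shape are assumptions, so check the run's Storage tab for the real values:

```typescript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });

// STORE_ID is the Actor's default key-value store (see the run's Storage tab).
const record = await client
  .keyValueStore('STORE_ID')
  .getRecord('daily-stats-customer-support-bot');

console.log(record?.value); // hypothetical shape: { date, events, total_cost_usd, ... }
```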
## Supported Models
Dynamic pricing supports 2400+ models from LiteLLM's database including:
- OpenAI: GPT-4o, GPT-4.1, GPT-4 Turbo, GPT-3.5, o1, o3
- Anthropic: Claude 4.5, Claude 4, Claude 3.5, Claude 3 (all variants)
- Google: Gemini 3, 2.5, 2.0, 1.5 (Pro/Flash)
- Mistral: Large, Medium, Small, Nemo, Mixtral
- Others: Cohere, AI21, Together, Groq, Bedrock, Vertex AI, Azure, and more
Unknown models automatically use conservative fallback pricing.
## How Pricing Works
- On first run: Fetches latest pricing from LiteLLM's GitHub
- Caches for 24 hours: Stored in Apify Key-Value Store
- Model lookup: Tries exact match, then provider prefix, then fuzzy match
- Fallback: Uses static pricing if dynamic fetch fails
Pricing data is community-maintained and updated regularly by LiteLLM.
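The lookup chain in step 3 amounts to a few guarded returns; the names and the conservative fallback numbers below are illustrative, not the Actor's actual code:

```typescript
interface ModelPrice {
  inputCostPerToken: number;
  outputCostPerToken: number;
}

// Made-up conservative numbers for unknown models.
const STATIC_FALLBACK: ModelPrice = {
  inputCostPerToken: 0.00001,
  outputCostPerToken: 0.00003,
};

function lookupPrice(
  table: Record<string, ModelPrice>,
  provider: string,
  model: string
): ModelPrice {
  if (table[model]) return table[model]; // 1. exact match
  const prefixed = `${provider}/${model}`; // 2. provider-prefixed key
  if (table[prefixed]) return table[prefixed];
  const fuzzy = Object.keys(table).find((k) => k.includes(model)); // 3. fuzzy match
  if (fuzzy) return table[fuzzy];
  return STATIC_FALLBACK; // 4. static fallback
}
```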
## Local Development
Install dependencies:
```bash
npm install
npm run build
```
Test the logic without Apify:
```bash
npm test
```
Run locally with Apify storage:
```bash
./run-local.sh
```
Or manually:
```bash
mkdir -p storage/key_value_stores/default
cp .actor/INPUT.json storage/key_value_stores/default/INPUT.json
npm run start
```
Edit .actor/INPUT.json to test with different inputs.
## Deployment
```bash
apify push
```