DeepSeek MCP Actor for Apify
A production-ready Apify Actor that integrates DeepSeek AI models via the Model Context Protocol (MCP), enabling advanced natural language understanding, reasoning, and generation in your automation workflows.
FREE Models Available!
This Actor supports completely FREE AI models via OpenRouter! No credit card required.
Free Models Include:
- Llama 3.1 8B - Meta's latest open model
- Mistral 7B - Fast and efficient
- Gemma 2 9B - Google's open model
- Phi-3 Mini - Microsoft's compact powerhouse
- Qwen 2 7B - Alibaba's multilingual model
- Zephyr 7B - Fine-tuned for helpfulness
- OpenChat 7B - Optimized for conversations
Features
- FREE Models via OpenRouter: Access powerful AI models at zero cost
- DeepSeek AI Integration: Direct, secure integration with DeepSeek's chat and coder models
- MCP Compatibility: Fully MCP-compliant request/response handling for seamless workflow integration
- Multiple Operation Modes: Single prompt, batch processing, and multi-turn conversations
- Prompt Templates: Pre-built templates for common tasks (summarization, classification, entity extraction, etc.)
- Batch Processing: Efficient concurrent processing with rate limiting and ordered results
- Robust Error Handling: Comprehensive retry logic with exponential backoff
- Flexible Configuration: Extensive customization options for model parameters
Table of Contents
- Free Models with OpenRouter
- Installation
- Quick Start
- Configuration
- Operation Modes
- Prompt Templates
- Batch Processing
- API Reference
- Examples
- Error Handling
- Security
- Best Practices
Free Models with OpenRouter
Why OpenRouter?
OpenRouter provides access to many AI models through a single API, including several completely free models. This is perfect for:
- Testing and prototyping
- Learning and experimentation
- Personal projects
- Startups with limited budgets
Setup (2 minutes)

1. Get a FREE API key:
   - Go to openrouter.ai
   - Sign up (no credit card required)
   - Copy your API key from the dashboard

2. Configure the Apify Actor:
   - Add `OPENROUTER_API_KEY` to the Actor's environment variables
   - Set `provider` to `"openrouter"` in your input
   - Choose a free model (free models carry the `:free` suffix)
Free Models Quick Reference
| Model | ID | Best For |
|---|---|---|
| Llama 3.1 8B | meta-llama/llama-3.1-8b-instruct:free | General tasks, reasoning |
| Mistral 7B | mistralai/mistral-7b-instruct:free | Fast responses, coding |
| Gemma 2 9B | google/gemma-2-9b-it:free | Balanced performance |
| Phi-3 Mini | microsoft/phi-3-mini-128k-instruct:free | Long context, efficiency |
| Qwen 2 7B | qwen/qwen-2-7b-instruct:free | Multilingual, Chinese |
| Zephyr 7B | huggingfaceh4/zephyr-7b-beta:free | Helpful, aligned |
| OpenChat 7B | openchat/openchat-7b:free | Conversations |
Example: Using Free Models
{"provider": "openrouter","mode": "single","prompt": "Explain quantum computing in simple terms.","model": "meta-llama/llama-3.1-8b-instruct:free","temperature": 0.7}
Free vs Paid Comparison
| Feature | Free (OpenRouter) | Paid (DeepSeek) |
|---|---|---|
| Cost | $0 | ~$0.14-$2.19/M tokens |
| Rate Limits | Lower | Higher |
| Model Quality | Good | Excellent |
| Context Length | 4K-128K | 64K-128K |
| Best For | Testing, learning | Production, quality |
Installation
From Apify Store
- Navigate to the DeepSeek MCP Actor in the Apify Store
- Click "Try for free" or add to your Actors
- Set your `DEEPSEEK_API_KEY` in the Actor's environment variables
Local Development
```bash
# Clone the repository
git clone <repository-url>
cd deepseek-mcp-actor

# Install dependencies
npm install

# Build TypeScript
npm run build

# Set environment variable (choose one provider)
# For FREE models via OpenRouter:
export OPENROUTER_API_KEY=your_openrouter_key_here
# For DeepSeek (paid):
export DEEPSEEK_API_KEY=your_api_key_here

# Run locally
npm start
```
Quick Start
FREE: Using OpenRouter Models
{"provider": "openrouter","mode": "single","prompt": "Explain the concept of machine learning in simple terms.","model": "meta-llama/llama-3.1-8b-instruct:free","temperature": 0.7}
Using DeepSeek (Paid)
{"provider": "deepseek","mode": "single","prompt": "Explain the concept of machine learning in simple terms.","model": "deepseek-chat","temperature": 0.7}
Using a Template
{"provider": "openrouter","mode": "single","template": "summarization","templateVariables": {"content": "Your long text content here...","maxLength": "100"}}
Batch Processing
{"mode": "batch","prompts": [{ "id": "1", "prompt": "Summarize this article..." },{ "id": "2", "prompt": "Classify this text..." },{ "id": "3", "prompt": "Extract entities from..." }],"batchConcurrency": 3}
Configuration
Required Setup
- API Key: Set `DEEPSEEK_API_KEY` as an environment variable or Apify secret
Input Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `mode` | string | `"single"` | Operation mode: `single`, `batch`, or `conversation` |
| `prompt` | string | - | The prompt for single mode |
| `prompts` | array | - | Array of prompts for batch mode |
| `conversationContext` | array | - | Previous messages for conversation mode |
| `systemInstructions` | string | - | System-level instructions |
| `template` | string | - | Predefined template name |
| `templateVariables` | object | - | Variables to inject into the template |
| `customTemplate` | string | - | Custom template with `{{variable}}` placeholders |
| `model` | string | `"deepseek-chat"` | Model to use |
| `temperature` | number | 0.7 | Creativity level (0-2) |
| `maxTokens` | number | 2048 | Maximum response tokens |
| `topP` | number | 0.95 | Nucleus sampling parameter |
| `frequencyPenalty` | number | 0 | Frequency penalty (-2 to 2) |
| `presencePenalty` | number | 0 | Presence penalty (-2 to 2) |
| `stopSequences` | array | - | Stop generation sequences |
| `timeout` | number | 60000 | Request timeout in ms |
| `batchConcurrency` | number | 3 | Concurrent batch requests |
| `batchOrdered` | boolean | true | Maintain input order |
| `retryCount` | number | 3 | Number of retries |
| `retryDelayMs` | number | 1000 | Initial retry delay in ms |
| `outputFormat` | string | `"mcp"` | Output format: `text`, `json`, or `mcp` |
| `includeUsageStats` | boolean | true | Include token usage |
Available Models
- `deepseek-chat` - General purpose conversational AI
- `deepseek-coder` - Optimized for code generation and analysis
- `deepseek-reasoner` - Enhanced reasoning capabilities
Operation Modes
Single Mode
Process a single prompt with optional system instructions.
{"mode": "single","prompt": "What are the benefits of renewable energy?","systemInstructions": "You are an environmental expert. Provide detailed, factual responses."}
Batch Mode
Process multiple prompts efficiently with concurrency control.
{"mode": "batch","prompts": [{"id": "article_1","prompt": "Summarize: {{content}}","variables": {"content": "First article text..."}},{"id": "article_2","prompt": "Summarize: {{content}}","variables": {"content": "Second article text..."}}],"batchConcurrency": 5,"batchOrdered": true}
Conversation Mode
Maintain context across multiple turns.
{"mode": "conversation","conversationContext": [{ "role": "user", "content": "Tell me about Python." },{ "role": "assistant", "content": "Python is a high-level programming language..." }],"prompt": "What are its main use cases?"}
Prompt Templates
Available Templates
| Template | Description | Required Variables |
|---|---|---|
| `summarization` | Summarize content | `content` |
| `classification` | Classify into categories | `content`, `categories` |
| `entity_extraction` | Extract structured entities | `content`, `entityTypes` |
| `content_generation` | Generate new content | `topic` |
| `reasoning` | Logical analysis | `content` |
| `sentiment_analysis` | Analyze emotional tone | `content` |
| `translation` | Translate between languages | `content`, `targetLanguage` |
| `qa` | Question answering | `context`, `question` |
| `custom` | User-defined template | `customTemplate` |
Template Examples
Summarization
{"template": "summarization","templateVariables": {"content": "Long article text here...","contentType": "news article","maxLength": "150","focusAreas": "key findings and implications"}}
Classification
{"template": "classification","templateVariables": {"content": "Product review text...","categories": "positive, negative, neutral, mixed"}}
Entity Extraction
{"template": "entity_extraction","templateVariables": {"content": "John Smith, CEO of Acme Corp, announced...","entityTypes": "person names, organizations, dates, locations","outputFormat": "JSON"}}
Custom Template
{"template": "custom","customTemplate": "Analyze the following customer feedback and provide:\n1. Main concerns\n2. Positive aspects\n3. Improvement suggestions\n\nFeedback: {{feedback}}\n\n{{#if productName}}Product: {{productName}}{{/if}}","templateVariables": {"feedback": "Customer feedback text...","productName": "Widget Pro 3000"}}
Batch Processing
Configuration
{"mode": "batch","prompts": [...],"batchConcurrency": 3,"batchOrdered": true,"systemInstructions": "Apply to all items..."}
Per-Item Overrides
{"prompts": [{"id": "creative_task","prompt": "Write a poem about...","parameters": {"temperature": 0.9,"maxTokens": 500}},{"id": "factual_task","prompt": "List the steps to...","parameters": {"temperature": 0.2,"maxTokens": 1000}}]}
Batch Response Format
{"requestId": "uuid","status": "success","results": [{"id": "item_1","status": "success","prompt": "Original prompt...","content": "AI response...","usage": { "totalTokens": 150 },"durationMs": 1200},{"id": "item_2","status": "error","prompt": "Original prompt...","error": {"code": "RATE_LIMIT","message": "Rate limit exceeded","retryable": true}}],"usage": { "totalTokens": 500 },"timestamp": "2025-01-06T..."}
API Reference
Output Format
MCP Format (default)
{"requestId": "unique-uuid","status": "success","content": "AI response content...","usage": {"promptTokens": 50,"completionTokens": 150,"totalTokens": 200,"estimatedCostUsd": 0.000056},"metadata": {"model": "deepseek-chat","durationMs": 1500,"retryCount": 0,"apiResponseId": "chatcmpl-xxx"},"timestamp": "2025-01-06T12:00:00.000Z"}
JSON Format
{"success": true,"content": "AI response content...","usage": {"promptTokens": 50,"completionTokens": 150,"totalTokens": 200}}
Text Format
AI response content...
Examples
Use Case: Enrich Scraped Data
{"mode": "batch","prompts": [{"id": "product_1","prompt": "Based on this product description, generate: 1) A compelling headline 2) Three key selling points 3) Target audience\n\nDescription: {{description}}","variables": {"description": "Scraped product description..."}}],"systemInstructions": "You are a marketing expert. Generate concise, compelling copy.","temperature": 0.7}
Use Case: Content Classification Pipeline
{"mode": "batch","template": "classification","prompts": [{"id": "doc_1","prompt": "","variables": {"content": "Document content...","categories": "technology, finance, health, entertainment, sports"}}],"temperature": 0.1}
Use Case: Multi-turn Data Analysis
{"mode": "conversation","conversationContext": [{"role": "system","content": "You are a data analyst. Analyze the provided data and answer questions."},{"role": "user","content": "Here's our Q4 sales data: [data]"},{"role": "assistant","content": "I've analyzed the Q4 data. Key findings: ..."}],"prompt": "What factors contributed most to the December spike?","model": "deepseek-reasoner"}
Error Handling
Error Codes
| Code | Description | Retryable |
|---|---|---|
| `INVALID_INPUT` | Invalid input parameters | No |
| `API_ERROR` | DeepSeek API error | Yes |
| `RATE_LIMIT` | Rate limit exceeded | Yes |
| `TIMEOUT` | Request timed out | Yes |
| `NETWORK_ERROR` | Network connection failed | Yes |
| `AUTH_ERROR` | Authentication failed | No |
| `MODEL_ERROR` | Invalid model specified | No |
| `TEMPLATE_ERROR` | Template processing error | No |
| `BATCH_ERROR` | Batch processing error | No |
Error Response
{"requestId": "uuid","status": "error","error": {"code": "RATE_LIMIT","message": "API rate limit exceeded","details": "Too many requests. Please wait before retrying.","retryable": true,"retryAfterMs": 60000},"timestamp": "2025-01-06T..."}
Retry Configuration
{"retryCount": 3,"retryDelayMs": 1000}
The Actor uses exponential backoff with:
- Initial delay: `retryDelayMs`
- Maximum delay: 30 seconds
- Backoff multiplier: 2x
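The documented policy produces the following delay schedule. A short sketch that derives it from the stated parameters:

```javascript
// Exponential backoff delays per the documented policy:
// initial delay retryDelayMs, 2x multiplier, capped at 30 seconds.
function backoffDelays(retryCount, retryDelayMs, maxDelayMs = 30000) {
  const delays = [];
  for (let attempt = 0; attempt < retryCount; attempt++) {
    delays.push(Math.min(retryDelayMs * 2 ** attempt, maxDelayMs));
  }
  return delays;
}

backoffDelays(3, 1000); // → [1000, 2000, 4000]
```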
Security
API Key Management
- Store API keys in Apify Secrets, never in input
- Set `DEEPSEEK_API_KEY` as an environment variable
- Keys are never logged or included in error messages
Data Privacy
- Input content is not logged
- Response content is not logged
- Only metadata and statistics are logged
Best Practices

General
- Use Apify Secrets for sensitive configuration
- Set appropriate timeouts for your use case
- Use batch processing for large datasets
- Monitor token usage to control costs
Performance
- Use batch mode for multiple prompts (more efficient than sequential single calls)
- Set an appropriate `batchConcurrency` (3-5 is usually optimal)
- Set `batchOrdered: false` for faster processing when result order doesn't matter
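The concurrency limit works like a small worker pool: at most `batchConcurrency` prompts are in flight at once. The Actor's internal implementation is not published, so the following is an illustrative sketch of the pattern only:

```javascript
// Process items with at most `concurrency` async calls in flight.
// Results are written back by index, so output order matches input
// order (the batchOrdered behavior) regardless of completion order.
async function mapWithConcurrency(items, concurrency, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i], i);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(concurrency, items.length) }, worker)
  );
  return results;
}
```

Usage: `await mapWithConcurrency(prompts, 3, callModel)` keeps three requests running until the list is drained.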
Cost Optimization
- Use an appropriate `maxTokens` for your use case
- Lower the temperature for deterministic tasks
- Use `deepseek-chat` for general tasks and `deepseek-coder` only for code
Reliability
- Always set reasonable `timeout` values
- Configure `retryCount` for production workloads
- Handle `partial_success` status in batch results
Prompt Engineering
- Use system instructions for consistent behavior
- Leverage templates for standardized tasks
- Include specific output format requirements in prompts
Chaining with Other Actors
The Actor outputs are designed for easy chaining:
```javascript
const { ApifyClient } = require('apify-client');
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Example: chain with a web scraper
const scraperRun = await client.actor('apify/web-scraper').call(scraperInput);
const { items } = await client.dataset(scraperRun.defaultDatasetId).listItems();

// Process the scraped items with DeepSeek
const deepseekInput = {
  mode: 'batch',
  prompts: items.map((item, index) => ({
    id: `item_${index}`,
    prompt: `Summarize: ${item.text}`,
  })),
};
const enrichedRun = await client
  .actor('your-username/deepseek-mcp-actor')
  .call(deepseekInput);
```
Pricing & Cost Transparency
Apify Platform Costs
| Resource | Estimate |
|---|---|
| Memory | 256 MB minimum (configurable up to 4 GB) |
| Compute Units | ~0.004 CU/min at 256 MB |
| Single Run (typical) | ~0.01-0.05 CU |
| Batch Run (10 items) | ~0.05-0.20 CU |
Compute Unit Calculation: Memory (GB) × Run Time (hours). One CU equals 1 GB of memory used for one hour, so a 256 MB run costs roughly 0.004 CU per minute, as shown above.
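Assuming Apify's standard definition of 1 CU = 1 GB of memory for 1 hour (which matches the ~0.004 CU/min figure in the table above), a quick estimator:

```javascript
// Estimate compute units: 1 CU = 1 GB of memory held for 1 hour.
function estimateComputeUnits(memoryMb, runSeconds) {
  return (memoryMb / 1024) * (runSeconds / 3600);
}

// A 256 MB run lasting 5 minutes:
estimateComputeUnits(256, 300); // ≈ 0.0208 CU
```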
AI Provider Costs
FREE Tier (OpenRouter)
| Model | Cost | Best For |
|---|---|---|
| DeepSeek V3.1 Nex N1 | $0.00 | General purpose, 131K context |
| Llama 3.1/3.2 | $0.00 | General tasks, reasoning |
| Mistral 7B | $0.00 | Fast responses, coding |
| Gemma 2 9B | $0.00 | Balanced performance |
| Phi-3 Mini | $0.00 | Long context (128K) |
| Others | $0.00 | Various use cases |
Paid Tier (DeepSeek Direct)
| Model | Input | Output | 1M Tokens Est. |
|---|---|---|---|
| deepseek-chat | $0.14/1M | $0.28/1M | ~$0.21 |
| deepseek-coder | $0.14/1M | $0.28/1M | ~$0.21 |
| deepseek-reasoner | $0.55/1M | $2.19/1M | ~$1.37 |
Cost Examples
| Scenario | Apify CU | AI Cost | Total |
|---|---|---|---|
| 10 prompts (FREE model) | ~0.02 CU | $0.00 | ~$0.001 |
| 100 prompts (FREE model) | ~0.15 CU | $0.00 | ~$0.008 |
| 10 prompts (DeepSeek) | ~0.02 CU | ~$0.02 | ~$0.03 |
| 100 prompts (DeepSeek) | ~0.15 CU | ~$0.20 | ~$0.28 |
Tip: Use FREE models for development and testing, then switch to paid models for production quality if needed.
Support & Feedback
Getting Help
- Documentation: This README and Apify Docs
- Issues: Report bugs via the GitHub repository
- Community: Apify Discord for community support
- DeepSeek API: https://platform.deepseek.com/docs
- OpenRouter: https://openrouter.ai/docs
Providing Feedback
We value your feedback! Please help us improve:
- Rate this Actor on the Apify Store
- Leave a review describing your use case
- Report issues via GitHub with reproduction steps
- Suggest features via GitHub discussions
Common Issues & Solutions
| Issue | Solution |
|---|---|
| "No API key found" | Set OPENROUTER_API_KEY in Actor environment variables |
| Rate limit errors | Reduce batchConcurrency, add delays between runs |
| Timeout errors | Increase timeout setting, reduce maxTokens |
| Empty responses | Check prompt quality, verify model availability |
Changelog
See CHANGELOG.md for version history and updates.
License
Apache 2.0 - See LICENSE file for details.
