DeepSeek MCP Actor

Pricing: from $40.00 / 1,000 results
Developer: Varun Chopra (Maintained by Community)

Access DeepSeek AI and 50+ FREE OpenRouter models via MCP protocol. Generate text, code, and creative content with configurable parameters. Supports batch processing, multiple templates, and streaming responses. Cost-effective AI integration for any workflow.
DeepSeek MCP Actor for Apify

A production-ready Apify Actor that integrates DeepSeek AI models via the Model Context Protocol (MCP), enabling advanced natural language understanding, reasoning, and generation in your automation workflows.

FREE Models Available!

This Actor supports completely FREE AI models via OpenRouter! No credit card required.

Free Models Include:

  • 🦙 Llama 3.1 8B - Meta's latest open model
  • 🌀 Mistral 7B - Fast and efficient
  • 💎 Gemma 2 9B - Google's open model
  • 🔬 Phi-3 Mini - Microsoft's compact powerhouse
  • 🌍 Qwen 2 7B - Alibaba's multilingual model
  • 💨 Zephyr 7B - Fine-tuned for helpfulness
  • 💬 OpenChat 7B - Optimized for conversations

Jump to Free Models Setup →

🚀 Features

  • 🆓 FREE Models via OpenRouter: Access powerful AI models at zero cost
  • DeepSeek AI Integration: Direct, secure integration with DeepSeek's chat and coder models
  • MCP Compatibility: Fully MCP-compliant request/response handling for seamless workflow integration
  • Multiple Operation Modes: Single prompt, batch processing, and multi-turn conversations
  • Prompt Templates: Pre-built templates for common tasks (summarization, classification, entity extraction, etc.)
  • Batch Processing: Efficient concurrent processing with rate limiting and ordered results
  • Robust Error Handling: Comprehensive retry logic with exponential backoff
  • Flexible Configuration: Extensive customization options for model parameters


Free Models with OpenRouter

Why OpenRouter?

OpenRouter provides access to many AI models through a single API, including several completely free models. This is perfect for:

  • 🧪 Testing and prototyping
  • 📚 Learning and experimentation
  • 💡 Personal projects
  • 🚀 Startups with limited budgets

Setup (2 minutes)

  1. Get a FREE API Key:

    • Go to openrouter.ai
    • Sign up (no credit card required)
    • Copy your API key from the dashboard
  2. Configure Apify Actor:

    • Add OPENROUTER_API_KEY to Actor environment variables
    • Set provider to "openrouter" in your input
    • Choose a free model (they have :free suffix)
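
Under the hood, a request to a free model is an ordinary OpenAI-style chat-completions call to OpenRouter. A minimal sketch of the request being built (the endpoint and payload shape follow OpenRouter's OpenAI-compatible API; how this Actor assembles it internally is an assumption):

```javascript
// Sketch: build the OpenRouter chat-completions request for a free model.
function buildOpenRouterRequest(input, apiKey) {
    return {
        url: 'https://openrouter.ai/api/v1/chat/completions',
        headers: {
            Authorization: `Bearer ${apiKey}`,
            'Content-Type': 'application/json',
        },
        body: {
            model: input.model, // e.g. 'meta-llama/llama-3.1-8b-instruct:free'
            messages: [{ role: 'user', content: input.prompt }],
            temperature: input.temperature,
        },
    };
}

const req = buildOpenRouterRequest(
    { model: 'meta-llama/llama-3.1-8b-instruct:free', prompt: 'Hello!', temperature: 0.7 },
    process.env.OPENROUTER_API_KEY || 'sk-or-...',
);
```

Sending it is then a single POST, e.g. `fetch(req.url, { method: 'POST', headers: req.headers, body: JSON.stringify(req.body) })`.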

Free Models Quick Reference

| Model | ID | Best For |
|---|---|---|
| Llama 3.1 8B | meta-llama/llama-3.1-8b-instruct:free | General tasks, reasoning |
| Mistral 7B | mistralai/mistral-7b-instruct:free | Fast responses, coding |
| Gemma 2 9B | google/gemma-2-9b-it:free | Balanced performance |
| Phi-3 Mini | microsoft/phi-3-mini-128k-instruct:free | Long context, efficiency |
| Qwen 2 7B | qwen/qwen-2-7b-instruct:free | Multilingual, Chinese |
| Zephyr 7B | huggingfaceh4/zephyr-7b-beta:free | Helpful, aligned |
| OpenChat 7B | openchat/openchat-7b:free | Conversations |

Example: Using Free Models

{
"provider": "openrouter",
"mode": "single",
"prompt": "Explain quantum computing in simple terms.",
"model": "meta-llama/llama-3.1-8b-instruct:free",
"temperature": 0.7
}

Free vs Paid Comparison

| Feature | Free (OpenRouter) | Paid (DeepSeek) |
|---|---|---|
| Cost | $0 | ~$0.14-$2.19/M tokens |
| Rate Limits | Lower | Higher |
| Model Quality | Good | Excellent |
| Context Length | 4K-128K | 64K-128K |
| Best For | Testing, learning | Production, quality |

Installation

From Apify Store

  1. Navigate to the DeepSeek MCP Actor in the Apify Store
  2. Click "Try for free" or add to your Actors
  3. Set your DEEPSEEK_API_KEY in the Actor's environment variables

Local Development

# Clone the repository
git clone <repository-url>
cd deepseek-mcp-actor
# Install dependencies
npm install
# Build TypeScript
npm run build
# Set environment variable (choose one provider)
# For FREE models via OpenRouter:
export OPENROUTER_API_KEY=your_openrouter_key_here
# For DeepSeek (paid):
export DEEPSEEK_API_KEY=your_api_key_here
# Run locally
npm start

Quick Start

🆓 FREE: Using OpenRouter Models

{
"provider": "openrouter",
"mode": "single",
"prompt": "Explain the concept of machine learning in simple terms.",
"model": "meta-llama/llama-3.1-8b-instruct:free",
"temperature": 0.7
}

Using DeepSeek (Paid)

{
"provider": "deepseek",
"mode": "single",
"prompt": "Explain the concept of machine learning in simple terms.",
"model": "deepseek-chat",
"temperature": 0.7
}

Using a Template

{
"provider": "openrouter",
"mode": "single",
"template": "summarization",
"templateVariables": {
"content": "Your long text content here...",
"maxLength": "100"
}
}

Batch Processing

{
"mode": "batch",
"prompts": [
{ "id": "1", "prompt": "Summarize this article..." },
{ "id": "2", "prompt": "Classify this text..." },
{ "id": "3", "prompt": "Extract entities from..." }
],
"batchConcurrency": 3
}

Configuration

Required Setup

  1. API Key: Set DEEPSEEK_API_KEY as an environment variable or Apify secret

Input Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| mode | string | "single" | Operation mode: single, batch, or conversation |
| prompt | string | - | The prompt for single mode |
| prompts | array | - | Array of prompts for batch mode |
| conversationContext | array | - | Previous messages for conversation mode |
| systemInstructions | string | - | System-level instructions |
| template | string | - | Predefined template name |
| templateVariables | object | - | Variables to inject into template |
| customTemplate | string | - | Custom template with {{variable}} placeholders |
| model | string | "deepseek-chat" | Model to use |
| temperature | number | 0.7 | Creativity level (0-2) |
| maxTokens | number | 2048 | Maximum response tokens |
| topP | number | 0.95 | Nucleus sampling parameter |
| frequencyPenalty | number | 0 | Frequency penalty (-2 to 2) |
| presencePenalty | number | 0 | Presence penalty (-2 to 2) |
| stopSequences | array | - | Stop generation sequences |
| timeout | number | 60000 | Request timeout in ms |
| batchConcurrency | number | 3 | Concurrent batch requests |
| batchOrdered | boolean | true | Maintain input order |
| retryCount | number | 3 | Number of retries |
| retryDelayMs | number | 1000 | Initial retry delay |
| outputFormat | string | "mcp" | Output format: text, json, or mcp |
| includeUsageStats | boolean | true | Include token usage |

Available Models

  • deepseek-chat - General purpose conversational AI
  • deepseek-coder - Optimized for code generation and analysis
  • deepseek-reasoner - Enhanced reasoning capabilities

Operation Modes

Single Mode

Process a single prompt with optional system instructions.

{
"mode": "single",
"prompt": "What are the benefits of renewable energy?",
"systemInstructions": "You are an environmental expert. Provide detailed, factual responses."
}

Batch Mode

Process multiple prompts efficiently with concurrency control.

{
"mode": "batch",
"prompts": [
{
"id": "article_1",
"prompt": "Summarize: {{content}}",
"variables": {
"content": "First article text..."
}
},
{
"id": "article_2",
"prompt": "Summarize: {{content}}",
"variables": {
"content": "Second article text..."
}
}
],
"batchConcurrency": 5,
"batchOrdered": true
}
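
Concurrency-limited batching can be pictured as a small worker pool: batchConcurrency lanes pull prompts off a shared queue, and each result is written back to its original slot so input order is preserved. A minimal sketch of that pattern (illustrative, not the Actor's actual implementation):

```javascript
// Minimal worker-pool sketch of concurrency-limited batch processing.
// `worker` is any async function, e.g. one API call per prompt.
async function runBatch(items, worker, concurrency = 3) {
    const results = new Array(items.length);
    let next = 0;
    async function lane() {
        while (next < items.length) {
            const i = next++; // claim the next item before awaiting
            results[i] = await worker(items[i], i);
        }
    }
    const lanes = Array.from({ length: Math.min(concurrency, items.length) }, lane);
    await Promise.all(lanes);
    return results; // results stay in input order (batchOrdered: true behavior)
}
```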

Conversation Mode

Maintain context across multiple turns.

{
"mode": "conversation",
"conversationContext": [
{ "role": "user", "content": "Tell me about Python." },
{ "role": "assistant", "content": "Python is a high-level programming language..." }
],
"prompt": "What are its main use cases?"
}
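
In conversation mode, the new prompt is appended to the prior turns as a single chat-style messages array, in the same role/content shape most chat APIs use. One plausible way to assemble it (the Actor's exact internals are an assumption):

```javascript
// Sketch: merge systemInstructions, conversationContext, and the new prompt
// into one chat-style messages array.
function buildMessages({ systemInstructions, conversationContext = [], prompt }) {
    const messages = [];
    if (systemInstructions) {
        messages.push({ role: 'system', content: systemInstructions });
    }
    messages.push(...conversationContext); // prior user/assistant turns
    if (prompt) {
        messages.push({ role: 'user', content: prompt }); // the new turn
    }
    return messages;
}
```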

Prompt Templates

Available Templates

| Template | Description | Required Variables |
|---|---|---|
| summarization | Summarize content | content |
| classification | Classify into categories | content, categories |
| entity_extraction | Extract structured entities | content, entityTypes |
| content_generation | Generate new content | topic |
| reasoning | Logical analysis | content |
| sentiment_analysis | Analyze emotional tone | content |
| translation | Translate between languages | content, targetLanguage |
| qa | Question answering | context, question |
| custom | User-defined template | customTemplate |

Template Examples

Summarization

{
"template": "summarization",
"templateVariables": {
"content": "Long article text here...",
"contentType": "news article",
"maxLength": "150",
"focusAreas": "key findings and implications"
}
}

Classification

{
"template": "classification",
"templateVariables": {
"content": "Product review text...",
"categories": "positive, negative, neutral, mixed"
}
}

Entity Extraction

{
"template": "entity_extraction",
"templateVariables": {
"content": "John Smith, CEO of Acme Corp, announced...",
"entityTypes": "person names, organizations, dates, locations",
"outputFormat": "JSON"
}
}

Custom Template

{
"template": "custom",
"customTemplate": "Analyze the following customer feedback and provide:\n1. Main concerns\n2. Positive aspects\n3. Improvement suggestions\n\nFeedback: {{feedback}}\n\n{{#if productName}}Product: {{productName}}{{/if}}",
"templateVariables": {
"feedback": "Customer feedback text...",
"productName": "Widget Pro 3000"
}
}
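
The {{variable}} placeholders, plus the Handlebars-style {{#if}} block in the example above, suggest plain string substitution. A toy renderer that behaves this way (purely illustrative; not the Actor's actual template engine):

```javascript
// Toy template renderer: keep {{#if name}}...{{/if}} blocks only when the
// variable is set, then substitute plain {{name}} placeholders.
function renderTemplate(template, variables) {
    let out = template.replace(
        /\{\{#if (\w+)\}\}([\s\S]*?)\{\{\/if\}\}/g,
        (match, name, body) => (variables[name] ? body : ''),
    );
    out = out.replace(/\{\{(\w+)\}\}/g, (match, name) => variables[name] ?? '');
    return out;
}
```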

Batch Processing

Configuration

{
"mode": "batch",
"prompts": [...],
"batchConcurrency": 3,
"batchOrdered": true,
"systemInstructions": "Apply to all items..."
}

Per-Item Overrides

{
"prompts": [
{
"id": "creative_task",
"prompt": "Write a poem about...",
"parameters": {
"temperature": 0.9,
"maxTokens": 500
}
},
{
"id": "factual_task",
"prompt": "List the steps to...",
"parameters": {
"temperature": 0.2,
"maxTokens": 1000
}
}
]
}

Batch Response Format

{
"requestId": "uuid",
"status": "success",
"results": [
{
"id": "item_1",
"status": "success",
"prompt": "Original prompt...",
"content": "AI response...",
"usage": { "totalTokens": 150 },
"durationMs": 1200
},
{
"id": "item_2",
"status": "error",
"prompt": "Original prompt...",
"error": {
"code": "RATE_LIMIT",
"message": "Rate limit exceeded",
"retryable": true
}
}
],
"usage": { "totalTokens": 500 },
"timestamp": "2025-01-06T..."
}

API Reference

Output Format

MCP Format (default)

{
"requestId": "unique-uuid",
"status": "success",
"content": "AI response content...",
"usage": {
"promptTokens": 50,
"completionTokens": 150,
"totalTokens": 200,
"estimatedCostUsd": 0.000056
},
"metadata": {
"model": "deepseek-chat",
"durationMs": 1500,
"retryCount": 0,
"apiResponseId": "chatcmpl-xxx"
},
"timestamp": "2025-01-06T12:00:00.000Z"
}

JSON Format

{
"success": true,
"content": "AI response content...",
"usage": {
"promptTokens": 50,
"completionTokens": 150,
"totalTokens": 200
}
}

Text Format

AI response content...

Examples

Use Case: Enrich Scraped Data

{
"mode": "batch",
"prompts": [
{
"id": "product_1",
"prompt": "Based on this product description, generate: 1) A compelling headline 2) Three key selling points 3) Target audience\n\nDescription: {{description}}",
"variables": {
"description": "Scraped product description..."
}
}
],
"systemInstructions": "You are a marketing expert. Generate concise, compelling copy.",
"temperature": 0.7
}

Use Case: Content Classification Pipeline

{
"mode": "batch",
"template": "classification",
"prompts": [
{
"id": "doc_1",
"prompt": "",
"variables": {
"content": "Document content...",
"categories": "technology, finance, health, entertainment, sports"
}
}
],
"temperature": 0.1
}

Use Case: Multi-turn Data Analysis

{
"mode": "conversation",
"conversationContext": [
{
"role": "system",
"content": "You are a data analyst. Analyze the provided data and answer questions."
},
{
"role": "user",
"content": "Here's our Q4 sales data: [data]"
},
{
"role": "assistant",
"content": "I've analyzed the Q4 data. Key findings: ..."
}
],
"prompt": "What factors contributed most to the December spike?",
"model": "deepseek-reasoner"
}

Error Handling

Error Codes

| Code | Description | Retryable |
|---|---|---|
| INVALID_INPUT | Invalid input parameters | No |
| API_ERROR | DeepSeek API error | Yes |
| RATE_LIMIT | Rate limit exceeded | Yes |
| TIMEOUT | Request timed out | Yes |
| NETWORK_ERROR | Network connection failed | Yes |
| AUTH_ERROR | Authentication failed | No |
| MODEL_ERROR | Invalid model specified | No |
| TEMPLATE_ERROR | Template processing error | No |
| BATCH_ERROR | Batch processing error | No |

Error Response

{
"requestId": "uuid",
"status": "error",
"error": {
"code": "RATE_LIMIT",
"message": "API rate limit exceeded",
"details": "Too many requests. Please wait before retrying.",
"retryable": true,
"retryAfterMs": 60000
},
"timestamp": "2025-01-06T..."
}

Retry Configuration

{
"retryCount": 3,
"retryDelayMs": 1000
}

The Actor uses exponential backoff with:

  • Initial delay: retryDelayMs
  • Max delay: 30 seconds
  • Backoff multiplier: 2x
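
The schedule these three parameters describe can be computed directly; this helper reproduces the policy (a sketch of the stated behavior, not the Actor's code):

```javascript
// Exponential backoff: the delay doubles on each attempt, capped at 30 seconds.
function retryDelayMs(attempt, initialDelayMs = 1000) {
    const MAX_DELAY_MS = 30000;
    const MULTIPLIER = 2;
    return Math.min(initialDelayMs * MULTIPLIER ** attempt, MAX_DELAY_MS);
}
// attempt 0 -> 1000 ms, attempt 1 -> 2000 ms, attempt 2 -> 4000 ms,
// attempt 5 -> 30000 ms (capped)
```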

Security

API Key Management

  • Store API keys in Apify Secrets, never in input
  • Set DEEPSEEK_API_KEY as an environment variable
  • Keys are never logged or included in error messages

Data Privacy

  • Input content is not logged
  • Response content is not logged
  • Only metadata and statistics are logged

Best Practices

  1. Use Apify Secrets for sensitive configuration
  2. Set appropriate timeouts for your use case
  3. Use batch processing for large datasets
  4. Monitor token usage to control costs

Best Practices

Performance

  • Use batch mode for multiple prompts (more efficient than sequential single calls)
  • Set appropriate batchConcurrency (3-5 is usually optimal)
  • Set batchOrdered: false for faster processing when output order doesn't matter

Cost Optimization

  • Use appropriate maxTokens for your use case
  • Lower temperature for deterministic tasks
  • Use deepseek-chat for general tasks, deepseek-coder only for code

Reliability

  • Always set reasonable timeout values
  • Configure retryCount for production workloads
  • Handle partial_success status in batch results
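
For the "handle partial_success" point above, one simple pattern is to partition the batch results by status and requeue only the retryable failures. A sketch based on the batch response format shown earlier:

```javascript
// Partition a batch response into successes, failures, and retryable failures.
function splitBatchResults(batchResponse) {
    const succeeded = batchResponse.results.filter((r) => r.status === 'success');
    const failed = batchResponse.results.filter((r) => r.status === 'error');
    const retryable = failed.filter((r) => r.error && r.error.retryable);
    return { succeeded, failed, retryable };
}
```

The `retryable` list can then be fed back into a follow-up batch run, while non-retryable errors (e.g. AUTH_ERROR) are surfaced immediately.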

Prompt Engineering

  • Use system instructions for consistent behavior
  • Leverage templates for standardized tasks
  • Include specific output format requirements in prompts

Chaining with Other Actors

The Actor outputs are designed for easy chaining:

// Example: chain with a web scraper using the Apify API client
const { ApifyClient } = require('apify-client');
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });
// Run the scraper and wait for it to finish
const scraperRun = await client.actor('apify/web-scraper').call(scraperInput);
const { items } = await client.dataset(scraperRun.defaultDatasetId).listItems();
// Process the scraped items with DeepSeek
const deepseekInput = {
mode: 'batch',
prompts: items.map((item, index) => ({
id: `item_${index}`,
prompt: `Summarize: ${item.text}`,
})),
};
const enrichedRun = await client.actor('your-username/deepseek-mcp-actor').call(deepseekInput);

Pricing & Cost Transparency

Apify Platform Costs

| Resource | Estimate |
|---|---|
| Memory | 256 MB minimum (configurable up to 4 GB) |
| Compute Units | ~0.004 CU/min at 256 MB |
| Single Run (typical) | ~0.01-0.05 CU |
| Batch Run (10 items) | ~0.05-0.20 CU |

Compute Unit Calculation: Memory (GB) × Run Time (hours); for example, a one-minute run at 256 MB consumes roughly 0.25 × 1/60 ≈ 0.004 CU.
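
Apify defines one compute unit as 1 GB of memory used for 1 hour, so the estimate is a one-line multiplication:

```javascript
// 1 CU = 1 GB of memory x 1 hour of run time (Apify's definition).
function computeUnits(memoryMb, runSeconds) {
    return (memoryMb / 1024) * (runSeconds / 3600);
}
// A 60-second run at 256 MB uses 256/1024 * 60/3600 = ~0.0042 CU,
// in line with the ~0.004 CU/min estimate for 256 MB.
```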

AI Provider Costs

FREE Tier (OpenRouter)

| Model | Cost | Best For |
|---|---|---|
| DeepSeek V3.1 Nex N1 | $0.00 | General purpose, 131K context |
| Llama 3.1/3.2 | $0.00 | General tasks, reasoning |
| Mistral 7B | $0.00 | Fast responses, coding |
| Gemma 2 9B | $0.00 | Balanced performance |
| Phi-3 Mini | $0.00 | Long context (128K) |
| Others | $0.00 | Various use cases |

Paid Tier (DeepSeek)

| Model | Input | Output | 1M Tokens Est. |
|---|---|---|---|
| deepseek-chat | $0.14/1M | $0.28/1M | ~$0.21 |
| deepseek-coder | $0.14/1M | $0.28/1M | ~$0.21 |
| deepseek-reasoner | $0.55/1M | $2.19/1M | ~$1.37 |

Cost Examples

| Scenario | Apify CU | AI Cost | Total |
|---|---|---|---|
| 10 prompts (FREE model) | ~0.02 CU | $0.00 | ~$0.001 |
| 100 prompts (FREE model) | ~0.15 CU | $0.00 | ~$0.008 |
| 10 prompts (DeepSeek) | ~0.02 CU | ~$0.02 | ~$0.03 |
| 100 prompts (DeepSeek) | ~0.15 CU | ~$0.20 | ~$0.28 |

💡 Tip: Use FREE models for development and testing, then switch to paid models for production quality if needed.

Support & Feedback

Getting Help

Providing Feedback

We value your feedback! Please help us improve:

  1. โญ Rate this Actor on the Apify Store
  2. ๐Ÿ’ฌ Leave a review describing your use case
  3. ๐Ÿ“ Report issues via GitHub with reproduction steps
  4. ๐Ÿ’ก Suggest features via GitHub discussions

Common Issues & Solutions

| Issue | Solution |
|---|---|
| "No API key found" | Set OPENROUTER_API_KEY in Actor environment variables |
| Rate limit errors | Reduce batchConcurrency, add delays between runs |
| Timeout errors | Increase timeout setting, reduce maxTokens |
| Empty responses | Check prompt quality, verify model availability |

Changelog

See CHANGELOG.md for version history and updates.

License

Apache 2.0 - See LICENSE file for details.