Prof Prom: Professional Prompt generation

Pricing: pay per usage

Developer: Sanat Jha (Maintained by Community)


Professional Prompt Improver with Groq AI

An Apify Actor that uses Groq AI to transform your basic prompt into a detailed, professional, and highly effective prompt optimized for your target AI model.

🌟 Features

  • Multi-Model Support: Optimizes prompts for popular AI models including:

    • GPT-4 & GPT-3.5 Turbo (OpenAI)
    • Claude 3 (Opus, Sonnet, Haiku) (Anthropic)
    • Gemini Pro & Ultra (Google)
    • Llama 3 (Meta)
    • Mixtral (Mistral AI)
  • Intelligent Optimization: Uses Groq AI to craft prompts that:

    • Maintain your original intent and goals
    • Add relevant context and background
    • Include clear, specific instructions
    • Define desired output format and structure
    • Leverage model-specific strengths
    • Follow best practices for each model
  • Model-Aware: Understands each model's characteristics:

    • Context window sizes
    • Strengths and capabilities
    • Best practices and optimization techniques

🚀 Quick Start

Running Locally

  1. Install dependencies:

     $ pip install -r requirements.txt

  2. Run the Actor:

     $ apify run

Deploy to Apify

  1. Log in to Apify:

     $ apify login

  2. Push to the Apify platform:

     $ apify push

📝 Input

The Actor accepts the following input parameters:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `user_prompt` | string | Yes | Your original prompt that you want to improve |
| `target_model` | string | Yes | The AI model you plan to use (e.g., "gpt-4", "claude-3-opus") |
| `groq_api_key` | string | Yes | Your Groq API key (see "Getting a Groq API Key" below) |

Example Input

{
    "user_prompt": "Write a blog post about AI",
    "target_model": "gpt-4",
    "groq_api_key": "gsk_..."
}
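The same input can also be prepared programmatically. A minimal Python sketch; the token and Actor ID in the comments are placeholders, and the `apify-client` usage is an illustration of how a run could be started, not part of this Actor's code:

```python
import json

# Input record matching the three required fields above.
run_input = {
    "user_prompt": "Write a blog post about AI",
    "target_model": "gpt-4",
    "groq_api_key": "gsk_...",  # placeholder -- substitute your real Groq key
}

# The Apify API expects the input serialized as JSON.
payload = json.dumps(run_input)

# With the apify-client package installed, a run could be started like:
#   from apify_client import ApifyClient
#   client = ApifyClient("<APIFY_TOKEN>")
#   run = client.actor("<actor-id>").call(run_input=run_input)
```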

📤 Output

The Actor outputs a dataset with the following structure:

{
    "original_prompt": "Write a blog post about AI",
    "improved_prompt": "You are an expert technology writer specializing in artificial intelligence...",
    "target_model": "gpt-4",
    "model_characteristics": {
        "strengths": "advanced reasoning, complex tasks, creative writing, code generation",
        "context_window": "8K-32K tokens",
        "best_practices": "Use clear instructions, provide examples, break complex tasks into steps"
    },
    "prompt_length": 1247,
    "improvement_ratio": 45.6,
    "timestamp": "2025-12-12T10:12:26.000Z"
}
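The derived fields can be reproduced from the two prompt strings. A minimal sketch, assuming `prompt_length` counts characters of the improved prompt and `improvement_ratio` is the improved prompt's length divided by the original's (the exact formula is not documented):

```python
from datetime import datetime, timezone

def build_output_record(original: str, improved: str, target_model: str) -> dict:
    """Assemble an output record with the derived fields shown above."""
    return {
        "original_prompt": original,
        "improved_prompt": improved,
        "target_model": target_model,
        # Assumption: character count of the improved prompt.
        "prompt_length": len(improved),
        # Assumption: how much longer the improved prompt is, as a ratio.
        "improvement_ratio": round(len(improved) / len(original), 1),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```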

🎯 Use Cases

  • Content Creation: Transform simple content requests into detailed briefs
  • Code Generation: Create comprehensive coding prompts with examples and constraints
  • Analysis Tasks: Craft detailed analytical prompts with specific frameworks
  • Creative Writing: Develop rich creative prompts with style guides and context
  • Research: Build thorough research prompts with methodology and scope

🔧 How It Works

  1. Input Processing: Receives your basic prompt and target model selection
  2. Model Analysis: Identifies the characteristics and best practices for your chosen model
  3. Prompt Engineering: Uses Groq AI (Mixtral-8x7B) to craft an optimized prompt that:
    • Expands on your original idea
    • Adds relevant context and structure
    • Incorporates model-specific optimizations
    • Includes clear instructions and formatting
  4. Output Generation: Returns the improved prompt ready to use with your target model
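Step 3 can be sketched with the `groq` Python package. The system-prompt wording below is an assumption for illustration, not this Actor's actual instruction text:

```python
def build_messages(user_prompt: str, target_model: str) -> list:
    """Build the chat messages for the prompt-engineering request."""
    system = (
        "You are an expert prompt engineer. Rewrite the user's prompt into a "
        f"detailed, professional prompt optimized for {target_model}. "
        "Preserve the original intent, add relevant context, and specify the "
        "desired output format."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

# With the groq package installed, the request would look like:
#   from groq import Groq
#   client = Groq(api_key=groq_api_key)
#   resp = client.chat.completions.create(
#       model="mixtral-8x7b-32768",
#       messages=build_messages(user_prompt, target_model),
#       temperature=0.7,   # matches the Technical Details section
#       max_tokens=2048,
#   )
#   improved_prompt = resp.choices[0].message.content
```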

📊 Supported Models

OpenAI

  • GPT-4: Advanced reasoning, complex tasks (8K-32K tokens)
  • GPT-3.5 Turbo: Fast responses, general tasks (4K-16K tokens)

Anthropic

  • Claude 3 Opus: Nuanced understanding, long-form content (200K tokens)
  • Claude 3 Sonnet: Balanced performance, creative tasks (200K tokens)
  • Claude 3 Haiku: Speed and efficiency (200K tokens)

Google

  • Gemini Pro: Multimodal understanding, reasoning (32K tokens)
  • Gemini Ultra: Advanced reasoning, complex problem-solving (32K tokens)

Meta

  • Llama 3 70B: Strong reasoning, code generation (8K tokens)
  • Llama 3 8B: Efficient, fast inference (8K tokens)

Mistral AI

  • Mixtral 8x7B: Mixture of experts, multilingual (32K tokens)
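Per-model traits like these lend themselves to a simple lookup table. A sketch (the keys and the fallback behavior are assumptions; the strings echo the lists above):

```python
MODEL_CHARACTERISTICS = {
    "gpt-4": {"strengths": "advanced reasoning, complex tasks", "context_window": "8K-32K tokens"},
    "gpt-3.5-turbo": {"strengths": "fast responses, general tasks", "context_window": "4K-16K tokens"},
    "claude-3-opus": {"strengths": "nuanced understanding, long-form content", "context_window": "200K tokens"},
    "gemini-pro": {"strengths": "multimodal understanding, reasoning", "context_window": "32K tokens"},
    "llama-3-70b": {"strengths": "strong reasoning, code generation", "context_window": "8K tokens"},
    "mixtral-8x7b": {"strengths": "mixture of experts, multilingual", "context_window": "32K tokens"},
}

def characteristics_for(model: str) -> dict:
    """Look up a model's traits, falling back to a generic entry."""
    return MODEL_CHARACTERISTICS.get(
        model, {"strengths": "general-purpose", "context_window": "unknown"}
    )
```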

🔑 Getting a Groq API Key

  1. Visit console.groq.com
  2. Sign up or log in
  3. Navigate to API Keys section
  4. Create a new API key
  5. Copy and use it in the Actor input

πŸ“ Project Structure

.actor/
β”œβ”€β”€ actor.json # Actor configuration
β”œβ”€β”€ input_schema.json # Input validation schema
└── output_schema.json # Output data schema
src/
└── main.py # Main Actor logic
requirements.txt # Python dependencies
Dockerfile # Container definition
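The input fields documented above are validated by `.actor/input_schema.json`. A plausible sketch of that schema, following the Apify input-schema format (the titles, editors, and `isSecret` flag are assumptions, not the file's actual contents):

```json
{
    "title": "Professional Prompt Improver input",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "user_prompt": { "title": "User prompt", "type": "string", "editor": "textarea" },
        "target_model": { "title": "Target model", "type": "string", "editor": "textfield" },
        "groq_api_key": { "title": "Groq API key", "type": "string", "editor": "textfield", "isSecret": true }
    },
    "required": ["user_prompt", "target_model", "groq_api_key"]
}
```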

🛠️ Technical Details

  • Runtime: Python 3.11+
  • Framework: Apify SDK for Python
  • AI Provider: Groq AI (using Mixtral-8x7B-32768)
  • Temperature: 0.7 (balanced creativity and consistency)
  • Max Tokens: 2048 (detailed prompts)

📚 Resources

💡 Tips for Best Results

  1. Be Specific: Even basic prompts benefit from some specificity
  2. Choose the Right Model: Select the model that matches your task complexity
  3. Review Output: The improved prompt is a starting point; customize it as needed
  4. Iterate: Run multiple times with different models to compare approaches
  5. Combine: Use improved prompts as templates for similar tasks

🤝 Support

📄 License

This Actor is provided as-is under the Apache 2.0 License.


Built with ❀️ using Apify and Groq AI