Prompt Engineering Helper

Transform basic prompts into optimized LLM prompts using 12 research-proven templates. Works with ChatGPT, Claude, GPT-4, and any LLM.

Pricing: from $0.01 / 1,000 results
Developer: LIAICHI MUSTAPHA (Maintained by Community)
Last modified: 2 days ago

Prompt Engineering Templates

Tired of Writing Bad Prompts?

You know ChatGPT can do amazing things. You've seen others get incredible results. But your prompts? They get mediocre responses.

The problem isn't the AI. It's how you're asking.

This actor automatically wraps your prompts in research-proven templates used by prompt engineering experts. No more remembering techniques. No more inconsistent results. Just paste your question, pick a template, and get a prompt that actually works.


What It Does

Input: Your basic prompt
Output: Optimized prompt ready for ChatGPT, Claude, GPT-4, or any LLM

Example

Before:

How do I fix a memory leak in my Python app?

After (Chain of Thought):

How do I fix a memory leak in my Python app?
Let's work this out step by step to ensure we have the right answer:
1. First, let's break down the problem
2. Then, let's consider each component
3. Finally, let's arrive at a solution
Think through this carefully and show your reasoning.

Result: Instead of a generic answer, the LLM gives you systematic, step-by-step debugging.
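Under the hood, this kind of enhancement is plain string templating. Here is a minimal sketch of a Chain of Thought wrapper matching the example above (the function name is illustrative, not the actor's actual code):

```javascript
// Illustrative sketch only: wraps a prompt in the Chain of Thought
// template shown above. Not the actor's actual implementation.
function applyChainOfThought(userPrompt) {
  return [
    userPrompt,
    "Let's work this out step by step to ensure we have the right answer:",
    "1. First, let's break down the problem",
    "2. Then, let's consider each component",
    "3. Finally, let's arrive at a solution",
    "Think through this carefully and show your reasoning.",
  ].join("\n");
}

const enhanced = applyChainOfThought("How do I fix a memory leak in my Python app?");
```

The original prompt stays intact on the first line; the template only appends reasoning scaffolding after it.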


12 Templates Inside

Each template is based on peer-reviewed research (Wei et al. 2022, Kojima et al. 2022, etc.) and optimized for specific tasks.

1. Zero-Shot - Simple & Direct

Best for straightforward questions that need clear answers.

2. Few-Shot Learning - With Examples

Best for teaching the model a specific pattern or style with 3 worked examples.

3. Chain of Thought - Step-by-Step Reasoning

Best for complex problems, debugging, and analysis. Adds a step-by-step reasoning structure.

4. Zero-Shot CoT - Quick Reasoning

Best for quick reasoning tasks. Just adds "Let's think step by step."

5. Role-Based - Expert Persona

Best for specialized advice. The model adopts an expert persona (15+ years of experience).

6. Structured Output - Formatted Response

Best for parseable responses. Gets organized sections, code blocks, bullet points.

7. Emotional Stimulus - High Priority

Best for critical tasks. Adds urgency cues, which research suggests can improve response quality.

8. Step-Back - Conceptual First

Best for learning. Explains fundamentals before diving into specifics.

9. Self-Consistency - Multiple Approaches

Best for important decisions. Generates 3 different reasoning paths.

10. Problem Decomposition - Break It Down

Best for very complex problems. Breaks into sub-problems and solves each.

11. Metacognitive - Explain Reasoning

Best for transparency. Model explains what's relevant and why.

12. Comparative Analysis - Compare Options

Best for choosing between alternatives. Structured pros/cons comparison.


How to Use

Input

{
  "user_prompt": "Your question here",
  "template_type": "chain_of_thought",
  "custom_context": "Any additional context (optional)",
  "output_format": "markdown"
}

Parameters

Parameter        Required   Description
user_prompt      ✅ Yes     Your question or prompt
template_type    No         Which template to apply (default: chain_of_thought)
custom_context   No         Extra context like tech stack, constraints
output_format    No         text (JSON) or markdown (default: text)

Template Options

Use one of these for template_type:

  • zero_shot
  • few_shot
  • chain_of_thought ⭐ (recommended)
  • zero_shot_cot
  • role_based
  • structured_output
  • emotional_stimulus
  • step_back
  • self_consistency
  • problem_decomposition
  • metacognitive
  • comparative_analysis
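If you build the input programmatically, it can help to validate template_type before calling the actor. A hedged client-side sketch (the constant and function names are hypothetical, not part of the actor):

```javascript
// Hypothetical client-side check that template_type is one of the
// twelve supported values listed above.
const TEMPLATE_TYPES = [
  "zero_shot", "few_shot", "chain_of_thought", "zero_shot_cot",
  "role_based", "structured_output", "emotional_stimulus", "step_back",
  "self_consistency", "problem_decomposition", "metacognitive",
  "comparative_analysis",
];

function isValidTemplateType(t) {
  return TEMPLATE_TYPES.includes(t);
}
```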

Output

You get:

  • enhanced_prompt - Copy this and paste into your LLM
  • original_prompt - Your input for reference
  • template_name - Which template was used
  • character_count / word_count - Stats
  • usage_instructions - How to use it
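For reference, the character_count and word_count fields are straightforward to derive from enhanced_prompt. A sketch of one plausible counting rule (the actor's exact rules are an assumption here):

```javascript
// Sketch: how the count fields plausibly relate to enhanced_prompt.
// The actor's exact counting rules are an assumption.
const enhancedPrompt = "How do I optimize my database?\nLet's think step by step.";
const characterCount = enhancedPrompt.length;                     // includes the newline
const wordCount = enhancedPrompt.trim().split(/\s+/).length;      // whitespace-delimited words
```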

Real Examples

Debugging Code

{
  "user_prompt": "My React app crashes when users log in",
  "template_type": "problem_decomposition",
  "custom_context": "Using JWT, Redux, Auth0"
}

Writing Content

{
  "user_prompt": "Write an intro for my blog about AI ethics",
  "template_type": "structured_output"
}

Making Decisions

{
  "user_prompt": "MongoDB vs PostgreSQL for my SaaS?",
  "template_type": "comparative_analysis",
  "custom_context": "10k users expected, team of 8"
}

Learning Concepts

{
  "user_prompt": "Explain how neural networks work",
  "template_type": "step_back"
}

Which Template Should I Use?

Simple question?      → zero_shot
Need examples?        → few_shot
Complex problem?      → chain_of_thought
Expert advice?        → role_based
Specific format?      → structured_output
Critical task?        → emotional_stimulus or self_consistency
Learning something?   → step_back
Very complex?         → problem_decomposition

Not sure? Use chain_of_thought - it works for 80% of cases.
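In an automated pipeline, that decision table can be encoded directly. A hypothetical helper (not part of the actor) that falls back to the recommended default:

```javascript
// Hypothetical helper encoding the decision table above; falls back to
// the recommended default, chain_of_thought, for anything unlisted.
function pickTemplate(need) {
  const table = {
    "simple question": "zero_shot",
    "need examples": "few_shot",
    "complex problem": "chain_of_thought",
    "expert advice": "role_based",
    "specific format": "structured_output",
    "critical task": "self_consistency",
    "learning": "step_back",
    "very complex": "problem_decomposition",
  };
  return table[need] ?? "chain_of_thought";
}
```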


Works With Everything

✅ ChatGPT (GPT-3.5, GPT-4, GPT-4o)
✅ Claude (Opus, Sonnet, Haiku)
✅ Google Gemini
✅ Meta Llama
✅ Mistral
✅ Any other LLM

No API calls made. This actor just wraps your prompt in a template. You paste the result into your LLM.


Use in Workflows

n8n

Trigger → Apify (this actor) → OpenAI → Process result

Zapier

Slack mention → Apify → Claude → Reply in thread

Make

Webhook → Apify → GPT-4 → Email response

API

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });
const run = await client.actor('muliaichi/prompt-engineering-templates').call({
  user_prompt: "How do I optimize my database?",
  template_type: "chain_of_thought"
});
// Actor results are written to the run's default dataset
const { items } = await client.dataset(run.defaultDatasetId).listItems();
const enhancedPrompt = items[0].enhanced_prompt;
// Use with OpenAI, Claude, etc.

Why This Works

Based on research from:

  • Wei et al. (2022) - Chain-of-Thought Prompting
  • Kojima et al. (2022) - Zero-Shot Reasoners
  • Wang et al. (2022) - Self-Consistency
  • Bsharat et al. (2024) - Principled Instructions

These aren't random hacks. They're techniques that published studies report can meaningfully improve LLM response quality on reasoning tasks.


Pricing

Fast: < 1 second per prompt
Cheap: ~$0.0001 per run
No AI costs: Just template wrapping (no LLM API calls)


Questions?

Does this call GPT-4 or Claude?
No. It wraps your prompt in a template. You paste the output into your LLM.

Will this work with [any LLM]?
Yes. Works with ChatGPT, Claude, Gemini, Llama, any LLM.

Can I use this commercially?
Yes. It's just template wrapping.

Which template is best?
Chain of Thought works for most cases. For critical decisions, use Self-Consistency.


Contact

LIAICHI MUSTAPHA
AI Engineer | Creator of n8nlearninghub.com

📧 mustaphaliaichi@gmail.com
🐙 GitHub: MuLIAICHI
💬 r/n8nLearningHub


Stop guessing. Start using templates that work. 🚀