Ai Prompt Library

An Apify Actor by Cody Churchwell (Maintained by Community)

Pricing: from $0.01 / 1,000 results

🎯 AI Prompt Library & Version Control

License: MIT

Production-grade prompt version control and A/B testing. Track performance, compare versions, rollback changes. Free alternative to PromptLayer ($49-299/mo).

🎯 Why This Actor?

Managing AI prompts in production is challenging:

  • No version history - You lose track of what worked
  • No A/B testing - You can't compare prompt performance
  • No rollback - Breaking changes get deployed to production
  • Expensive tools - PromptLayer costs $49-299/month
  • No collaboration - Teams can't share prompt libraries

This Actor solves all these problems with Git-like version control for prompts.

✨ Key Features

๐Ÿ“ Version Control

  • Semantic versioning (1.0.0, 1.1.0, 2.0.0)
  • Change tracking - Full diff between versions
  • Changelog - Document why you made changes
  • Status management - draft → staging → production → deprecated
  • Rollback - Instantly revert to any previous version

🧪 A/B Testing

  • Compare versions - Test multiple prompts side-by-side
  • Performance metrics - Latency, cost, quality scores
  • Automatic winner - AI-powered best version selection
  • Test cases - Run prompts against predefined inputs
  • Statistical analysis - Confidence intervals and significance

📊 Analytics

  • Performance tracking - Monitor latency, cost, quality over time
  • Version analytics - See which versions perform best
  • Usage stats - Total runs, success rates, error rates
  • Tag analytics - Group prompts by tags for insights
  • Time-based reports - 1h, 24h, 7d, 30d, all-time

🔄 Import/Export

  • Multiple formats - JSON, YAML, CSV, LangChain, PromptLayer
  • Easy migration - Import from other tools
  • Backup - Export your entire library
  • Sharing - Share prompts across teams

🎨 Template Variables

  • Dynamic prompts - Use {{variable}} syntax
  • Auto-detection - Automatically finds variables in templates
  • Type safety - Validate inputs before execution
  • Reusable - Same prompt, different inputs
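The auto-detection and substitution behavior is easy to reproduce outside the Actor. A minimal Python sketch (illustrative only, not the Actor's internal code) of `{{variable}}` detection and rendering:

```python
import re

def detect_variables(template: str) -> list[str]:
    """Find unique {{variable}} placeholders, in order of first appearance."""
    seen = []
    for name in re.findall(r"\{\{\s*(\w+)\s*\}\}", template):
        if name not in seen:
            seen.append(name)
    return seen

def render(template: str, inputs: dict) -> str:
    """Substitute variables, failing loudly on any missing input."""
    missing = [v for v in detect_variables(template) if v not in inputs]
    if missing:
        raise ValueError(f"Missing inputs: {missing}")
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(inputs[m.group(1)]),
        template,
    )

template = "Summarize this blog post in {{length}} words:\n\n{{content}}"
print(detect_variables(template))  # ['length', 'content']
print(render(template, {"length": 50, "content": "..."}))
```

The validation step is what "type safety" buys you in practice: a missing input raises before any tokens are spent.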

📊 Use Cases

| Use Case | Configuration |
|---|---|
| Production Prompts | Create → Test → Version (1.0.0) → Status: production |
| A/B Testing | Version 1.0.0 vs 1.1.0 → Run test → Deploy winner |
| Rollback | Production broken → Rollback to previous version → Fix |
| Team Collaboration | Tags: team-ai, team-backend → Export/import → Share |
| Cost Optimization | Compare versions → Track cost metrics → Choose cheapest |

🚀 Quick Start

Create Your First Prompt

```json
{
  "operation": "create",
  "promptData": {
    "name": "blog-summarizer",
    "template": "Summarize this blog post in {{length}} words:\n\n{{content}}",
    "variables": ["length", "content"],
    "model": "gpt-4",
    "temperature": 0.7,
    "tags": ["summarization", "production"],
    "metadata": {
      "owner": "content-team",
      "description": "Main blog summarization prompt"
    }
  }
}
```

Output:

```json
{
  "success": true,
  "promptId": "prompt_blog_summarizer_1234567890",
  "version": "1.0.0",
  "message": "Prompt created successfully"
}
```

Create a New Version

```json
{
  "operation": "version",
  "promptId": "prompt_blog_summarizer_1234567890",
  "promptData": {
    "name": "blog-summarizer",
    "template": "Create a concise {{length}}-word summary of:\n\n{{content}}",
    "model": "gpt-4",
    "temperature": 0.5
  },
  "versionInfo": {
    "versionNumber": "1.1.0",
    "changelog": "Improved clarity, reduced temperature for consistency",
    "status": "staging"
  }
}
```

Compare Versions

```json
{
  "operation": "compare",
  "promptId": "prompt_blog_summarizer_1234567890",
  "compareVersions": ["1.0.0", "1.1.0"]
}
```

Output:

```json
{
  "success": true,
  "comparisons": [
    {
      "from": "1.0.0",
      "to": "1.1.0",
      "templateChanged": true,
      "temperatureChanged": true,
      "changes": [
        {
          "type": "removal",
          "value": "Summarize this blog post in {{length}} words:"
        },
        {
          "type": "addition",
          "value": "Create a concise {{length}}-word summary of:"
        }
      ],
      "changelog": "Improved clarity, reduced temperature for consistency"
    }
  ]
}
```

A/B Test Versions

```json
{
  "operation": "test",
  "promptId": "prompt_blog_summarizer_1234567890",
  "testConfig": {
    "versions": ["1.0.0", "1.1.0"],
    "testCases": [
      {
        "input": {
          "length": "50",
          "content": "Long blog post content here..."
        }
      },
      {
        "input": {
          "length": "100",
          "content": "Another blog post..."
        }
      }
    ],
    "metrics": ["latency", "cost", "quality"]
  }
}
```

Output:

```json
{
  "success": true,
  "winner": "1.1.0",
  "recommendation": "Version 1.1.0 performed best with 89.3% quality",
  "results": [
    {
      "version": "1.0.0",
      "avgMetrics": {
        "avgLatency": 1243,
        "avgCost": 0.0234,
        "avgQuality": 0.851
      }
    },
    {
      "version": "1.1.0",
      "avgMetrics": {
        "avgLatency": 987,
        "avgCost": 0.0198,
        "avgQuality": 0.893
      }
    }
  ]
}
```

Rollback to Previous Version

```json
{
  "operation": "rollback",
  "promptId": "prompt_blog_summarizer_1234567890",
  "rollbackToVersion": "1.0.0"
}
```
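The remaining operations follow the same input shape. For example, a sketch of an analytics request, using the `analyticsRange` windows listed under the Analytics feature (`1h`, `24h`, `7d`, `30d`):

```json
{
  "operation": "analytics",
  "promptId": "prompt_blog_summarizer_1234567890",
  "analyticsRange": "7d"
}
```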

📥 Input

Operations

| Operation | Description | Required Fields |
|---|---|---|
| create | Create new prompt | promptData |
| update | Update current version | promptId, promptData |
| version | Create new version | promptId, promptData, versionInfo |
| compare | Compare versions | promptId, compareVersions (array) |
| test | A/B test versions | promptId, testConfig |
| export | Export library | exportFormat |
| import | Import prompts | importSource |
| rollback | Rollback to version | promptId, rollbackToVersion |
| analytics | Get performance stats | promptId, analyticsRange |

Prompt Data Structure

```json
{
  "name": "string (required)",
  "template": "string with {{variables}} (required)",
  "variables": ["array", "of", "variable", "names"],
  "model": "gpt-4 | gpt-3.5-turbo",
  "temperature": 0.0-2.0,
  "maxTokens": number,
  "tags": ["array", "of", "tags"],
  "metadata": {
    "owner": "string",
    "description": "string",
    "custom": "any"
  }
}
```
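You can pre-validate `promptData` client-side before calling the Actor. A minimal Python sketch (the field rules come from the structure above; the helper itself is hypothetical, not part of the Actor):

```python
def validate_prompt_data(data: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not data.get("name"):
        errors.append("name is required")
    if not data.get("template"):
        errors.append("template is required")
    # temperature must fall in the documented 0.0-2.0 range
    temperature = data.get("temperature", 0.7)
    if not (0.0 <= temperature <= 2.0):
        errors.append("temperature must be between 0.0 and 2.0")
    if not isinstance(data.get("tags", []), list):
        errors.append("tags must be an array")
    return errors

print(validate_prompt_data({
    "name": "blog-summarizer",
    "template": "Summarize: {{content}}",
    "temperature": 3.0,
}))  # ['temperature must be between 0.0 and 2.0']
```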

Version Info Structure

```json
{
  "versionNumber": "1.2.3 (semantic version)",
  "changelog": "What changed and why",
  "status": "draft | staging | production | deprecated"
}
```

Test Config Structure

```json
{
  "versions": ["1.0.0", "1.1.0"],
  "testCases": [
    {
      "input": {"variable": "value"},
      "expected": "optional expected output"
    }
  ],
  "metrics": ["latency", "cost", "quality", "tokens"]
}
```

📤 Output

All operations return:

```json
{
  "success": true | false,
  "promptId": "string",
  "version": "string",
  "message": "string",
  "prompt": { /* full prompt object */ },
  "comparisons": [ /* for compare */ ],
  "results": [ /* for test */ ],
  "analytics": { /* for analytics */ }
}
```

🎓 Advanced Usage

Semantic Versioning Best Practices

  • 1.0.0 → 1.0.1 - Bug fixes, typo corrections
  • 1.0.0 → 1.1.0 - New features, improved wording, model changes
  • 1.0.0 → 2.0.0 - Breaking changes, complete rewrites
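If you script version bumps in your own tooling, the convention maps to a small helper. A sketch (illustrative only; the Actor expects the resulting string in `versionInfo.versionNumber`):

```python
def bump(version: str, change: str) -> str:
    """Bump a semantic version: 'patch' for fixes, 'minor' for
    new features or wording changes, 'major' for breaking rewrites."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"Unknown change type: {change}")

print(bump("1.0.0", "patch"))  # 1.0.1
print(bump("1.0.3", "minor"))  # 1.1.0
print(bump("1.4.2", "major"))  # 2.0.0
```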

Status Workflow

```
draft → staging → production → deprecated
  ↓        ↓          ↓            ↓
 test   A/B test   monitor   rollback if needed
```

Tag Strategy

```json
{
  "tags": [
    "team-ai",        // Ownership
    "summarization",  // Functionality
    "production",     // Environment
    "high-priority",  // Priority
    "v2-migration"    // Projects
  ]
}
```

Export Formats

| Format | Best For |
|---|---|
| json | Backup, full fidelity |
| yaml | Human-readable, git commits |
| langchain | LangChain integration |
| promptlayer | Migration from PromptLayer |
| csv | Excel, data analysis |
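An export call pairs the operation with one of these formats; a sketch, using the `exportFormat` field from the Operations table:

```json
{
  "operation": "export",
  "exportFormat": "yaml"
}
```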

💰 Cost Comparison

| Feature | This Actor | PromptLayer | LangSmith |
|---|---|---|---|
| Price | Free | $49-299/mo | $39-299/mo |
| Version Control | ✅ | ✅ | ✅ |
| A/B Testing | ✅ | ✅ | ✅ |
| Analytics | ✅ | ✅ | ✅ |
| Export | ✅ | ❌ | ❌ |
| Self-hosted | ✅ | ❌ | ❌ |
| No vendor lock-in | ✅ | ❌ | ❌ |
| Semantic versioning | ✅ | ❌ | ❌ |
| Full data ownership | ✅ | ❌ | ❌ |

🔧 Integration Examples

Python

```python
from apify_client import ApifyClient

client = ApifyClient('your-token')

# Create prompt
run = client.actor('your-actor-id').call(run_input={
    'operation': 'create',
    'promptData': {
        'name': 'my-prompt',
        'template': 'Translate {{text}} to {{language}}',
        'model': 'gpt-4'
    }
})

# Results are written to the run's default dataset
items = client.dataset(run['defaultDatasetId']).list_items().items
prompt_id = items[0]['promptId']

# Create new version
client.actor('your-actor-id').call(run_input={
    'operation': 'version',
    'promptId': prompt_id,
    'promptData': {
        'name': 'my-prompt',
        'template': 'Translate the following to {{language}}:\n\n{{text}}'
    },
    'versionInfo': {
        'versionNumber': '1.1.0',
        'changelog': 'Improved formatting'
    }
})
```

JavaScript/TypeScript

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'your-token' });

// A/B test
const run = await client.actor('your-actor-id').call({
    operation: 'test',
    promptId: 'prompt_my_prompt_123',
    testConfig: {
        versions: ['1.0.0', '1.1.0'],
        testCases: [
            { input: { text: 'Hello', language: 'Spanish' } }
        ],
        metrics: ['latency', 'cost', 'quality']
    }
});

// Results are written to the run's default dataset
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log('Winner:', items[0].winner);
```

cURL

```bash
curl -X POST "https://api.apify.com/v2/acts/{actor-id}/runs" \
  -H "Authorization: Bearer {token}" \
  -H "Content-Type: application/json" \
  -d '{
    "operation": "analytics",
    "promptId": "prompt_my_prompt_123",
    "analyticsRange": "7d"
  }'
```

🎯 Best Practices

1. Always Use Semantic Versioning

  • Makes it easy to track breaking changes
  • Clear communication with your team
  • Automated tooling compatibility

2. Write Detailed Changelogs

  • Future you will thank you
  • Team members understand changes
  • Easier to debug issues

3. Use Tags Extensively

  • Group related prompts
  • Easy filtering and search
  • Better organization

4. Test Before Production

  • Always use staging status first
  • Run A/B tests with real data
  • Monitor performance metrics

5. Regular Backups

  • Export library weekly/monthly
  • Store exports in version control
  • Multiple export formats

🔒 Privacy & Data

  • All data stored in Apify Key-Value Store
  • No third-party tracking
  • Full data ownership
  • GDPR compliant
  • No usage limits

📖 API Reference

Create Prompt

Creates a new prompt with version 1.0.0

Update Prompt

Updates the current version in-place (for minor tweaks)

Version Prompt

Creates a new semantic version (recommended for changes)

Compare Versions

Shows diff and performance comparison between versions

A/B Test

Runs multiple versions against test cases, determines winner

Export Library

Exports entire library in chosen format

Import Prompts

Imports prompts from JSON/YAML/CSV

Rollback

Sets a previous version as current (instant rollback)

Analytics

Performance metrics and usage statistics

๐Ÿ› Troubleshooting

Q: My variables aren't being detected

  • Make sure you use {{variableName}} syntax (double curly braces)
  • Variables are auto-detected from the template
  • Or specify them manually in the variables array

Q: Version comparison shows no differences

  • Ensure version numbers are different
  • Check that templates actually differ
  • Verify you're comparing the right versions

Q: A/B test results seem random

  • In the current version, LLM calls are mocked
  • Integrate with your LLM API for real results
  • Use this for testing the workflow

Q: Can I migrate from PromptLayer?

  • Yes! Export from PromptLayer as JSON
  • Use import operation with importSource
  • Format will be auto-detected

🚀 Roadmap

  • Real-time LLM API integration (OpenAI, Anthropic, etc.)
  • Collaborative editing with conflict resolution
  • Automatic performance monitoring
  • Slack/Discord notifications for changes
  • Web UI for non-technical users
  • Git integration (push/pull prompts)
  • Cost optimization recommendations
  • Prompt marketplace/sharing

📊 Success Metrics

Target MAU: 1,200+ users

Use Cases:

  • Startups building AI products (500 MAU)
  • Enterprise AI teams (400 MAU)
  • Indie developers (200 MAU)
  • AI consultants (100 MAU)

Competitive Advantage:

  • ✅ Free vs $49-299/month competitors
  • ✅ Self-hosted data ownership
  • ✅ No vendor lock-in
  • ✅ Semantic versioning
  • ✅ Full export capabilities

๐Ÿ“ License

MIT License - Use freely in your projects!


Built for the Apify $1M Challenge 🚀

Saving AI teams $49-299/month on prompt management tools