Hugging Face Master

Pricing: from $0.01 / 1,000 results

Unified Apify Actor for the Hugging Face Inference API: access 200K+ AI models for text, image, and audio processing, including text generation with Llama-family LLMs, summarization of long documents, translation across 100+ languages, sentiment analysis, image generation with Stable Diffusion, speech transcription, semantic search, question answering, and classification.

Rating: 0.0 (0)

Developer: John Rippy (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 2
  • Monthly active users: 1
  • Last modified: 6 days ago

Hugging Face Master - Unified AI Inference for 200K+ Models

Access the entire Hugging Face model ecosystem through a single unified API: text generation with Llama and Mistral, image generation with Stable Diffusion, speech-to-text with Whisper, and more. One Actor covers them all, with no ML infrastructure required. Bring your own key (BYOK): authenticate with your Hugging Face API token.

Features

  • Text Generation - LLMs like Llama 2, Mistral, Falcon, GPT-2
  • Summarization - Condense documents with BART, T5, Pegasus
  • Translation - 100+ language pairs with Helsinki-NLP models
  • Sentiment Analysis - Classify text sentiment and emotions
  • Image Generation - Stable Diffusion XL, SDXL Turbo
  • Image Captioning - Generate descriptions with BLIP
  • Image Classification - Identify objects with ViT
  • Object Detection - Locate objects with DETR
  • Speech-to-Text - Transcribe audio with Whisper
  • Text-to-Speech - Generate natural speech
  • Embeddings - Semantic search vectors
  • Zero-Shot Classification - Classify without training
  • Question Answering - Extract answers from context
  • Fill-Mask - Complete sentences with BERT
  • Demo Mode - Test with sample data before going live

Who Should Use This Actor?

AI Application Developers

Access 200K+ models without managing infrastructure. Test different models quickly. Scale from prototype to production.

Content Teams

Generate, summarize, and translate content at scale. Automate repetitive writing tasks. Maintain quality with top models.

Data Scientists

Rapid prototyping without GPU setup. Compare models for your use case. Integrate into data pipelines.

E-commerce Teams

Generate product descriptions. Translate listings. Analyze customer sentiment. Caption product images.

Marketing Agencies

Multilingual content at scale. Sentiment monitoring. AI-powered copywriting. Image generation for campaigns.

Research Teams

Access latest models instantly. Run experiments without infrastructure. Reproducible AI pipelines.

Quick Start

Demo Mode (Free Test)

{
  "task": "text_generation",
  "prompt": "Write a product description for wireless earbuds",
  "demoMode": true
}

Text Generation (LLMs)

{
  "task": "text_generation",
  "apiToken": "hf_your_token_here",
  "model": "mistralai/Mistral-7B-Instruct-v0.1",
  "prompt": "Write a compelling product description for noise-canceling headphones",
  "maxTokens": 256,
  "temperature": 0.7,
  "demoMode": false
}

Image Generation (Stable Diffusion)

{
  "task": "image_generation",
  "apiToken": "hf_your_token_here",
  "model": "stabilityai/stable-diffusion-xl-base-1.0",
  "prompt": "Professional product photo of sleek wireless earbuds, studio lighting",
  "negativePrompt": "blurry, low quality",
  "width": 1024,
  "height": 1024,
  "guidanceScale": 7.5,
  "demoMode": false
}

Speech-to-Text (Whisper)

{
  "task": "speech_to_text",
  "apiToken": "hf_your_token_here",
  "model": "openai/whisper-large-v3",
  "audioUrl": "https://example.com/podcast-episode.mp3",
  "demoMode": false
}

Summarization

{
  "task": "summarization",
  "apiToken": "hf_your_token_here",
  "text": "Your long document text here...",
  "maxTokens": 150,
  "demoMode": false
}

Translation

{
  "task": "translation",
  "apiToken": "hf_your_token_here",
  "text": "Hello, how are you today?",
  "sourceLanguage": "en",
  "targetLanguage": "es",
  "demoMode": false
}

Zero-Shot Classification

{
  "task": "zero_shot_classification",
  "apiToken": "hf_your_token_here",
  "text": "Apple reported record quarterly revenue of $123 billion",
  "candidateLabels": "business,technology,sports,politics",
  "demoMode": false
}

How to Get a Hugging Face API Token

  1. Create a free account at huggingface.co
  2. Go to Settings → Access Tokens
  3. Click "New token" with "read" permissions
  4. Copy the token (starts with hf_)
  5. Paste it into the apiToken field (see the example below)

Note: Free tier includes limited inference. Pro subscription ($9/mo) removes limits.
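
Once you have a token, it goes in the apiToken field of any run input. A minimal sketch (the token and text below are placeholders):

{
  "task": "sentiment_analysis",
  "apiToken": "hf_your_token_here",
  "text": "Great sound quality and the battery lasts all day.",
  "demoMode": false
}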

Input Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| task | string | required | Task to perform (see task list) |
| apiToken | string | - | Your Hugging Face API token |
| model | string | task default | Specific model to use |
| prompt | string | - | Input prompt for generation tasks |
| text | string | - | Input text for processing tasks |
| context | string | - | Context for Q&A tasks |
| question | string | - | Question for Q&A tasks |
| imageUrl | string | - | Image URL for vision tasks |
| audioUrl | string | - | Audio URL for speech tasks |
| sourceLanguage | string | "en" | Source language for translation |
| targetLanguage | string | "es" | Target language for translation |
| candidateLabels | string | - | Comma-separated labels for classification |
| negativePrompt | string | - | What to avoid in image generation |
| maxTokens | number | 256 | Max tokens to generate |
| temperature | number | 0.7 | Randomness (0.0-1.0) |
| topP | number | 0.9 | Nucleus sampling threshold |
| width | number | 1024 | Image width |
| height | number | 1024 | Image height |
| guidanceScale | number | 7.5 | CFG scale for image generation |
| numInferenceSteps | number | 50 | Diffusion steps |
| waitForModel | boolean | true | Wait for model to load |
| useCache | boolean | true | Use cached results |
| webhookUrl | string | - | Webhook URL for results |
| demoMode | boolean | true | Return sample data |
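
To illustrate how these parameters combine, here is a sketch of a text generation input that sets the sampling and caching options explicitly; the prompt and token are placeholders, and every field shown comes from the table above:

{
  "task": "text_generation",
  "apiToken": "hf_your_token_here",
  "model": "mistralai/Mistral-7B-Instruct-v0.1",
  "prompt": "List three benefits of noise-canceling headphones",
  "maxTokens": 256,
  "temperature": 0.7,
  "topP": 0.9,
  "waitForModel": true,
  "useCache": false,
  "demoMode": false
}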

Available Tasks

| Task | Description | Default Model |
|------|-------------|---------------|
| text_generation | Generate text from prompt | Mistral-7B-Instruct |
| summarization | Summarize long text | BART-large-CNN |
| translation | Translate between languages | Helsinki-NLP OPUS |
| sentiment_analysis | Analyze sentiment | DistilBERT-SST2 |
| text_classification | Classify text | DistilBERT-SST2 |
| question_answering | Answer questions from context | RoBERTa-SQuAD2 |
| fill_mask | Complete masked sentences | BERT-base |
| embeddings | Generate vector embeddings | all-MiniLM-L6-v2 |
| zero_shot_classification | Classify without training | BART-large-MNLI |
| image_generation | Generate images from text | Stable Diffusion XL |
| image_to_text | Caption images | BLIP-large |
| image_classification | Classify images | ViT-base |
| object_detection | Detect objects in images | DETR-ResNet-50 |
| speech_to_text | Transcribe audio | Whisper-large-v3 |
| text_to_speech | Generate speech | MMS-TTS |
| audio_classification | Classify audio | AST-AudioSet |
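
Tasks not shown in the Quick Start follow the same input shape. For example, a question-answering run pairs question with context, and an image-captioning run takes an imageUrl; both sketches below use placeholder values:

{
  "task": "question_answering",
  "apiToken": "hf_your_token_here",
  "question": "How long is the battery life?",
  "context": "The headphones offer 30-hour battery life and recharge fully in under two hours.",
  "demoMode": false
}

{
  "task": "image_to_text",
  "apiToken": "hf_your_token_here",
  "imageUrl": "https://example.com/product-photo.jpg",
  "demoMode": false
}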

Output Format

Text Generation

{
  "success": true,
  "model": "mistralai/Mistral-7B-Instruct-v0.1",
  "generated_text": "The Sony WH-1000XM5 delivers an immersive audio experience...",
  "parameters": {
    "max_tokens": 256,
    "temperature": 0.7
  }
}

Image Generation

{
  "success": true,
  "model": "stabilityai/stable-diffusion-xl-base-1.0",
  "imageBase64": "iVBORw0KGgoAAAANSUhEUgAAA...",
  "mimeType": "image/png",
  "prompt": "Professional product photo..."
}

Speech-to-Text

{
  "success": true,
  "model": "openai/whisper-large-v3",
  "transcription": "Hello and welcome to our presentation today...",
  "audioUrl": "https://example.com/audio.mp3"
}

Sentiment Analysis

{
  "success": true,
  "model": "distilbert-base-uncased-finetuned-sst-2-english",
  "sentiment": [
    {"label": "POSITIVE", "score": 0.89},
    {"label": "NEGATIVE", "score": 0.11}
  ]
}

Pricing (Pay-Per-Event)

| Event | Description | Price |
|-------|-------------|-------|
| inference_completed | Per successful inference | $0.01 |

Example costs:

  • 100 text generations: 100 × $0.01 = $1.00
  • 50 image generations: 50 × $0.01 = $0.50
  • 200 sentiment analyses: 200 × $0.01 = $2.00
  • Demo mode: $0.00

Note: Hugging Face may charge separately for inference on Pro-only models.

Cost Comparison

| Tool | Monthly Cost | This Actor (1000 calls) |
|------|--------------|-------------------------|
| Replicate | ~$50/mo | ~$10/mo |
| Banana.dev | ~$40/mo | ~$10/mo |
| RunPod | ~$30/mo | ~$10/mo |

Common Scenarios

Scenario 1: Product Content Generation

{
  "task": "text_generation",
  "apiToken": "hf_your_token",
  "model": "mistralai/Mistral-7B-Instruct-v0.1",
  "prompt": "Write a compelling 100-word product description for: Premium wireless noise-canceling headphones with 30-hour battery life",
  "maxTokens": 150,
  "temperature": 0.8,
  "demoMode": false
}

Scenario 2: Multilingual Support

{
  "task": "translation",
  "apiToken": "hf_your_token",
  "text": "Our product ships worldwide with free returns",
  "sourceLanguage": "en",
  "targetLanguage": "de",
  "demoMode": false
}

Scenario 3: Customer Feedback Analysis

{
  "task": "sentiment_analysis",
  "apiToken": "hf_your_token",
  "text": "This product exceeded my expectations! Great quality and fast shipping.",
  "demoMode": false
}

Scenario 4: Podcast Transcription

{
  "task": "speech_to_text",
  "apiToken": "hf_your_token",
  "audioUrl": "https://example.com/podcast.mp3",
  "model": "openai/whisper-large-v3",
  "webhookUrl": "https://hooks.zapier.com/...",
  "demoMode": false
}

Webhook & Automation Integration

Zapier / Make.com / n8n

  1. Create a webhook trigger
  2. Copy the URL to webhookUrl
  3. Process AI results in your workflow (see the example below)
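
For example, a sentiment run that posts its result to a Zapier webhook for ticket routing might look like this (the URL, token, and text are placeholders):

{
  "task": "sentiment_analysis",
  "apiToken": "hf_your_token_here",
  "text": "My order arrived damaged and support has not replied.",
  "webhookUrl": "https://hooks.zapier.com/...",
  "demoMode": false
}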

Popular automations:

  • Generated content → CMS publishing
  • Sentiment analysis → Support ticket routing
  • Transcriptions → Meeting notes in Notion
  • Image generation → Social media scheduling

Hugging Face AI Suite

| Actor | Best For |
|-------|----------|
| Hugging Face Master | All-in-one unified access |
| Hugging Face Text | Text-only processing (smaller) |
| Hugging Face Image | Image-only processing (smaller) |
| Hugging Face Audio | Audio-only processing (smaller) |
| Hugging Face Hub | Model/dataset discovery |

FAQ

Q: Which models are free to use?

A: Most models on Hugging Face are free with a token. Some popular models (like Llama 2) require accepting terms. Pro-only models need a $9/mo subscription.

Q: How do I choose the right model?

A: Start with defaults - they're optimized for quality/speed. Check Hugging Face leaderboards for task-specific recommendations.

Q: What's the max input/output size?

A: Varies by model. Most text models support 4K-8K tokens. Whisper supports audio up to 30 seconds (chunk longer files).

Q: Are results cached?

A: Yes, by default. Set useCache: false for fresh results each time.
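
For example, to force a fresh generation on every run (a sketch with placeholder values):

{
  "task": "text_generation",
  "apiToken": "hf_your_token_here",
  "prompt": "Write a short tagline for a travel app",
  "useCache": false,
  "demoMode": false
}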

Q: Can I use custom/fine-tuned models?

A: Yes! Specify any public model ID in the model parameter. Private models work with the right API token.
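
For example, pointing the model parameter at a fine-tuned checkpoint; the model ID below is purely a placeholder, and any public Hugging Face model ID works the same way:

{
  "task": "text_generation",
  "apiToken": "hf_your_token_here",
  "model": "your-username/your-finetuned-model",
  "prompt": "Draft a friendly reply about a delayed order",
  "demoMode": false
}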

Common Problems & Solutions

"Model is loading"

  • Large models need time to load. Set waitForModel: true (default)
  • For faster responses, use smaller models or models with "always-on" endpoints

"Rate limit exceeded"

  • Free tier has limits. Wait a few minutes or upgrade to HF Pro
  • Reduce concurrent requests

"Invalid API token"

  • Check token starts with hf_
  • Ensure token has "read" permission
  • Try generating a new token

"Demo data showing"

  • Set demoMode: false
  • Provide your Hugging Face API token

Built by John Rippy | Actor Arsenal