Face Recognition Api

A comprehensive Apify Actor that provides advanced facial analysis and identification capabilities using DeepFace and state-of-the-art computer vision models.

Pricing: Pay per event
Developer: christopher athans crow (Maintained by Community)
Last modified: 2 days ago

Face Recognition API Actor


Advanced facial analysis and identification Actor for the Apify platform. Process images and videos to detect, analyze, and match human faces using state-of-the-art computer vision algorithms powered by DeepFace.

🎯 Key Features

  • Real-time Face Detection: Achieves high accuracy rates (up to 98.4%) in identifying faces
  • Facial Landmark Identification: Provides detailed feature mapping for each detected face
  • Emotion & Demographic Analysis: Estimates age, gender, and emotional states
  • Batch Processing: Handles multiple images or video files simultaneously
  • Custom Database Face Matching: Match faces against user-defined databases (1:N matching)
  • Versatile Media Extraction: Supports various image formats and video processing
  • Privacy Compliant: Secure data handling with privacy best practices

🚀 Quick Start

Basic Usage

{
  "inputType": "imageUrls",
  "imageUrls": [
    "https://example.com/photo1.jpg",
    "https://example.com/photo2.jpg"
  ],
  "performFaceDetection": true,
  "performFacialAnalysis": true,
  "detectorBackend": "opencv",
  "recognitionModel": "Facenet512"
}

Video Processing

{
  "inputType": "videoUrls",
  "videoUrls": ["https://example.com/video.mp4"],
  "videoFrameSamplingRate": 1,
  "performFaceDetection": true,
  "performFacialAnalysis": true
}
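As a rough guide to video processing cost: at a given `videoFrameSamplingRate` (frames per second), a video yields approximately duration × rate frames, each of which is analyzed like a standalone image. The helper below is a hypothetical sketch for estimating that workload, not part of the Actor's API.

```python
def estimated_frames(duration_seconds: float, sampling_rate: int = 1) -> int:
    """Approximate number of frames the Actor will extract and analyze."""
    return int(duration_seconds * sampling_rate)

# A 2-minute clip sampled at 1 fps yields about 120 frames to process;
# raising the rate to 5 fps multiplies the work accordingly.
print(estimated_frames(120, 1))   # 120
print(estimated_frames(120, 5))   # 600
```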

Database Matching

{
  "inputType": "imageUrls",
  "imageUrls": ["https://example.com/target.jpg"],
  "performDatabaseMatching": true,
  "databaseKeyValueStoreName": "my-face-database",
  "databaseSimilarityThreshold": 0.6
}

📊 Input Parameters

Input Source

| Parameter | Type | Description | Default |
|---|---|---|---|
| inputType | string | Type of input: imageUrls, videoUrls, or datasetId | imageUrls |
| imageUrls | array | List of image URLs to process | [] |
| videoUrls | array | List of video URLs to process | [] |
| datasetId | string | Apify Dataset ID containing images | - |

Detection & Recognition

| Parameter | Type | Description | Default |
|---|---|---|---|
| detectorBackend | string | Face detector: opencv (fast), retinaface (accurate), mtcnn, ssd, dlib | opencv |
| recognitionModel | string | Recognition model: Facenet512, VGG-Face, ArcFace, etc. | Facenet512 |
| detectionConfidence | number | Minimum confidence threshold (0.0-1.0) | 0.9 |

Feature Flags

| Parameter | Type | Description | Default |
|---|---|---|---|
| performFaceDetection | boolean | Enable face detection | true |
| performFacialAnalysis | boolean | Analyze age, gender, emotion, race | true |
| performLandmarkDetection | boolean | Detect facial landmarks | false |
| performDatabaseMatching | boolean | Match against custom database | false |

Database Settings

| Parameter | Type | Description | Default |
|---|---|---|---|
| databaseKeyValueStoreName | string | Name of Key-Value Store for face database | face-database |
| databaseSimilarityThreshold | number | Similarity threshold for matching (0.0-1.0) | 0.6 |
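To make the threshold concrete: a database candidate counts as a match when its similarity to the probe embedding meets or exceeds `databaseSimilarityThreshold`. The sketch below assumes cosine similarity (consistent with the `similarity`/`distance` pair in the output example, where distance = 1 − similarity); the exact metric may vary with the recognition model, and `is_match` is a hypothetical helper, not the Actor's internal code.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(embedding, candidate, threshold=0.6):
    """Accept a candidate when similarity meets the threshold."""
    return cosine_similarity(embedding, candidate) >= threshold

probe = [0.1, 0.8, 0.3]
same  = [0.1, 0.8, 0.3]
other = [0.9, -0.2, 0.1]
print(is_match(probe, same))    # True
print(is_match(probe, other))   # False
```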

Processing Options

| Parameter | Type | Description | Default |
|---|---|---|---|
| videoFrameSamplingRate | integer | Frames per second to extract from videos | 1 |
| maxFacesPerImage | integer | Maximum faces to process per image (0 = unlimited) | 0 |
| batchSize | integer | Number of images to process in parallel | 5 |
| maxResults | integer | Maximum number of images to process (0 = unlimited) | 0 |
| saveAnnotatedImages | boolean | Save images with bounding boxes | false |
| saveFaceCrops | boolean | Save individual face crops | false |
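Batching works the way you would expect: the input list is split into groups of `batchSize` images that are processed in parallel. A minimal sketch of that chunking (the `batched` helper is illustrative, not the Actor's code):

```python
def batched(items, batch_size=5):
    """Split a list of image URLs into batches of at most batch_size."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

urls = [f"https://example.com/photo{i}.jpg" for i in range(12)]
for batch in batched(urls, 5):
    print(len(batch))   # 5, 5, 2
```

Lowering `batchSize` trades throughput for a smaller memory footprint, which is why the Troubleshooting section suggests reducing it when memory runs out.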

📤 Output Format

Image Result

{
  "imageUrl": "https://example.com/photo.jpg",
  "timestamp": "2024-11-19T21:00:00.000Z",
  "processingTimeSeconds": 2.34,
  "facesDetected": 2,
  "faces": [
    {
      "faceId": 0,
      "boundingBox": {
        "x": 100,
        "y": 150,
        "w": 120,
        "h": 140
      },
      "detectionConfidence": 0.99,
      "analysis": {
        "age": 28,
        "gender": {
          "class": "Woman",
          "confidence": 98.5
        },
        "emotion": {
          "class": "happy",
          "confidence": 87.3
        },
        "race": {
          "class": "white",
          "confidence": 76.2
        }
      },
      "databaseMatches": [
        {
          "databaseId": "person_123",
          "name": "John Doe",
          "similarity": 0.89,
          "distance": 0.11
        }
      ]
    }
  ]
}
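Once fetched from the run's dataset, a result like the one above is plain JSON and easy to post-process. A hypothetical example (the sample values are abbreviated from the format above): keep only confidently detected faces, then read off their dominant emotions.

```python
# Abbreviated image result in the Actor's output format.
result = {
    "imageUrl": "https://example.com/photo.jpg",
    "facesDetected": 2,
    "faces": [
        {"faceId": 0, "detectionConfidence": 0.99,
         "analysis": {"emotion": {"class": "happy", "confidence": 87.3}}},
        {"faceId": 1, "detectionConfidence": 0.72,
         "analysis": {"emotion": {"class": "neutral", "confidence": 55.0}}},
    ],
}

# Keep only confidently detected faces, then pull out dominant emotions.
confident = [f for f in result["faces"] if f["detectionConfidence"] >= 0.9]
emotions = [f["analysis"]["emotion"]["class"] for f in confident]
print(emotions)   # ['happy']
```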

Summary Statistics

The Actor also saves a summary with aggregated statistics:

{
  "totalImagesProcessed": 10,
  "totalFacesDetected": 23,
  "averageFacesPerImage": 2.3,
  "totalProcessingTimeSeconds": 45.6,
  "averageProcessingTimePerImage": 4.56,
  "emotionDistribution": {
    "happy": 12,
    "neutral": 8,
    "sad": 3
  },
  "genderDistribution": {
    "Woman": 14,
    "Man": 9
  },
  "ageStatistics": {
    "averageAge": 32.4,
    "minAge": 18,
    "maxAge": 65
  }
}
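If you need to recompute or extend these aggregates yourself, they follow directly from the per-face analysis fields. A sketch with made-up sample data (field names match the output format above):

```python
from collections import Counter

# Flattened per-face analyses, e.g. collected across all dataset items.
faces = [
    {"age": 28, "emotion": "happy"},
    {"age": 45, "emotion": "happy"},
    {"age": 18, "emotion": "sad"},
]

emotion_distribution = dict(Counter(f["emotion"] for f in faces))
ages = [f["age"] for f in faces]
age_statistics = {
    "averageAge": round(sum(ages) / len(ages), 1),
    "minAge": min(ages),
    "maxAge": max(ages),
}
print(emotion_distribution)  # {'happy': 2, 'sad': 1}
print(age_statistics)        # {'averageAge': 30.3, 'minAge': 18, 'maxAge': 45}
```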

🎯 Use Cases

Security & Access Control

  • Building access control systems
  • Identity verification for secure areas
  • Employee attendance tracking
  • Surveillance footage analysis

Marketing & Demographics

  • Analyze customer demographics from store cameras
  • Measure audience engagement at events
  • A/B testing with facial emotion detection
  • Target audience analysis from visual content

Social Media & Content Moderation

  • Automated face blurring for privacy
  • Content categorization by demographics
  • Finding specific individuals in large photo collections
  • Duplicate face detection

Law Enforcement

  • Suspect identification from surveillance
  • Missing person searches
  • Witness identification
  • Evidence processing

🗄️ Custom Face Database

Creating a Database

To match faces against a custom database, you'll need to populate a Key-Value Store with face embeddings:

  1. Option A: Use this Actor to build a database

    • Process reference images with performDatabaseMatching: false
    • Extract embeddings from the results
    • Store them in a Key-Value Store
  2. Option B: Use the Apify API

    const { Actor } = require('apify');

    const store = await Actor.openKeyValueStore('my-face-database');
    await store.setValue('face_john_doe', {
      id: 'john_doe',
      name: 'John Doe',
      embedding: [...], // 128/512-dimensional vector
      metadata: { department: 'Engineering' }
    });
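If you work in Python, roughly the same thing can be done with the apify-client package. This is a sketch, not official tooling: `make_face_record` is a hypothetical helper that builds an entry in the format described under Database Format below, and the store name, token, and field values are placeholders.

```python
def make_face_record(person_id, name, embedding, metadata=None):
    """Build one face-database entry in the format this Actor expects."""
    return {
        "id": person_id,
        "name": name,
        "embedding": embedding,
        "embedding_dimension": len(embedding),
        "metadata": metadata or {},
    }

record = make_face_record("john_doe", "John Doe", [0.123] * 512,
                          {"department": "Engineering"})
print(record["embedding_dimension"])  # 512

# Then upload it (requires an API token and `pip install apify-client`):
#   from apify_client import ApifyClient
#   client = ApifyClient("YOUR_API_TOKEN")
#   store = client.key_value_stores().get_or_create(name="my-face-database")
#   client.key_value_store(store["id"]).set_record("face_john_doe", record)
```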

Database Format

Each face entry should follow this structure:

{
  "id": "unique_id",
  "name": "Person Name",
  "embedding": [0.123, -0.456, ...],
  "embedding_dimension": 512,
  "metadata": {
    "custom_field": "value"
  }
}

⚙️ Model Selection

Detector Backends

| Backend | Speed | Accuracy | Best For |
|---|---|---|---|
| opencv | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | Real-time processing, large batches |
| retinaface | ⭐⭐ | ⭐⭐⭐⭐⭐ | High accuracy requirements |
| mtcnn | ⭐⭐⭐ | ⭐⭐⭐⭐ | Challenging lighting/angles |
| ssd | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Balanced performance |
| dlib | ⭐⭐⭐ | ⭐⭐⭐⭐ | General purpose |

Recognition Models

| Model | Accuracy | Speed | Dimension | Best For |
|---|---|---|---|---|
| Facenet512 | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | 512 | Recommended - Best overall accuracy |
| ArcFace | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | 512 | High accuracy matching |
| VGG-Face | ⭐⭐⭐⭐ | ⭐⭐ | 2622 | Research applications |
| Facenet | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | 128 | Fast processing |
| OpenFace | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | 128 | Speed-critical applications |

🔒 Privacy & Compliance

Important Considerations

⚠️ WARNING: This Actor processes biometric data. Ensure compliance with applicable privacy regulations:

  • GDPR (Europe): Obtain explicit consent, implement data minimization
  • CCPA (California): Provide opt-out mechanisms, disclose data usage
  • BIPA (Illinois): Get written consent before collecting biometric data
  • Other jurisdictions: Check local laws

Best Practices

  1. Obtain Consent: Always get explicit permission before processing faces
  2. Data Minimization: Only collect/process data necessary for your use case
  3. Secure Storage: Use Apify's encrypted storage for sensitive data
  4. Retention Policies: Define and enforce data retention limits
  5. Access Controls: Restrict access to face data and databases
  6. Anonymization: Consider anonymizing results when possible

🔧 Integration Examples

JavaScript/Node.js

const { ApifyClient } = require('apify-client');

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('YOUR_ACTOR_ID').call({
  inputType: 'imageUrls',
  imageUrls: ['https://example.com/photo.jpg'],
  performFacialAnalysis: true
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);

Python

from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('YOUR_ACTOR_ID').call(run_input={
    'inputType': 'imageUrls',
    'imageUrls': ['https://example.com/photo.jpg'],
    'performFacialAnalysis': True
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(items)

cURL

curl -X POST https://api.apify.com/v2/acts/YOUR_ACTOR_ID/runs \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "inputType": "imageUrls",
    "imageUrls": ["https://example.com/photo.jpg"],
    "performFacialAnalysis": true
  }'

📈 Performance

Processing Times (Approximate)

| Configuration | Images/minute | Notes |
|---|---|---|
| OpenCV + Facenet512 | 30-40 | Recommended for most use cases |
| RetinaFace + ArcFace | 10-15 | Highest accuracy |
| OpenCV + OpenFace | 50-60 | Speed-optimized |

Resource Requirements

  • Memory: Minimum 2048 MB (4096 MB recommended for large batches)
  • Compute: Scales with image resolution and number of faces

🐛 Troubleshooting

No Faces Detected

  • Lower detectionConfidence threshold
  • Try different detectorBackend (e.g., retinaface for difficult images)
  • Ensure images are clear and faces are visible
  • Check image resolution (very small faces may not be detected)

Poor Recognition Accuracy

  • Use Facenet512 or ArcFace models for best accuracy
  • Ensure reference database has good quality images
  • Adjust databaseSimilarityThreshold (higher = more strict; raise it to reject weak matches)
  • Verify face alignment and image quality

Slow Processing

  • Use opencv detector for faster processing
  • Reduce batchSize if running out of memory
  • Use OpenFace model for faster recognition
  • Process videos at lower videoFrameSamplingRate

Memory Issues

  • Reduce batchSize
  • Set maxFacesPerImage to limit processing
  • Disable saveAnnotatedImages and saveFaceCrops
  • Process images in smaller batches

📚 Additional Resources

🤝 Support

For issues, questions, or feature requests:

📝 License

Apache-2.0


Built with ❤️ using Apify and DeepFace