Face Recognition API Actor
Advanced facial analysis and identification Actor for the Apify platform. Process images and videos to detect, analyze, and match human faces using state-of-the-art computer vision algorithms powered by DeepFace.
🎯 Key Features
- Real-time Face Detection: Achieves high accuracy rates (up to 98.4%) in identifying faces
- Facial Landmark Identification: Provides detailed feature mapping for each detected face
- Emotion & Demographic Analysis: Estimates age, gender, and emotional states
- Batch Processing: Handles multiple images or video files simultaneously
- Custom Database Face Matching: Match faces against user-defined databases (1:N matching)
- Versatile Media Extraction: Supports various image formats and video processing
- Privacy Compliant: Secure data handling with privacy best practices
🚀 Quick Start
Basic Usage
{"inputType": "imageUrls","imageUrls": ["https://example.com/photo1.jpg","https://example.com/photo2.jpg"],"performFaceDetection": true,"performFacialAnalysis": true,"detectorBackend": "opencv","recognitionModel": "Facenet512"}
Video Processing
{"inputType": "videoUrls","videoUrls": ["https://example.com/video.mp4"],"videoFrameSamplingRate": 1,"performFaceDetection": true,"performFacialAnalysis": true}
Database Matching
{"inputType": "imageUrls","imageUrls": ["https://example.com/target.jpg"],"performDatabaseMatching": true,"databaseKeyValueStoreName": "my-face-database","databaseSimilarityThreshold": 0.6}
📊 Input Parameters
Input Source
| Parameter | Type | Description | Default |
|---|---|---|---|
| inputType | string | Type of input: imageUrls, videoUrls, or datasetId | imageUrls |
| imageUrls | array | List of image URLs to process | [] |
| videoUrls | array | List of video URLs to process | [] |
| datasetId | string | Apify Dataset ID containing images | - |
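
As a concrete illustration of the `datasetId` input type from the table above, here is a minimal sketch using the `apify-client` package. The actor ID and dataset ID are placeholders, and the exact item schema the Actor expects from the dataset is not documented here, so treat this as an assumption rather than a reference:

```javascript
const { ApifyClient } = require('apify-client');

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

// Start a run that reads images from an existing Apify Dataset instead of
// an explicit list of URLs. 'my-reference-images' is a hypothetical dataset
// ID used purely for illustration.
const run = await client.actor('YOUR_ACTOR_ID').call({
  inputType: 'datasetId',
  datasetId: 'my-reference-images',
  performFaceDetection: true,
  performFacialAnalysis: true,
});

console.log(`Run ${run.id} finished with status ${run.status}`);
```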
Detection & Recognition
| Parameter | Type | Description | Default |
|---|---|---|---|
| detectorBackend | string | Face detector: opencv (fast), retinaface (accurate), mtcnn, ssd, dlib | opencv |
| recognitionModel | string | Recognition model: Facenet512, VGG-Face, ArcFace, etc. | Facenet512 |
| detectionConfidence | number | Minimum confidence threshold (0.0-1.0) | 0.9 |
Feature Flags
| Parameter | Type | Description | Default |
|---|---|---|---|
| performFaceDetection | boolean | Enable face detection | true |
| performFacialAnalysis | boolean | Analyze age, gender, emotion, race | true |
| performLandmarkDetection | boolean | Detect facial landmarks | false |
| performDatabaseMatching | boolean | Match against custom database | false |
Database Settings
| Parameter | Type | Description | Default |
|---|---|---|---|
| databaseKeyValueStoreName | string | Name of Key-Value Store for face database | face-database |
| databaseSimilarityThreshold | number | Similarity threshold for matching (0.0-1.0) | 0.6 |
Processing Options
| Parameter | Type | Description | Default |
|---|---|---|---|
| videoFrameSamplingRate | integer | Frames per second to extract from videos | 1 |
| maxFacesPerImage | integer | Maximum faces to process per image (0 = unlimited) | 0 |
| batchSize | integer | Number of images to process in parallel | 5 |
| maxResults | integer | Maximum number of images to process (0 = unlimited) | 0 |
| saveAnnotatedImages | boolean | Save images with bounding boxes | false |
| saveFaceCrops | boolean | Save individual face crops | false |
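
To show how the processing options above combine, here is an illustrative input object (written as JavaScript in the same shape as the JSON inputs in the Quick Start). The values are examples to tune for your own workload, not recommended defaults:

```javascript
// Example input combining the processing options documented above.
const input = {
  inputType: 'videoUrls',
  videoUrls: ['https://example.com/video.mp4'],
  videoFrameSamplingRate: 2,    // extract two frames per second of video
  maxFacesPerImage: 10,         // process at most 10 faces per image/frame
  batchSize: 5,                 // process five images in parallel
  maxResults: 100,              // stop after 100 images/frames
  saveAnnotatedImages: false,   // skip annotated image output to save storage
  saveFaceCrops: false,         // skip individual face crops
};
```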
📤 Output Format
Image Result
{"imageUrl": "https://example.com/photo.jpg","timestamp": "2024-11-19T21:00:00.000Z","processingTimeSeconds": 2.34,"facesDetected": 2,"faces": [{"faceId": 0,"boundingBox": {"x": 100,"y": 150,"w": 120,"h": 140},"detectionConfidence": 0.99,"analysis": {"age": 28,"gender": {"class": "Woman","confidence": 98.5},"emotion": {"class": "happy","confidence": 87.3},"race": {"class": "white","confidence": 76.2}},"databaseMatches": [{"databaseId": "person_123","name": "John Doe","similarity": 0.89,"distance": 0.11}]}]}
Summary Statistics
The Actor also saves a summary with aggregated statistics:
{"totalImagesProcessed": 10,"totalFacesDetected": 23,"averageFacesPerImage": 2.3,"totalProcessingTimeSeconds": 45.6,"averageProcessingTimePerImage": 4.56,"emotionDistribution": {"happy": 12,"neutral": 8,"sad": 3},"genderDistribution": {"Woman": 14,"Man": 9},"ageStatistics": {"averageAge": 32.4,"minAge": 18,"maxAge": 65}}
🎯 Use Cases
Security & Access Control
- Building access control systems
- Identity verification for secure areas
- Employee attendance tracking
- Surveillance footage analysis
Marketing & Demographics
- Analyze customer demographics from store cameras
- Measure audience engagement at events
- A/B testing with facial emotion detection
- Target audience analysis from visual content
Social Media & Content Moderation
- Automated face blurring for privacy
- Content categorization by demographics
- Finding specific individuals in large photo collections
- Duplicate face detection
Law Enforcement
- Suspect identification from surveillance
- Missing person searches
- Witness identification
- Evidence processing
🗄️ Custom Face Database
Creating a Database
To match faces against a custom database, you'll need to populate a Key-Value Store with face embeddings:
Option A: Use this Actor to build a database

- Process reference images with `performDatabaseMatching: false`
- Extract embeddings from the results
- Store them in a Key-Value Store

Option B: Use the Apify API

```javascript
const { Actor } = require('apify');

const store = await Actor.openKeyValueStore('my-face-database');
await store.setValue('face_john_doe', {
  id: 'john_doe',
  name: 'John Doe',
  embedding: [...], // 128/512-dimensional vector
  metadata: { department: 'Engineering' }
});
```
Database Format
Each face entry should follow this structure:
{"id": "unique_id","name": "Person Name","embedding": [0.123, -0.456, ...],"embedding_dimension": 512,"metadata": {"custom_field": "value"}}
⚙️ Model Selection
Detector Backends
| Backend | Speed | Accuracy | Best For |
|---|---|---|---|
| opencv | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | Real-time processing, large batches |
| retinaface | ⭐⭐ | ⭐⭐⭐⭐⭐ | High accuracy requirements |
| mtcnn | ⭐⭐ | ⭐⭐⭐⭐⭐ | Challenging lighting/angles |
| ssd | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Balanced performance |
| dlib | ⭐⭐⭐ | ⭐⭐⭐⭐ | General purpose |
Recognition Models
| Model | Accuracy | Speed | Dimension | Best For |
|---|---|---|---|---|
| Facenet512 | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | 512 | Recommended - Best overall accuracy |
| ArcFace | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | 512 | High accuracy matching |
| VGG-Face | ⭐⭐⭐⭐ | ⭐⭐ | 2622 | Research applications |
| Facenet | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | 128 | Fast processing |
| OpenFace | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | 128 | Speed-critical applications |
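
Drawing on the two tables above, here is an illustrative way to express an accuracy-oriented and a speed-oriented pairing as Actor input (JavaScript objects in the same shape as the JSON inputs; everything besides the backend and model names is an example value):

```javascript
// Highest accuracy: slower detection and matching, best results.
const highAccuracyInput = {
  inputType: 'imageUrls',
  imageUrls: ['https://example.com/photo.jpg'],
  detectorBackend: 'retinaface',  // most accurate detector in the table above
  recognitionModel: 'ArcFace',    // 512-dimensional, high-accuracy matching
  detectionConfidence: 0.9,
};

// Speed-optimized: suited to real-time processing and large batches.
const speedOptimizedInput = {
  inputType: 'imageUrls',
  imageUrls: ['https://example.com/photo.jpg'],
  detectorBackend: 'opencv',      // fastest backend
  recognitionModel: 'OpenFace',   // 128-dimensional, speed-critical applications
};
```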
🔒 Privacy & Compliance
Important Considerations
⚠️ WARNING: This Actor processes biometric data. Ensure compliance with applicable privacy regulations:
- GDPR (Europe): Obtain explicit consent, implement data minimization
- CCPA (California): Provide opt-out mechanisms, disclose data usage
- BIPA (Illinois): Get written consent before collecting biometric data
- Other jurisdictions: Check local laws
Best Practices
- Obtain Consent: Always get explicit permission before processing faces
- Data Minimization: Only collect/process data necessary for your use case
- Secure Storage: Use Apify's encrypted storage for sensitive data
- Retention Policies: Define and enforce data retention limits
- Access Controls: Restrict access to face data and databases
- Anonymization: Consider anonymizing results when possible
🔧 Integration Examples
JavaScript/Node.js
```javascript
const { ApifyClient } = require('apify-client');

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('YOUR_ACTOR_ID').call({
  inputType: 'imageUrls',
  imageUrls: ['https://example.com/photo.jpg'],
  performFacialAnalysis: true
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('YOUR_ACTOR_ID').call(run_input={
    'inputType': 'imageUrls',
    'imageUrls': ['https://example.com/photo.jpg'],
    'performFacialAnalysis': True
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(items)
```
cURL
```bash
curl -X POST https://api.apify.com/v2/acts/YOUR_ACTOR_ID/runs \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "inputType": "imageUrls",
    "imageUrls": ["https://example.com/photo.jpg"],
    "performFacialAnalysis": true
  }'
```
📈 Performance
Processing Times (Approximate)
| Configuration | Images/minute | Notes |
|---|---|---|
| OpenCV + Facenet512 | 30-40 | Recommended for most use cases |
| RetinaFace + ArcFace | 10-15 | Highest accuracy |
| OpenCV + OpenFace | 50-60 | Speed-optimized |
Resource Requirements
- Memory: Minimum 2048 MB (4096 MB recommended for large batches)
- Compute: Scales with image resolution and number of faces
🐛 Troubleshooting
No Faces Detected
- Lower the `detectionConfidence` threshold
- Try a different `detectorBackend` (e.g., `retinaface` for difficult images)
- Ensure images are clear and faces are visible
- Check image resolution (very small faces may not be detected)
Poor Recognition Accuracy
- Use the `Facenet512` or `ArcFace` model for best accuracy
- Ensure the reference database contains good-quality images
- Adjust `databaseSimilarityThreshold` (higher = stricter matching)
- Verify face alignment and image quality
Slow Processing
- Use the `opencv` detector for faster processing
- Reduce `batchSize` if running out of memory
- Use the `OpenFace` model for faster recognition
- Process videos at a lower `videoFrameSamplingRate`
Memory Issues
- Reduce `batchSize`
- Set `maxFacesPerImage` to limit processing
- Disable `saveAnnotatedImages` and `saveFaceCrops`
- Process images in smaller batches (see the example configuration below)
📚 Additional Resources
- DeepFace Documentation
- Apify Documentation
- Face Recognition Best Practices
- Privacy Regulations Guide
🤝 Support
For issues, questions, or feature requests:
- Open an issue on GitHub
- Contact Apify support
- Check the Apify Community Forum
📝 License
Apache-2.0
Built with ❤️ using Apify and DeepFace