Facebook Reels extractor that uses yt-dlp to detect and normalize Facebook Reels into a clean JSON schema. Works as an Apify Actor or locally, supports cookies, proxies and maxItems, and writes consolidated output for downstream pipelines.

πŸ“Ή Facebook Reels Scraper Actor

Effortlessly extract structured metadata from Facebook Reels directly on the Apify platform.


πŸ“– Summary

This Actor takes one or more Facebook Reels URLs and produces structured JSON with details such as title, uploader, view counts, dates, and more. Only Reels are processed β€” other video types are skipped.
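
For illustration, the "only Reels" rule amounts to a URL check: anything that does not look like a Reel link is skipped. The sketch below is an assumption about that behaviour (the function name and regex are hypothetical, not taken from the Actor's source):

import re

# Hypothetical pattern: Facebook Reel URLs contain /reel/<numeric id>
REEL_URL_PATTERN = re.compile(r"facebook\.com/reel/(\d+)")

def is_reel_url(url: str) -> bool:
    # True for Reel links, False for other Facebook video types (e.g. /watch/)
    return REEL_URL_PATTERN.search(url) is not None

print(is_reel_url("https://www.facebook.com/reel/1234567890123456"))  # True
print(is_reel_url("https://www.facebook.com/watch/?v=123456"))        # False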


πŸ’‘ Use cases

  • Social media analytics dashboards
  • Content performance tracking
  • Archiving structured metadata for further research
  • Automating reporting on trending Reels

⚑ Quick Start (Apify Console)

  1. Go to your Actor in Apify Console.

  2. Click Run.

  3. In the Input tab, paste JSON like:

    {
      "startUrls": [
        { "url": "https://www.facebook.com/reel/1234567890123456" }
      ],
      "maxItems": 10
    }
  4. Click Run β€” results will appear in the default Dataset.


⚑ Quick Start (CLI & API)

CLI (apify-cli)

$ apify run -p input.json

Where input.json contains:

{
  "startUrls": [
    { "url": "https://www.facebook.com/reel/1234567890123456" }
  ]
}

API (apify-client in Python)

from apify_client import ApifyClient

client = ApifyClient('<APIFY_TOKEN>')

run = client.actor('<ACTOR_ID>').call(run_input={
    "startUrls": [{"url": "https://www.facebook.com/reel/1234567890123456"}],
    "maxItems": 5
})

# Fetch dataset items
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

πŸ“ Inputs

  • startUrls (array of objects or strings) β€” List of Facebook Reel URLs. Required.
  • cookiesFile (string, optional) β€” Path to uploaded cookies file. Useful if login is required.
  • proxyConfiguration (object, optional) β€” Proxy settings as provided in Apify Console.
  • maxItems (integer, optional) β€” Maximum number of items to scrape.

βš™οΈ Configuration

| πŸ”‘ Name | πŸ“ Type | ❓ Required | βš™οΈ Default | πŸ“Œ Example | πŸ“ Notes |
| --- | --- | --- | --- | --- | --- |
| startUrls | array | βœ… Yes | null | [ {"url": "https://facebook.com/reel/..."} ] | URLs of Reels to scrape |
| cookiesFile | string | ❌ No | null | cookies.txt | Upload via the Apify key-value store |
| proxyConfiguration | object | ❌ No | {} | { "useApifyProxy": true } | Configure via the Console proxy tab |
| maxItems | integer | ❌ No | 0 (no limit) | 50 | Limit on results processed |
| ALL_RESULTS | dataset | Auto | n/a | Dataset tab | Full consolidated results stored under key ALL_RESULTS |

➑️ Example: In Apify Console β†’ Input, paste:

{
  "startUrls": [
    { "url": "https://www.facebook.com/reel/1234567890123456" }
  ]
}

πŸ“€ Outputs

  • Each Reel produces a JSON object with fields like:
{
  "platform": "facebook",
  "content_type": "reel",
  "webpage_url": "https://www.facebook.com/reel/1234567890123456",
  "id": "1234567890123456",
  "title": "Sample Reel",
  "uploader": "Page Name",
  "view_count": "2.3K",
  "like_count": 150,
  "comment_count": 20,
  "timestamp_iso": "2025-09-10T12:34:56Z",
  "thumbnail": "https://...jpg"
}
  • Consolidated results are stored under key ALL_RESULTS in the Key-Value Store.

πŸ”‘ Environment variables

  • APIFY_TOKEN β€” Required to call the Actor via API or CLI.
  • HTTP_PROXY / HTTPS_PROXY β€” Only if using custom external proxies.

▢️ How to Run

In Apify Console

  1. Open Actor β†’ Run.
  2. Configure input JSON in Input tab.
  3. Click Run.

CLI

$ apify call <ACTOR_ID> -p input.json

API

See the Python example above under Quick Start (API).


⏰ Scheduling & Webhooks

  • In Console β†’ Schedule, set periodic runs (e.g., every hour).
  • Add webhooks in Console β†’ Webhooks to notify when a run succeeds/fails.

🐞 Logs & Troubleshooting

  • View logs in the Run detail page.

  • Common issues:

    • No startUrls provided. β†’ Ensure startUrls field is set.
    • Empty dataset β†’ The URL was not a Reel, or access required login/cookies.

πŸ”’ Permissions & Storage

  • Results go to the default Dataset and ALL_RESULTS key in the Key-Value Store.
  • If using cookies, store them securely in Apify key-value storage.

πŸ†• Changelog / Versioning

  • Increment Actor version when input/output schema changes.

πŸ“Œ Notes / TODOs

  • TODO: Confirm if cookiesFile must be uploaded to default key-value store or passed differently.
  • TODO: Clarify maximum recommended startUrls per run (performance consideration).

🌍 Proxy configuration

  • In Apify Console β†’ Run β†’ Proxy, enable Apify Proxy.

  • To use custom proxies: in Actor settings β†’ Environment variables, add:

    • HTTP_PROXY = http://<USER>:<PASS>@<HOST>:<PORT>
    • HTTPS_PROXY = http://<USER>:<PASS>@<HOST>:<PORT>
  • Never hardcode credentials β€” store them as secrets.

  • TODO: Advanced proxy rotation patterns may be added.


🧐 What I inferred from main.py

  • Actor strictly processes Facebook Reels, skips other videos.
  • Inputs: startUrls, cookiesFile, proxyConfiguration, maxItems.
  • Outputs pushed to Dataset and consolidated in Key-Value store under ALL_RESULTS.
  • Network requests are made β†’ included Proxy configuration section.
  • Assumptions marked TODO for cookies handling and max startUrls batch size.