Facebook Pages Scraper

Scrape public Facebook Pages — extract followers, likes, contact info, website, description, hours, and more. No login required. Powered by residential proxies for reliable access.

Pricing: Pay per event
Developer: Stas Persiianenko (Maintained by Community)

Extract public data from Facebook Pages — followers, likes, contact info, website, description, and more. No login required. Works with any public Facebook Page URL.

🚀 What does it do?

Facebook Pages Scraper loads public Facebook Pages and extracts structured data from each page, including:

  • Page identity: name, page ID, URL, verified status
  • Engagement: follower count, like count, check-in count
  • Contact: website, email, phone number
  • Description: about text, category
  • Media: profile picture URL, cover photo URL
  • Location: address, city, state, country
  • Social links: Instagram, Twitter/X, YouTube
  • Recent posts (optional): text preview from latest posts

👥 Who is it for?

Market researchers

Track competitor Facebook pages, monitor follower growth, and analyze engagement metrics across brands and industries.

Lead generation agencies

Extract contact information (website, email, phone) from business Facebook pages at scale.

Social media managers

Audit Facebook Pages for clients — check follower counts, verify contact info, and benchmark against competitors.

Data analysts & BI teams

Build databases of brand presence on Facebook — who's verified, how big they are, what categories they're in.

PR and communications teams

Monitor brand pages and track their public-facing information across campaigns.

💡 Why use this scraper?

Facebook removed public API access to Page data in 2018–2019 (Graph API v3.0+), so scraping is effectively the only way to get Page data programmatically. This tool:

  • ✅ Requires no Facebook account — works with public pages
  • ✅ Uses residential proxies — bypasses Facebook's IP-based blocks
  • ✅ Extracts accurate follower counts directly from page rendering
  • ✅ Handles 100+ pages per run efficiently
  • ✅ Returns clean, structured JSON — no post-processing needed

📊 What data do you get?

| Field | Type | Description |
|---|---|---|
| name | string | Page display name |
| pageId | string | Facebook numeric page ID |
| pageName | string | URL slug (username) |
| url | string | Full Facebook page URL |
| category | string | Page category (if available) |
| description | string | About text / meta description |
| followers | number | Follower count |
| likes | number | Page like count |
| checkins | number | Check-in count |
| verified | boolean | Blue verification badge |
| website | string | Linked external website |
| email | string | Contact email (if public) |
| phone | string | Contact phone (if public) |
| address | string | Physical address (if listed) |
| city | string | City (from address) |
| state | string | State/region |
| country | string | Country |
| profilePictureUrl | string | Profile photo URL |
| coverPhotoUrl | string | Cover photo URL |
| instagramUrl | string | Linked Instagram |
| twitterUrl | string | Linked Twitter/X |
| youtubeUrl | string | Linked YouTube |
| priceRange | string | Price range ($ to $$$$) |
| recentPosts | array | Recent post previews (optional) |
| scrapedAt | string | ISO timestamp of scrape |

💰 How much does it cost to scrape Facebook pages?

This actor uses Pay Per Event (PPE) pricing — you pay only for data you extract.

| Event | FREE tier | BRONZE | SILVER | GOLD | PLATINUM | DIAMOND |
|---|---|---|---|---|---|---|
| Run start | $0.01 | $0.0095 | $0.0085 | $0.0075 | $0.006 | $0.005 |
| Per page scraped | $0.15 | $0.135 | $0.125 | $0.12 | $0.115 | $0.11 |

Why is Facebook scraping expensive? Facebook requires residential proxies (~$8/GB) to bypass IP blocks, and their JS-heavy pages send ~5–10MB of traffic per page through the proxy. This is the industry-standard cost for any Facebook scraper — the price reflects real infrastructure costs, not arbitrary markup.

Example costs:

  • 10 pages → ~$1.51 (start fee + 10 × $0.15)
  • 100 pages → ~$15.01
  • 500 pages → ~$75.01

🔧 How to scrape Facebook Pages

Step 1: Prepare your list of page URLs

Collect the Facebook page URLs you want to scrape:

  • Named pages: https://www.facebook.com/cocacola
  • Numeric IDs: https://www.facebook.com/profile.php?id=100064277875532

Step 2: Configure the actor

  1. Paste your page URLs into the Facebook page URLs field (one per line)
  2. Set Max pages to control how many to process
  3. Enable Include recent posts if you want post text previews
  4. Set Proxy country (US recommended)

Step 3: Run and download results

Click Start and results will appear in the Dataset tab as they're scraped.

⚙️ Input options

| Parameter | Type | Default | Description |
|---|---|---|---|
| pageUrls | array | required | Facebook page URLs to scrape |
| maxPages | integer | 50 | Maximum number of pages to process |
| includeRecentPosts | boolean | false | Include up to 3 recent post previews |
| proxyCountry | string | US | Residential proxy country code |

Example input

```json
{
  "pageUrls": [
    "https://www.facebook.com/cocacola",
    "https://www.facebook.com/starbucks",
    "https://www.facebook.com/McDonalds"
  ],
  "maxPages": 50,
  "includeRecentPosts": false,
  "proxyCountry": "US"
}
```

📤 Output format

Each scraped page produces one JSON record in the dataset:

```json
{
  "pageId": "100064277875532",
  "name": "Coca-Cola",
  "pageName": "cocacola",
  "url": "https://www.facebook.com/cocacola",
  "category": "Soft Drink Company",
  "description": "The Coca-Cola Facebook Page is a collection of your stories showing how people from around the world are refreshed by Coca-Cola.",
  "website": "https://www.coca-cola.com/",
  "email": null,
  "phone": null,
  "address": null,
  "followers": 108000000,
  "likes": 108174765,
  "checkins": null,
  "verified": true,
  "profilePictureUrl": "https://scontent.xx.fbcdn.net/...",
  "coverPhotoUrl": null,
  "instagramUrl": null,
  "twitterUrl": null,
  "youtubeUrl": null,
  "priceRange": null,
  "recentPosts": [],
  "scrapedAt": "2026-04-06T01:00:00.000Z"
}
```
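As an example of consuming these records, a lead-generation pass might keep only pages that expose a public contact channel. A minimal sketch — the field names follow the record above, but the sample data and helper name are invented for illustration:

```python
def pages_with_contact_info(records):
    """Keep pages that list at least one public contact channel."""
    return [
        r for r in records
        if r.get("email") or r.get("phone") or r.get("website")
    ]

# Invented sample records shaped like the dataset output above
sample = [
    {"name": "Coca-Cola", "website": "https://www.coca-cola.com/", "email": None, "phone": None},
    {"name": "No Contact Page", "website": None, "email": None, "phone": None},
]
leads = pages_with_contact_info(sample)
print([p["name"] for p in leads])  # ['Coca-Cola']
```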

💡 Tips for best results

Getting more data

  • Category: visible on About section for business/brand pages
  • Contact info: only appears for business pages that have listed it publicly
  • Posts: enable includeRecentPosts for a preview of recent content

Performance tips

  • Process pages in batches of 50-100 for best efficiency
  • Use proxyCountry matching the target pages' primary audience

Page URL formats

  • Named: https://www.facebook.com/pageName
  • Numeric: https://www.facebook.com/100064277875532
  • With www: both work the same way
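Before a run, you can normalize mixed URL shapes into one canonical form. An illustrative helper — not part of the actor — that assumes only the two URL formats listed above:

```python
from urllib.parse import urlparse, parse_qs

def normalize_page_url(raw):
    """Normalize a Facebook page URL to https://www.facebook.com/<slug-or-id>."""
    parsed = urlparse(raw if "://" in raw else "https://" + raw)
    if parsed.path.rstrip("/") == "/profile.php":
        # Numeric-ID form: pull the id out of the query string
        page_id = parse_qs(parsed.query).get("id", [""])[0]
        return f"https://www.facebook.com/{page_id}"
    slug = parsed.path.strip("/")
    return f"https://www.facebook.com/{slug}"

print(normalize_page_url("facebook.com/cocacola"))
print(normalize_page_url("https://www.facebook.com/profile.php?id=100064277875532"))
```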

🔗 Integrations

Export to Google Sheets

Connect this actor to Google Sheets via Apify's integration:

  1. Run the actor
  2. In the Integrations tab, choose "Export to Google Sheets"
  3. Your dataset is automatically added to a spreadsheet

CRM automation

Feed page data into HubSpot, Salesforce, or any CRM via Zapier:

  1. Connect the actor via Zapier's Apify trigger
  2. Map fields to your CRM properties

Scheduled monitoring

Track Facebook page growth over time:

  1. Schedule the actor to run weekly/monthly
  2. Save each run's dataset with a timestamp
  3. Compare follower counts across runs
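The comparison in step 3 can be sketched as a pure function over two dataset snapshots. The pageId and followers fields follow the output format above; the sample numbers are invented:

```python
def follower_growth(previous, current):
    """Map pageId -> follower delta between two scrape snapshots."""
    prev = {r["pageId"]: r.get("followers") or 0 for r in previous}
    return {
        r["pageId"]: (r.get("followers") or 0) - prev.get(r["pageId"], 0)
        for r in current
    }

week1 = [{"pageId": "1", "followers": 1000}]
week2 = [{"pageId": "1", "followers": 1250}]
print(follower_growth(week1, week2))  # {'1': 250}
```

In practice, `previous` and `current` would be the items from two saved run datasets.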

🤖 API usage

Node.js

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/facebook-pages-scraper').call({
  pageUrls: [
    'https://www.facebook.com/cocacola',
    'https://www.facebook.com/starbucks',
  ],
  maxPages: 10,
  proxyCountry: 'US',
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```

Python

```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/facebook-pages-scraper').call(run_input={
    'pageUrls': [
        'https://www.facebook.com/cocacola',
        'https://www.facebook.com/starbucks',
    ],
    'maxPages': 10,
    'proxyCountry': 'US',
})

items = client.dataset(run['defaultDatasetId']).list_items().items
for item in items:
    print(item['name'], item['followers'])
```

cURL

```bash
curl -X POST \
  "https://api.apify.com/v2/acts/automation-lab~facebook-pages-scraper/runs?token=YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "pageUrls": ["https://www.facebook.com/cocacola"],
    "maxPages": 5,
    "proxyCountry": "US"
  }'
```

🤖 MCP (Model Context Protocol)

Use this actor with AI assistants like Claude for natural language page scraping:

Claude Code (terminal)

```bash
claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/facebook-pages-scraper"
```

Claude Desktop / Cursor / VS Code

Add to your MCP server config:

```json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": ["-y", "@apify/mcp-server"],
      "env": {
        "APIFY_TOKEN": "YOUR_TOKEN"
      }
    }
  }
}
```

Then ask Claude:

  • "Scrape the Facebook page for Coca-Cola and give me their follower count and website"
  • "Get contact info from these 5 Facebook pages: [list URLs]"
  • "Which of these brands has the most Facebook followers?"

Or use the actor-specific MCP URL:

https://mcp.apify.com?tools=automation-lab/facebook-pages-scraper

⚖️ Is it legal?

This actor accesses only publicly available data from Facebook Pages — the same data anyone can view without logging in. We do not:

  • Access private profiles or locked content
  • Use login credentials
  • Scrape personal user data or friend lists
  • Violate Facebook's data protection rules for private data

Public business pages are designed to be seen. Scraping their public-facing information (name, contact details, follower counts) is lawful in most jurisdictions, but you should verify how local laws apply to your use case. See our terms of service and consult legal counsel for commercial applications.

❓ FAQ

Why are some fields null?

Facebook renders much of its content with JavaScript, and some data — especially category, phone, and email — only appears after the page fully renders. The scraper waits for rendering to complete, but many pages, particularly large brand pages, simply don't list these fields publicly.

Why is the follower count rounded (e.g., 34,000,000 instead of 34,339,332)?

Facebook shows large counts abbreviated (e.g., "34M followers") on some page views, which we display as-is. The likes field typically shows the precise count from meta tags.
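If you need to turn an abbreviated count back into an approximate number for comparisons, an illustrative parser (this helper is ours, not something the actor outputs; it assumes the K/M/B suffix convention):

```python
import re

_SUFFIXES = {"K": 1_000, "M": 1_000_000, "B": 1_000_000_000}

def parse_abbreviated_count(text):
    """Parse strings like '34M' or '12.5K' into approximate integers."""
    match = re.fullmatch(r"([\d.]+)\s*([KMB])?", text.strip().upper())
    if not match:
        raise ValueError(f"unrecognized count: {text!r}")
    value, suffix = match.groups()
    return int(float(value) * _SUFFIXES.get(suffix, 1))

print(parse_abbreviated_count("34M"))    # 34000000
print(parse_abbreviated_count("12.5K"))  # 12500
```

Remember the result is an approximation; prefer the precise `likes` value where available.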

Can I scrape private Facebook pages?

No. This actor only works with public pages — those visible without a Facebook account.

Why does the scraper use residential proxies?

Facebook blocks datacenter IP ranges (typically responding with a 302 redirect). Residential proxies bypass this at a modest cost.

I got "Facebook" as the page name for some URLs

This happens when the URL doesn't correspond to an actual public page, causing Facebook to redirect to its homepage. Check that your URLs are valid and the pages are public.

How many pages can I scrape per run?

There's no hard limit, but for cost and reliability we recommend runs of 50-200 pages. Use the maxPages setting to control batch size.