Facebook Marketplace Search

Pricing: from $10.00 / 1,000 results. Developer: Tin (Maintained by Community).
Facebook Marketplace Scraper

Extract listings from Facebook Marketplace search results at scale. The scraper collects titles, prices, locations, photos, and listing URLs — without requiring a Facebook account (or optionally with one for more results). Run it on the Apify platform with built-in proxy rotation, scheduling, and export to JSON, CSV, or Excel.

What does Facebook Marketplace Scraper do?

This Actor opens Facebook Marketplace search URLs, extracts the pre-rendered (SSR) listing data embedded in the page HTML, then scrolls to the bottom to load additional results via Facebook's GraphQL API. It deduplicates across both sources so every item appears exactly once.

Try it by pasting any Marketplace search URL — e.g. https://www.facebook.com/marketplace/newyork/search/?query=iphone — and setting a maxItems limit.
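The cross-source deduplication described above can be sketched as follows. This is a simplified illustration, not the Actor's actual code; `ssrItems` and `graphqlItems` stand for hypothetical arrays of listing objects, each carrying Facebook's listing `id`:

```javascript
// Merge listings from the SSR payload and the GraphQL scroll responses,
// keeping the first occurrence of each listing id.
function dedupeListings(ssrItems, graphqlItems) {
  const seen = new Set();
  const merged = [];
  for (const item of [...ssrItems, ...graphqlItems]) {
    if (seen.has(item.id)) continue;
    seen.add(item.id);
    merged.push(item);
  }
  return merged;
}
```

Because the SSR HTML and the GraphQL responses overlap for the first page of results, a merge like this ensures each listing appears exactly once in the dataset.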

Why use Facebook Marketplace Scraper?

  • Market research — monitor prices for specific product categories across cities or countries
  • Resale & arbitrage — automate deal-finding for goods sold below market value
  • Real estate leads — track rental and property listings before they sell
  • Competitive intelligence — watch how competitors list and price their inventory
  • Data science — build price trend datasets for analysis or ML training

How to use Facebook Marketplace Scraper

  1. Go to Facebook Marketplace and search for what you want to scrape.
  2. Copy the search results URL from your browser's address bar.
  3. Open the Actor's Input tab and paste the URL into Start URLs.
  4. Set Max items to limit how many listings are collected (default: 10).
  5. Optionally choose a Proxy Country to get results from a specific region.
  6. Click Start and wait for the run to finish.
  7. Download your results from the Output tab in JSON, CSV, or Excel.

Input

Configure the Actor from the Input tab in Apify Console, or provide a JSON input:

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| startUrls | array | Facebook Marketplace search URLs to scrape | (required) |
| maxItems | integer | Maximum number of listings to collect | 10 |
| countryCode | string | Proxy exit country (US, DE, VN, FR, GB) | US |
| loginCookies | array | Browser cookies for an authenticated Facebook session | (optional) |

Example input:

```json
{
  "startUrls": [
    { "url": "https://www.facebook.com/marketplace/losangeles/search/?query=macbook" }
  ],
  "maxItems": 50,
  "countryCode": "US"
}
```

Using login cookies (optional)

To access more results or region-restricted listings, you can log in via cookies:

  1. Install the EditThisCookie v3 Chrome extension.
  2. Log in to Facebook in your browser.
  3. Export your cookies using the extension and paste the JSON array into loginCookies.

Note: When login cookies are used, the Actor runs with concurrency 1 to avoid triggering Facebook's account protection.
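If you want to sanity-check exported cookies before pasting them, the EditThisCookie export format can be mapped to the parameter shape Puppeteer's `page.setCookie()` accepts. This is a hedged sketch: the field names follow EditThisCookie's JSON export, and the `sameSite` mapping is an assumption about how such values would be normalized, not a description of the Actor's internals:

```javascript
// Convert one EditThisCookie-style export entry into the shape accepted
// by Puppeteer's page.setCookie(). Fields absent from the export are omitted.
function toPuppeteerCookie(c) {
  const sameSiteMap = { no_restriction: 'None', lax: 'Lax', strict: 'Strict' };
  const cookie = {
    name: c.name,
    value: c.value,
    domain: c.domain,
    path: c.path || '/',
    httpOnly: Boolean(c.httpOnly),
    secure: Boolean(c.secure),
  };
  if (typeof c.expirationDate === 'number') cookie.expires = c.expirationDate;
  if (c.sameSite && sameSiteMap[c.sameSite]) cookie.sameSite = sameSiteMap[c.sameSite];
  return cookie;
}
```

In practice you only paste the raw exported JSON array into `loginCookies`; a transformation like this is useful if you build your own tooling around the input.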

Output

Results are saved to the default Dataset. Each item corresponds to one Marketplace listing.

Example output (2 items):

```json
[
  {
    "facebookUrl": "https://www.facebook.com/marketplace/losangeles/search/?query=macbook",
    "listingUrl": "https://www.facebook.com/marketplace/item/1234567890123456",
    "id": "1234567890123456",
    "name": "MacBook Pro 14\" M3 - Like New",
    "listing_price": {
      "amount": "1200",
      "currency": "USD",
      "formatted_amount": "$1,200"
    },
    "location": {
      "reverse_geocode": {
        "city": "Los Angeles",
        "state": "California"
      }
    },
    "primary_listing_photo": {
      "image": {
        "uri": "https://scontent.xx.fbcdn.net/v/..."
      }
    }
  },
  {
    "facebookUrl": "https://www.facebook.com/marketplace/losangeles/search/?query=macbook",
    "listingUrl": "https://www.facebook.com/marketplace/item/9876543210987654",
    "id": "9876543210987654",
    "name": "MacBook Air M2 Space Gray 256GB",
    "listing_price": {
      "amount": "750",
      "currency": "USD",
      "formatted_amount": "$750"
    },
    "location": {
      "reverse_geocode": {
        "city": "Santa Monica",
        "state": "California"
      }
    },
    "primary_listing_photo": {
      "image": {
        "uri": "https://scontent.xx.fbcdn.net/v/..."
      }
    }
  }
]
```

You can download the dataset in various formats such as JSON, HTML, CSV, or Excel.

Data fields

| Field | Description |
| --- | --- |
| facebookUrl | The Marketplace search URL that was scraped |
| listingUrl | Direct link to the individual listing page |
| id | Facebook's unique listing ID |
| name | Listing title |
| listing_price.amount | Price as a numeric string |
| listing_price.currency | Currency code (e.g. USD) |
| listing_price.formatted_amount | Human-readable price (e.g. $1,200) |
| location.reverse_geocode.city | City where the listing is located |
| location.reverse_geocode.state | State/region of the listing |
| primary_listing_photo.image.uri | URL of the listing's main photo |
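Since the output nests price and location objects, custom post-processing (for example, building your own CSV) benefits from a small flattening step. A sketch, with field paths matching the example output above and missing fields falling back to empty strings:

```javascript
// Flatten one dataset item into a single-level record suitable for CSV.
function flattenListing(item) {
  return {
    id: item.id,
    name: item.name,
    price: item.listing_price?.formatted_amount ?? '',
    currency: item.listing_price?.currency ?? '',
    city: item.location?.reverse_geocode?.city ?? '',
    state: item.location?.reverse_geocode?.state ?? '',
    photo: item.primary_listing_photo?.image?.uri ?? '',
    url: item.listingUrl,
  };
}
```

Note that Apify's built-in CSV/Excel export already handles nested objects, so this helper is only needed when you process the JSON yourself.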

Pricing / Cost estimation

This Actor uses Puppeteer (headless Chrome) and residential proxies, which are more expensive than HTTP-only scrapers. Typical costs on Apify:

  • ~0.10–0.20 compute units per run for 50–100 listings
  • Residential proxy bandwidth adds cost depending on page size (~1–3 MB per search page)

The Apify Free plan includes $5/month in free usage, enough for hundreds of listings. For large-scale or recurring scrapes, a paid plan is recommended.
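As a rough back-of-the-envelope, a run's cost can be modeled as compute plus proxy bandwidth. All figures below (compute units per run, compute-unit price, listings per page, page size, per-GB price) are illustrative assumptions for the sketch, not Apify's actual rates:

```javascript
// Rough cost model: (compute units * CU price) + (proxy bandwidth in GB * GB price).
// Every default here is an illustrative assumption; substitute your plan's real rates.
function estimateRunCostUsd({
  listings,
  listingsPerPage = 25, // assumed listings loaded per scrolled search page
  mbPerPage = 2,        // assumed bandwidth per search page, in MB
  cuPerRun = 0.15,      // assumed compute units consumed by the run
  cuPriceUsd = 0.4,     // assumed price per compute unit
  gbPriceUsd = 8,       // assumed price per GB of residential proxy traffic
}) {
  const pages = Math.ceil(listings / listingsPerPage);
  const computeCost = cuPerRun * cuPriceUsd;
  const proxyCost = ((pages * mbPerPage) / 1024) * gbPriceUsd;
  return computeCost + proxyCost;
}
```

Under these assumptions, a 100-listing run costs on the order of ten cents, which is consistent with the Free plan covering hundreds of listings per month.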

Tips and advanced options

  • Scrape multiple cities — add several search URLs in startUrls, each targeting a different location (e.g. /marketplace/chicago/, /marketplace/houston/).
  • Filter by category — Facebook Marketplace URLs support category filters. Apply them in the browser first, then copy the filtered URL.
  • Set maxItems conservatively — for price monitoring use cases, 20–50 items per run keeps costs low. The scraper scrolls until the limit is reached, avoiding unnecessary network requests.
  • Schedule recurring runs — use Apify's built-in scheduler to run the Actor daily or hourly and detect new listings automatically.
  • Export to Google Sheets — use the Apify Google Sheets integration to push results directly to a spreadsheet.
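The multi-city tip above can be automated with a small helper that expands one query into per-city search URLs. City slugs such as `chicago` are just examples; use the slug Facebook shows in your browser's address bar for each location:

```javascript
// Build startUrls entries for the same query across several city slugs.
function buildStartUrls(query, citySlugs) {
  return citySlugs.map((slug) => ({
    url: `https://www.facebook.com/marketplace/${slug}/search/?query=${encodeURIComponent(query)}`,
  }));
}
```

Pass the result directly as the `startUrls` input field.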

FAQ, disclaimers, and support

Is it legal to scrape Facebook Marketplace? Web scraping publicly visible data is generally permitted for personal, research, and non-commercial purposes in many jurisdictions. However, Facebook's Terms of Service prohibit automated data collection. Use this Actor responsibly, respect rate limits, and do not store personal data without a legal basis. Always consult a legal professional for your specific use case.

Will it work without a Facebook account? Yes — the Actor scrapes publicly visible search results. Login cookies are optional and may increase the number of results visible.

Why are fewer results returned than maxItems? Facebook may limit results for unauthenticated sessions, or the search query may have fewer matching listings than the limit.

Known limitations

  • Facebook frequently changes its internal data structures; the scraper may need updates if extraction breaks.
  • Very high concurrency is not recommended — use the default single-session mode to avoid account flags.

Support

Found a bug or need a custom solution? Open an issue on the Actor's Issues tab or contact us through Apify Console.

Included features

  • Puppeteer Crawler - simple framework for parallel crawling of web pages using headless Chrome with Puppeteer
  • Configurable Proxy - tool for working around IP blocking
  • Input schema - define and easily validate a schema for your Actor's input
  • Dataset - store structured data where each object stored has the same attributes
  • Apify SDK - toolkit for building Actors

How it works

  1. Actor.getInput() reads the Actor input (from INPUT.json when running locally), where the start URLs are defined

  2. Create a configuration for proxy servers to be used during the crawling with Actor.createProxyConfiguration() to work around IP blocking. Use Apify Proxy or your own proxy URLs, provided and rotated according to the configuration. You can read more about proxy configuration in the Apify documentation.

  3. Create an instance of Crawlee's Puppeteer Crawler with new PuppeteerCrawler(). You can pass options to the crawler constructor as:

    • proxyConfiguration - provide the proxy configuration to the crawler
    • requestHandler - handle each request with custom router defined in the routes.js file.
  4. Handle requests with the custom router from the routes.js file. Read more about custom routing in the Crawlee documentation.

    • Create a new router instance with new createPuppeteerRouter()

    • Define default handler that will be called for all URLs that are not handled by other handlers by adding router.addDefaultHandler(() => { ... })

    • Define additional handlers - here you can add your own handling of the page

      ```javascript
      import { Dataset } from 'crawlee';

      router.addHandler('detail', async ({ request, page, log }) => {
          const title = await page.title();
          // You can add your own page handling here
          await Dataset.pushData({
              url: request.loadedUrl,
              title,
          });
      });
      ```
  5. Call crawler.run(startUrls) to start the crawler and wait for it to finish

Resources

If you're looking for examples or want to learn more, visit the Apify documentation and the Crawlee documentation.

Getting started

For complete information, see the Apify documentation on running Actors locally. To run the Actor, use the following command:

```shell
apify run
```

Deploy to Apify

Connect Git repository to Apify

If you've created a Git repository for the project, you can easily connect to Apify:

  1. Go to Actor creation page
  2. Click on Link Git Repository button

Push the project from your local machine to Apify

You can also deploy the project from your local machine to Apify without needing a Git repository.

  1. Log in to Apify. You will need to provide your Apify API Token to complete this action.

    apify login
  2. Deploy your Actor. This command will deploy and build the Actor on the Apify Platform. You can find your newly created Actor under Actors -> My Actors.

    apify push

Documentation reference

To learn more about Apify and Actors, take a look at the Apify documentation and the Apify SDK documentation.