Facebook Ads Scraper
Pay $5.00 for 1,000 Campaigns
Extract advertising data from one or multiple Facebook Pages. Get page details, reach estimates, publisher platforms, report count, number of impressions, ad IDs, timestamps, and more. Download Facebook ads data in JSON, CSV, or Excel format and use it in apps, spreadsheets, and reports.
Hi, how can I use residential proxy in this scraper?
Hi! Internally the actor already enforces the use of groups: ['RESIDENTIAL'],
so just run it; no additional action is required.
I then have two questions:
- If the residential proxy is turned on by default, why am I not charged for it? It costs $13/GB, and if this actor were really using a residential proxy, I would expect it to charge me.
- The reason I want to check the residential proxy usage is that the actor works poorly for me. I put in a link from the Meta Ad Library, and the Ad Library shows 6,700 results for the given query, but the actor returns only 2,500–3,500 results (I ran it twice). I get the following error in the log before the actor stops: 2023-10-16T14:19:53.435Z ERROR CheerioCrawler: Request failed and reached maximum retries. Error: 500 - Internal Server Error. What can I do to fix this and make sure the actor gathers as much data as possible?
Hi!
- The actor is very light on traffic, so for a single run you will probably not notice a chargeable amount, and for PayPerResult actors the consumed traffic is not displayed in the run details.
- Never expect the exact number shown by the counter: the actor does not log in, and what the Fb Ads Library displays without login is limited. In a browser you can see the counter, but you will not be able to scroll through an unlimited number of results; there is always a limit, and the error you mention means that limit was reached. A workaround is to use several narrower queries, each with fewer total results, so you collect the desired number of ads across multiple URLs.
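The "several narrower queries" workaround can be sketched as follows. This is an illustrative example only: the Ad Library query parameters shown (`q`, `country`, `active_status`, `ad_type`) are assumptions based on typical Ad Library URLs, and the `narrower_queries` helper is hypothetical, not part of the actor.

```python
from urllib.parse import urlencode

AD_LIBRARY = "https://www.facebook.com/ads/library/"

def narrower_queries(keyword, countries):
    """Build one Ad Library URL per country so each individual query
    returns fewer results and stays under the no-login scroll limit."""
    urls = []
    for country in countries:
        params = {
            "active_status": "all",  # assumed parameter names; verify
            "ad_type": "all",        # against a real Ad Library URL
            "country": country,
            "q": keyword,
        }
        urls.append(AD_LIBRARY + "?" + urlencode(params))
    return urls

# One broad query becomes three narrower per-country queries.
urls = narrower_queries("coffee", ["US", "GB", "DE"])
for u in urls:
    print(u)
```

Feeding each of these URLs to the actor should retrieve more ads in total than one broad query that hits the scroll limit.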
Hi, thanks!
So, which of the following approaches works better:
- Put many links into the actor and run it once
- Give it only one link per run (each link in a new run)
I would be thankful for a reply.
Hi! Option 1 is recommended because it works faster (the batch of URLs is scaled based on available RAM, so 10 URLs in a single run are scraped faster than 10 runs with 1 URL each).
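A minimal sketch of the recommended batching, using the Apify platform's usual list-of-URLs input shape. The input field name `startUrls` is an assumption here; check this actor's input schema for the actual field names before running.

```python
# Batch all URLs into ONE run input instead of one run per URL,
# so the actor can scale the batch based on available RAM.
urls = [
    "https://www.facebook.com/ads/library/?country=US&q=coffee",
    "https://www.facebook.com/ads/library/?country=GB&q=coffee",
]

# "startUrls" with {"url": ...} objects is a common Apify input shape,
# but it is an assumption for this particular actor.
run_input = {"startUrls": [{"url": u} for u in urls]}

# With the official Apify Python client the input would be passed like:
# from apify_client import ApifyClient
# client = ApifyClient("<APIFY_TOKEN>")
# run = client.actor("<ACTOR_ID>").call(run_input=run_input)

print(run_input)
```

The key point is that all links travel in a single `run_input`, producing one run rather than many.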
- 308 monthly users
- 100.0% runs succeeded
- 1.2 days response time
- Created in Apr 2023
- Modified 6 days ago