Contact Details Scraper
Free email extractor to extract and download emails, phone numbers, Facebook, Twitter, LinkedIn, and Instagram profiles from any website. Extract contact information at scale from lists of URLs and download the data as Excel, CSV, JSON, HTML, and XML.
Wasting resources on retries
Closed
I've noticed that when scraping, most resources are spent on links that are inaccessible or slow to load. Each of those links is retried multiple times. Is it possible to limit the number of attempts (it's currently set to 4) or shorten the timeout for loading a page?
zuzka replied:
Hey, we used to have this option in the input schema; I'll check whether it's still used in the code.
lukaskrivka replied:
Hello,
Sorry for the late update. You can add `maxRequestRetries: 1` to the JSON input.
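For example, the run input JSON might look like the sketch below. The `startUrls` value is a placeholder, and the exact effect of `maxRequestRetries` (whether it counts total attempts or retries after the first attempt) should be confirmed in the actor's input documentation:

```json
{
  "startUrls": [
    { "url": "https://example.com" }
  ],
  "maxRequestRetries": 1
}
```

With a lower retry count, requests to dead or slow links are abandoned sooner, which addresses the resource waste described in the question.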
- 1.3k monthly users
- 67 stars
- 98.0% runs succeeded
- 8.9 days response time
- Created in May 2019
- Modified about 1 month ago