
Contact Details Scraper
Free email extractor and lead scraper to extract and download emails, phone numbers, Facebook, Twitter, LinkedIn, and Instagram profiles from any website. Extract contact information at scale from lists of URLs and download the data as Excel, CSV, JSON, HTML, and XML.
4.0 (36)
Pricing: Pay per event
Total users: 31.7k
Monthly users: 1.6k
Runs succeeded: >99%
Issues response: 2.7 days
Last modified: 6 days ago
Is there a way to have this scraper look a little closer?
Closed
Is there any reason this run didn't extract the phone number? https://www.hooksandtinestaxidermy.com/
The phone number is on this page twice, but wasn't extracted. Any help would be appreciated.

Ondrej Klinovský (ondrejklinovsky)
Hi,
thanks for the question. You'll find the number in the phonesUncertain field in the output (click the "All fields" button to see it). We store phone numbers in two fields: phones and phonesUncertain. Here's the explanation of the two from Crawlee's source code:

Note that the phones field contains phone numbers extracted from special phone links such as [call us](tel:+1234556789) (see {@apilink phonesFromUrls}) and potentially other sources with high certainty, while phonesUncertain contains phone numbers extracted from the plain text, which might be very inaccurate.
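The distinction between the two fields can be sketched roughly like this (illustrative only, not the actor's actual code; the function names and regexes are assumptions):

```javascript
// Sketch of the two extraction strategies described above.
// "phones" comes from tel: links, which are unambiguous;
// "phonesUncertain" comes from pattern matches over plain text,
// which can misfire on IDs, dates, and other digit runs.
function phonesFromTelLinks(html) {
  // High certainty: numbers explicitly marked up as href="tel:...".
  return [...html.matchAll(/href="tel:([+\d][\d\s().-]*)"/g)]
    .map((m) => m[1].replace(/[\s().-]/g, ''));
}

function phonesFromPlainText(text) {
  // Uncertain: a naive digit-run pattern over the raw text.
  return [...text.matchAll(/\+?\d[\d\s().-]{7,}\d/g)]
    .map((m) => m[0].trim());
}

const html = '<a href="tel:+1234556789">call us</a> Fax: 555-123-4567';
console.log(phonesFromTelLinks(html));  // tel: link number only
console.log(phonesFromPlainText(html)); // every digit run that looks like a phone
```

So a number that appears only in the page body, like the one on hooksandtinestaxidermy.com, ends up in phonesUncertain.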
key.workflow
Ahhh got it, thanks.
key.workflow
Actually one more thing. I ran another one and it failed to return the email on the contact page, it returned a generic godaddy.com email. Would I have to adjust the maxDepth for it to reach the contact page?
key.workflow
It failed to return an email and phone number from this website too. https://www.gyblv.com/
Are these considered plain text? Should I be changing any settings? Thank you.

Ondrej Klinovský (ondrejklinovsky)
I ran another one and it failed to return the email on the contact page, it returned a generic godaddy.com email. Would I have to adjust the maxDepth for it to reach the contact page?
Try increasing maxRequestsPerStartUrl in your input (or remove it completely). In your run it was set to 1, meaning the actor will always scrape just one page regardless of maxDepth. That should solve the issue with https://www.gyblv.com/ as well.
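A minimal input sketch along those lines (the field names match the run discussed above; the limit values and startUrls shape are illustrative assumptions):

```json
{
  "startUrls": [{ "url": "https://www.gyblv.com/" }],
  "maxDepth": 2,
  "maxRequestsPerStartUrl": 20
}
```

With maxRequestsPerStartUrl above 1 (or omitted), the crawler can follow links such as a Contact page up to maxDepth levels deep instead of stopping at the start URL.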