🔥 Linkedin Companies & Profiles Bulk Scraper
2 days trial then $29.00/month - No credit card required now
Companies & Profiles Linkedin scraper. Get comprehensive profiles of individuals and companies based on your keywords and filters. Unleash the power of data! 🌐🔍
Hi Bebity, what's the limit of concurrently running jobs? We're experiencing errors when there are more than ~5 concurrent runs. What's the maximum, and is it shared across all users?
Hi Bronto-Prod,
The issue you're experiencing with the scraper failing when there are multiple concurrent runs is likely related to the previous outage. Regarding your question about the limit of concurrently running jobs, it's typically shared across all users and varies depending on the resources allocated to our platform.
Could you please let us know how many instances you would like to run concurrently? We'll do our best to accommodate your needs within the available resources.
Thank you for bringing this to our attention, and we apologize for any inconvenience caused.
Best regards, The Bebity Team
Hi Bebity,
It very much depends on the load we have. Sometimes all we need is a few runs a day, and sometimes we have much higher usage to accommodate. Right now, for example, we're running some data migrations, and the more concurrent jobs we can spin up, the better, so that we can cut the total run time.
That's why I'm asking whether there is some number we should not exceed or, on the other hand, an ideal number to use.
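As a rough sketch of the kind of batching described above, the snippet below starts runs in capped batches with the Apify JavaScript client. The actor ID, token variable, and cap of 5 are placeholders, not values confirmed anywhere in this thread.

```typescript
import { ApifyClient } from 'apify-client';

// Placeholders: the token source, actor ID, and concurrency cap are assumptions.
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });
const MAX_CONCURRENT = 5;

// Start runs in batches so that at most MAX_CONCURRENT are in flight at once.
async function runInBatches(inputs: object[]): Promise<void> {
    for (let i = 0; i < inputs.length; i += MAX_CONCURRENT) {
        const batch = inputs.slice(i, i + MAX_CONCURRENT);
        // .call() starts the run and waits until it finishes
        await Promise.all(batch.map((input) => client.actor('ACTOR_ID').call(input)));
    }
}
```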
Thank you for clarifying the context. We understand the importance of adapting to different levels of usage, especially when migrating data.
Regarding your question about the maximum number of tasks running simultaneously, we currently have 10 shared instances available for all users on Apify. When you encounter the message “Error loading configuration”, it means that no more instances are available to process your request or, in a rare case like yesterday, that our services failed following a LinkedIn update.
To solve this problem and ensure smoother operation during peak usage periods, we can implement a retry mechanism that continually attempts to process your request until an instance becomes available, instead of immediately returning an error. This approach mitigates the impact of instance limitations and provides a better user experience, but it may cost you more, since the run keeps running while it waits.
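A rough caller-side approximation of that retry idea, assuming the Apify JavaScript client and treating an empty dataset or a non-succeeded run as "no instance was free" (the ACTOR_ID and back-off interval are placeholders):

```typescript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Retry the call with a growing delay until the run succeeds with data,
// instead of giving up on the first "no instance available" response.
async function callWithRetry(input: object, maxAttempts = 5) {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        const run = await client.actor('ACTOR_ID').call(input);
        const { items } = await client.dataset(run.defaultDatasetId).listItems();
        if (run.status === 'SUCCEEDED' && items.length > 0) return items;
        // back off so a shared instance has time to free up
        await new Promise((resolve) => setTimeout(resolve, attempt * 30_000));
    }
    throw new Error('No instance became available within the retry budget');
}
```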
Please let us know if you would like us to proceed with the implementation of this retry mechanism, and we will be happy to discuss this further with our team.
Thank you for your understanding and cooperation.
Thanks for the response, Bebity. If I could suggest something, the ideal scenario would probably be for the run to fail. If the run status is failed rather than succeeded, we can act accordingly and reschedule the run. Currently it returns a success with 0 results, which can also be a valid response when we were looking for a small number of profiles and none were found.
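If the actor were changed to fail in that situation, the caller-side check could look roughly like this (again assuming the Apify JavaScript client and a placeholder ACTOR_ID):

```typescript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Returns true when the run produced usable output, false when it should be requeued.
async function runOnce(input: object): Promise<boolean> {
    const run = await client.actor('ACTOR_ID').call(input);
    if (run.status === 'FAILED') {
        // An explicit failure is unambiguous: no instance was free, so reschedule this job.
        return false;
    }
    // With the change, SUCCEEDED with 0 items genuinely means "no profiles matched".
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    console.log(`Run ${run.id} finished with ${items.length} items`);
    return true;
}
```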
Okay, we've just added it to the to-do list. We will comment here once it's done.
Regards
Actor Metrics
473 monthly users
80 stars
>99% runs succeeded
23 days response time
Created in Jul 2023
Modified 4 days ago