🔥 Linkedin Companies & Profiles Bulk Scraper

2 days trial then $29.00/month - No credit card required now


bebity/linkedin-premium-actor

LinkedIn companies & profiles scraper. Get comprehensive profiles of individuals and companies based on your keywords and filters. Unleash the power of data! 🌐🔍


Scraper failing when having multiple concurrent runs

Open

Bronto-Prod opened this issue
21 days ago

Hi Bebity, what's the limit on concurrently running jobs? We're experiencing errors when there are more than ~5 concurrent runs. What's the maximum, and is it shared across all users?


Bebity (bebity)

21 days ago

Hi Bronto-Prod,

The issue you're experiencing with the scraper failing when there are multiple concurrent runs is likely related to the previous outage. Regarding your question about the limit of concurrently running jobs, it's typically shared across all users and varies depending on the resources allocated to our platform.

Could you please let us know how many instances you would like to run concurrently? We'll do our best to accommodate your needs within the available resources.

Thank you for bringing this to our attention, and we apologize for any inconvenience caused.

Best regards, The Bebity Team


Bronto-Prod

21 days ago

Hi Bebity,

It very much depends on the load we have. Sometimes all we need is a few runs a day, and sometimes we have much higher usage to accommodate. For example, we're running some data migrations right now, and the more concurrent jobs we can spin up, the better, so that we can decrease the total run time.

That's why I'm asking whether there is some number we should not exceed, or, on the other hand, an ideal number to use.


Bebity (bebity)

21 days ago

Thank you for clarifying the context. We understand the importance of adapting to different levels of usage, especially when migrating data.

Regarding your question about the maximum number of tasks running simultaneously: we currently have 10 shared instances available for all users on Apify. When you encounter the message "Error loading configuration", it means that no more instances are available to process your request, or, in a rare case like yesterday, that our services failed following a LinkedIn update.

To solve this problem and ensure smoother operation during peak usage periods, we can implement a retry mechanism that will continually attempt to process your request until an instance becomes available, instead of immediately returning an error. This approach mitigates the impact of the instance limit and provides a better user experience, but may cost you more, since the run keeps consuming compute time while it waits.
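A retry mechanism like the one described could look roughly like the sketch below. This is only an illustration, not Bebity's actual implementation: the `NoInstanceAvailable` exception and the `task` callable are hypothetical stand-ins for whatever signals the "Error loading configuration" condition. Exponential backoff with jitter keeps many waiting runs from hammering the shared instance pool at the same moment, and the wait budget ensures the run eventually fails loudly instead of retrying forever.

```python
import random
import time


class NoInstanceAvailable(Exception):
    """Hypothetical error raised when all shared scraper instances are busy."""


def retry_until_instance_free(task, max_wait_seconds=600, base_delay=5):
    """Keep retrying `task` with exponential backoff plus jitter until it
    succeeds or the total wait budget is exhausted, then re-raise the error."""
    deadline = time.monotonic() + max_wait_seconds
    attempt = 0
    while True:
        try:
            return task()
        except NoInstanceAvailable:
            attempt += 1
            # Double the delay each attempt, capped at 60s.
            delay = min(base_delay * 2 ** (attempt - 1), 60)
            # Random jitter spreads out retries from concurrent waiters.
            delay += random.uniform(0, delay / 2)
            if time.monotonic() + delay > deadline:
                raise  # budget exhausted: fail loudly instead of looping forever
            time.sleep(delay)
```

The key design point is the final `raise`: once the budget is spent, the caller sees a real failure rather than a silent empty result, which matches the status-reporting concern raised later in this thread.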

Please let us know if you would like us to proceed with the implementation of this retry mechanism, and we will be happy to discuss this further with our team.

Thank you for your understanding and cooperation.


Bronto-Prod

20 days ago

Thanks for the response, Bebity. If I could suggest something: the ideal behavior would probably be for the run to fail. If the run status is FAILED rather than SUCCEEDED, we can act accordingly and reschedule the run. Currently it returns a success with 0 results, which can also be a valid response when the profiles we were looking for simply weren't found, so the two cases are indistinguishable.
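The caller-side logic Bronto-Prod wants becomes simple once the statuses are distinguishable. The sketch below assumes a hypothetical `start_run` callable that returns a dict with a `status` field (Apify runs report statuses such as `SUCCEEDED` and `FAILED`) and an `items` list; it reschedules only on FAILED and accepts an empty SUCCEEDED result as a legitimate "nothing found".

```python
def handle_run(start_run, max_attempts=3):
    """Launch the scraper via `start_run` (hypothetical), rescheduling on
    FAILED runs but accepting successful runs even when they return no items."""
    for attempt in range(1, max_attempts + 1):
        run = start_run()  # expected shape: {"status": str, "items": list}
        if run["status"] == "SUCCEEDED":
            return run["items"]  # an empty list here is a valid answer
        # FAILED (e.g. no free shared instance): fall through and reschedule
    raise RuntimeError(f"Run still failing after {max_attempts} attempts")
```

With the current behavior (SUCCEEDED with 0 results even when no instance was available), this branch cannot be written, which is exactly the problem being reported.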


Bebity (bebity)

19 days ago

Okay, we've just added it to the to-do list. We will comment here once it's done.

Regards

Developer
Maintained by Community
Actor metrics
  • 254 monthly users
  • 1 star
  • 98.2% runs succeeded
  • 18 days response time
  • Created in Jul 2023
  • Modified 6 days ago