🔥 LinkedIn Jobs Scraper
3 days trial then $29.99/month - No credit card required now
ℹ️ Designed for both personal and professional use: simply enter your desired job title and location to receive a tailored list of job opportunities. Try it today!
Hi, am I missing something? Why only 50 results?
I have the same question, and unfortunately could not find any information on restrictions. I ran the Actor for a tech position in a big EU country and got only 49 results. Where are the other postings? A logged-in web search on LinkedIn currently shows 1,231 results. Could someone clarify if I am doing something wrong?
Hi there, can you please share the run in question?
My run ID is K7vr2Bak51bmbdbX3.
Hi everyone,
It seems you're getting only 50 results because of how the rows parameter is set in your input.
By default it is 50, but you can increase it to retrieve more results, up to a maximum of 1000.
Here's how you can adjust your input:

    {
        "location": "Germany",
        "proxy": {
            "useApifyProxy": true,
            "apifyProxyGroups": ["RESIDENTIAL"]
        },
        "title": "Cloud Architect",
        "publishedAt": "",
        "rows": 1000
    }
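If you start your runs programmatically instead of through the form, here is a minimal sketch using the Python apify-client with the same input. The API token, the Actor ID, and the printed fields are placeholders, not confirmed values:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # placeholder token

run_input = {
    "title": "Cloud Architect",
    "location": "Germany",
    "publishedAt": "",
    "rows": 1000,  # raise the default of 50 up to the 1000 maximum
    "proxy": {"useApifyProxy": True, "apifyProxyGroups": ["RESIDENTIAL"]},
}

# "<ACTOR_ID>" stands for this Actor's ID (or "username/actor-name").
run = client.actor("<ACTOR_ID>").call(run_input=run_input)

# Read the scraped jobs from the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
```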
I've also created a corrected run with the rows set to 1000.
You can check it out here: https://console.apify.com/view/runs/RjW2TNzDhuQXBQYoK
Let us know if this solves the issue!
Best, Bebity
Thanks for clarifying! It might have been a glitch between the form field values in the frontend app and the JSON input parameters passed to the backend. On the second try I retrieved over 50 results, as expected.
But I still don't understand the result count: 830 results for 931 requests with rows set to 1000. In the logs I can see a bunch of 429 codes, almost every second line in a roughly 400-line log. Does this mean that 101 requests were blocked (931 - 830)? Any suggestions for overcoming this (I already used residential IPs)?
The next question is how to get past the 1000-row limit. Could you propose a strategy for batch processing beyond it? On Amazon it can be a price range, but this is LinkedIn. Any ideas other than trying different keywords for position, date range, and location? My goal is to exclude the same results in consecutive runs.
Hi again,
Thanks for the update! Let me clarify a few points:
- 429 Status Code (Request Blocked): The retryCount in the log represents how many times the system retried after receiving a 429 (Too Many Requests) error from LinkedIn. This is normal behavior and ensures that the system keeps trying until it successfully retrieves the content. So the fact that there are more requests than results (931 requests for 830 results) is expected and not an issue. The warnings are just retries and can be safely ignored, as they are not errors.
- 1000 Results Limit: LinkedIn's public API has a maximum limit of 1000 jobs per query. Unfortunately, there is no way to exceed this limit directly. To work around it, you can split your searches by varying parameters like location, job title, or date range (see the sketch after this list).
- Avoiding Duplicate Results: One effective strategy to avoid duplicate results in consecutive runs is to use the publishedAt field. You can filter jobs posted within the last 24 hours, the past week, or the last month to focus on newer job postings and avoid retrieving the same results.
- Unique Job IDs: Each job listing has a unique job_id, which will help you track and exclude duplicates across multiple runs. By filtering out results with the same job_id, you can avoid processing the same job listing more than once (also covered in the sketch after this list).
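To make the split-and-deduplicate approach concrete, here is a hedged sketch that runs one search across several publishedAt windows and deduplicates the concatenated results. The window values (LinkedIn-style r86400, etc.) and the job ID field name are assumptions; check the Actor's input schema and dataset fields before relying on them:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # placeholder token

# Assumed publishedAt values in LinkedIn's "f_TPR" style (r86400 = past 24 h,
# r604800 = past week, r2592000 = past month); verify against the Actor's
# input schema before using.
WINDOWS = ["r86400", "r604800", "r2592000"]

seen_ids: set[str] = set()
unique_jobs: list[dict] = []

for window in WINDOWS:
    run = client.actor("<ACTOR_ID>").call(run_input={
        "title": "Cloud Architect",
        "location": "Germany",
        "publishedAt": window,
        "rows": 1000,
        "proxy": {"useApifyProxy": True, "apifyProxyGroups": ["RESIDENTIAL"]},
    })
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        # The thread refers to a unique job_id; the exact field name in the
        # dataset may differ (e.g. "id"), so adjust as needed.
        job_id = item.get("id")
        if job_id and job_id not in seen_ids:
            seen_ids.add(job_id)
            unique_jobs.append(item)

print(f"{len(unique_jobs)} unique jobs across {len(WINDOWS)} windows")
```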
Let me know if this helps, and feel free to reach out with any further questions!
Best,
Hi, Thank you for clarifying this.
- As I understand it, after a 429 status code there is a retry with a different IP.
- Got it.
- Got it.
- You mean the exclusion of duplicate job_ids in post-processing (the data is retrieved, concatenated, then duplicates are excluded), right? This is a good hint. Thank you.
Regards,
Exactly! After a 429 Status Code, the system retries with a different IP to access the URL.
And yes, I meant the exclusion of duplicate job_ids during post-processing. Once the data is retrieved and concatenated, you can filter out duplicates based on the unique job_ids. This way, you ensure you're only working with unique job listings.
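For excluding results already seen in earlier runs (the original goal), a small illustrative sketch that persists seen job IDs to a local file between runs; the file name and the "id" field are assumptions:

```python
import json
from pathlib import Path

SEEN_FILE = Path("seen_job_ids.json")  # illustrative local store

def load_seen() -> set[str]:
    """Load job IDs recorded by earlier runs, if any."""
    return set(json.loads(SEEN_FILE.read_text())) if SEEN_FILE.exists() else set()

def filter_new(items: list[dict]) -> list[dict]:
    """Keep only jobs whose ID was not seen before, then persist the set."""
    seen = load_seen()
    new_items = [it for it in items if it.get("id") not in seen]
    seen.update(it["id"] for it in new_items if it.get("id"))
    SEEN_FILE.write_text(json.dumps(sorted(seen)))
    return new_items

# Usage: fresh = filter_new(all_items_from_latest_run)
```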
Glad I could help!
I'll close this issue, but feel free to respond or open another one if you encounter any other problems.
Regards,