Welcome To The Jungle Jobs Scraper

Pricing

from $1.00 / 1,000 results

Extract job listings effortlessly with the Welcome To The Jungle Jobs Scraper. This lightweight actor is built for fast, efficient data extraction from Welcome to the Jungle. For optimal stability and to avoid blocking, residential proxies are highly recommended. Start scraping today!

Rating: 4.5 (4 ratings)

Developer: Shahid Irfan (Maintained by Community)

Actor stats

  • Bookmarked: 4
  • Total users: 37
  • Monthly active users: 17
  • Last modified: 7 days ago

Extract, collect, and monitor job listings from Welcome to the Jungle at scale. Gather structured job data including titles, companies, locations, contract types, and optional full descriptions. Built for fast research, hiring intelligence, and recurring job-market tracking workflows.

Features

  • Fast job extraction — Collect listings quickly for immediate analysis.
  • Smart reliability fallback — Continues collecting data even when source conditions change.
  • Flexible filtering — Narrow results by keyword, country code, contract type, and remote options.
  • Optional detail enrichment — Add full job descriptions and additional metadata when needed.
  • Deduplicated output — Keeps dataset clean by avoiding duplicate job records.
  • Ready-to-use exports — Download results in multiple formats for reporting and automation.
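The deduplication mentioned above is typically keyed on a stable identifier such as the `job_id` field in the output schema. A minimal sketch of the idea in Python (the `dedupe_jobs` helper is illustrative, not part of the actor):

```python
def dedupe_jobs(jobs):
    """Keep the first occurrence of each job, keyed on job_id."""
    seen = set()
    unique = []
    for job in jobs:
        key = job.get("job_id") or job.get("url")  # fall back to URL if no id
        if key in seen:
            continue
        seen.add(key)
        unique.append(job)
    return unique

jobs = [
    {"job_id": "a1", "title": "Backend Engineer"},
    {"job_id": "a1", "title": "Backend Engineer"},  # duplicate record
    {"job_id": "b2", "title": "Data Scientist"},
]
print(len(dedupe_jobs(jobs)))  # 2
```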

Use Cases

Recruitment Intelligence

Track hiring activity by role and region to understand where companies are investing. Build recurring snapshots for trend analysis and planning.

Job Board Aggregation

Collect targeted listings for niche job boards or internal talent portals. Keep your listings fresh with scheduled runs and consistent output.

Labor Market Research

Analyze role distribution, remote policies, and contract patterns across countries. Use structured data to support market reports and strategic decisions.

Career Opportunity Monitoring

Create filtered datasets for specific job titles and locations. Power custom alerts and opportunity dashboards for teams or communities.


Input Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `keyword` | String | No | `""` | Job title, skill, or search keyword. |
| `location` | String | No | `""` | Two-letter country code such as `US`, `GB`, `FR`, or `DE`. |
| `contract_type` | Array[String] | No | `[]` | Contract filters: `full_time`, `part_time`, `internship`, `apprenticeship`, `freelance`, `fixed_term`. |
| `remote` | Array[String] | No | `[]` | Remote filters: `fulltime`, `partial`, `punctual`, `no`. |
| `collectDetails` | Boolean | No | `false` | When enabled, adds extended job description fields. |
| `results_wanted` | Integer | No | `20` | Maximum number of job listings to collect. |
| `max_pages` | Integer | No | `5` | Maximum result pages to process in one run. |
| `proxyConfiguration` | Object | No | `{ "useApifyProxy": false }` | Proxy settings for reliability and access control. |
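Run inputs can also be assembled programmatically. The sketch below merges overrides onto the documented defaults; the parameter names and defaults come from the table above, while the `build_input` helper itself is our own illustration:

```python
# Defaults taken from the input-parameter table above.
DEFAULTS = {
    "keyword": "",
    "location": "",
    "contract_type": [],
    "remote": [],
    "collectDetails": False,
    "results_wanted": 20,
    "max_pages": 5,
    "proxyConfiguration": {"useApifyProxy": False},
}

def build_input(**overrides):
    """Merge user overrides onto the defaults, rejecting unknown keys."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

run_input = build_input(keyword="software engineer", location="US")
```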

Output Data

Each dataset item can include the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `job_id` | String | Stable identifier for the job record. |
| `title` | String | Job title. |
| `company` | String | Company name. |
| `company_slug` | String | Company slug when available. |
| `location` | String | Job location text. |
| `country` | String | Country value when available. |
| `contract_type` | String | Contract type value. |
| `remote` | String | Remote-work value. |
| `salary` | String | Salary range or amount when available. |
| `date_posted` | String | Posting date/time when available. |
| `url` | String | Direct URL of the job listing. |
| `tags` | Array[String] | Job-related tags or categories. |
| `description_html` | String | HTML description (when detail collection is enabled). |
| `description_text` | String | Clean text description (when detail collection is enabled). |
| `employment_type` | String | Employment type from the detail page when available. |
| `_source` | String | Collection path marker for traceability. |
| `_fetched_at` | String | ISO timestamp of data collection. |
| `_page` | Integer | Source page number when available. |
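Once collected, these fields are straightforward to summarize. A small sketch that counts listings per `contract_type` and finds the most recent `_fetched_at` timestamp (the `summarize` helper and the sample items are ours, for illustration):

```python
from collections import Counter
from datetime import datetime

def summarize(items):
    """Count listings per contract_type and return the latest fetch time."""
    counts = Counter(item.get("contract_type", "unknown") for item in items)
    fetched = [
        # _fetched_at is an ISO 8601 string; map trailing 'Z' to a UTC offset.
        datetime.fromisoformat(item["_fetched_at"].replace("Z", "+00:00"))
        for item in items
        if "_fetched_at" in item
    ]
    return counts, max(fetched, default=None)

items = [
    {"contract_type": "full_time", "_fetched_at": "2026-02-18T08:21:13.452Z"},
    {"contract_type": "full_time"},
    {"contract_type": "internship", "_fetched_at": "2026-02-18T08:25:00.000Z"},
]
counts, latest = summarize(items)
```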

Usage Examples

Basic Extraction

Collect a small dataset for a quick check:

```json
{
  "keyword": "software engineer",
  "location": "US",
  "results_wanted": 20,
  "max_pages": 5,
  "collectDetails": false
}
```

Remote-First Roles

Find remote-friendly product jobs in the United Kingdom:

```json
{
  "keyword": "product manager",
  "location": "GB",
  "remote": ["fulltime", "partial"],
  "results_wanted": 50,
  "max_pages": 8,
  "collectDetails": false
}
```

Deep Research Dataset

Build a richer dataset with full descriptions:

```json
{
  "keyword": "data scientist",
  "location": "FR",
  "contract_type": ["full_time", "fixed_term"],
  "results_wanted": 80,
  "max_pages": 12,
  "collectDetails": true,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```

Sample Output

```json
{
  "job_id": "senior-data-scientist-abc123",
  "title": "Senior Data Scientist",
  "company": "TechVision",
  "company_slug": "techvision",
  "location": "Paris, Ile-de-France, France",
  "country": "France",
  "contract_type": "full_time",
  "remote": "partial",
  "salary": "65000-85000 EUR",
  "date_posted": "2026-02-10T09:15:00.000Z",
  "url": "https://www.welcometothejungle.com/en/companies/techvision/jobs/senior-data-scientist-abc123",
  "tags": ["Data", "Machine Learning", "Python"],
  "description_text": "You will build and deploy machine learning solutions...",
  "_source": "algolia",
  "_fetched_at": "2026-02-18T08:21:13.452Z"
}
```
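Note that `salary` arrives as free text. If you need numeric bounds downstream, a tolerant parser along these lines can help; the format assumption (`min-max CUR`) is ours, based only on the sample above, so unrecognized strings fall through to `None`:

```python
import re

def parse_salary(salary):
    """Extract (min, max, currency) from strings like '65000-85000 EUR'.

    Returns None when the format is unrecognized; salary text can vary.
    """
    if not salary:
        return None
    match = re.match(r"\s*(\d+)\s*-\s*(\d+)\s*([A-Z]{3})\s*$", salary)
    if not match:
        return None
    low, high, currency = match.groups()
    return int(low), int(high), currency

print(parse_salary("65000-85000 EUR"))  # (65000, 85000, 'EUR')
```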

Tips for Best Results

Start with a Focused Query

  • Use specific role names like backend engineer instead of broad terms.
  • Add location filters early to reduce irrelevant records.

Scale Progressively

  • Test with results_wanted: 20 first.
  • Increase limits after validating output quality for your use case.

Use Detail Enrichment Strategically

  • Keep collectDetails off for fast monitoring runs.
  • Enable it for monthly deep-dive reporting and analysis.

Proxy Configuration

For high reliability, especially on larger runs:

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```

Integrations

Connect your data with:

  • Google Sheets — Build shared hiring dashboards.
  • Airtable — Store and filter job intelligence in custom views.
  • Slack — Send role-specific updates to hiring channels.
  • Make — Automate enrichment and routing workflows.
  • Zapier — Trigger downstream actions without code.
  • Webhooks — Push fresh data into your own services.

Export Formats

  • JSON — API pipelines and custom applications.
  • CSV — Spreadsheet analysis and reporting.
  • Excel — Business-ready reporting packs.
  • XML — Legacy system integrations.
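If you fetch the dataset as JSON and still want a spreadsheet, the conversion takes only a few lines of standard-library Python. The column selection below follows the output schema; the `jobs_to_csv` helper itself is our sketch, not part of the actor:

```python
import csv
import io

def jobs_to_csv(items, fields=("title", "company", "location", "contract_type", "url")):
    """Render dataset items as CSV text, keeping only the chosen columns."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(fields), extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)  # extra keys ignored, missing keys left blank
    return buffer.getvalue()

items = [{
    "title": "Senior Data Scientist",
    "company": "TechVision",
    "location": "Paris, Ile-de-France, France",
    "contract_type": "full_time",
    "url": "https://www.welcometothejungle.com/en/companies/techvision/jobs/senior-data-scientist-abc123",
    "tags": ["Data"],  # ignored: not in the selected columns
}]
csv_text = jobs_to_csv(items)
```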

Frequently Asked Questions

How many jobs can I collect in one run?

Set results_wanted to your target volume. Start small for testing, then scale based on runtime and data needs.

What happens if I leave all filters empty?

The actor collects broad job results from the default scope, limited by results_wanted and max_pages.

Can I collect full job descriptions?

Yes. Set collectDetails to true to enrich each listing with detailed description fields.

Is the output deduplicated?

Yes. Duplicate job entries are filtered during collection to keep the dataset clean.

Which location format should I use?

Use two-letter country codes such as US, GB, FR, or DE.
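Since the actor expects two-letter codes, a quick format check before launching a run can catch mistakes like passing a full country name; this small validator is our own illustration (it checks shape only, not whether the code is a real ISO 3166 entry):

```python
def normalize_country(code):
    """Uppercase a location value and verify it looks like a 2-letter code."""
    code = code.strip().upper()
    if len(code) != 2 or not code.isalpha():
        raise ValueError(f"Expected a two-letter country code, got {code!r}")
    return code

print(normalize_country("fr"))  # FR
```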

What if I run with empty input?

If no input is provided, the actor automatically falls back to INPUT.json values. If that file is unavailable, built-in defaults are used.


Support

For issues or feature requests, contact support through the Apify Console.

This actor is intended for legitimate data collection. You are responsible for complying with applicable laws, regulations, and website terms. Use collected data responsibly.