Upwork Scraper - Freelance Job Listings with Client Intel

Scrape Upwork jobs with full client intelligence — country, total spent, payment-verified status, rating, reviews, and exact applicant count. 14 filters and an incremental mode that emits only new or changed listings across runs.

Pricing: from $1.00 / 1,000 results

Developer: Black Falcon Data (Maintained by Community)

Actor stats: 1 bookmarked · 29 total users · 18 monthly active users · last modified 22 minutes ago

What does Upwork Scraper do?

Upwork Scraper extracts structured job data from upwork.com with the full client-quality panel — client country, total spent, payment-verified status, rating, review count — plus the exact number of applicants per job (not tier buckets) and engagement metadata. It works out of the box with no cookies or login. Paste an Upwork search URL or configure 14 filters directly. Incremental mode emits only new or changed listings on recurring runs.

Key features

  • Client intelligence on every job — country, lifetime spend, payment-verified flag, rating, review count, and financial-privacy signal, so you can qualify leads before spending a single Connect.
  • Exact applicant count — precise integer per listing, not tier-buckets like "20-49". Combine with the proposals filter to find low-competition jobs the moment they're posted.
  • 14 filters out of the box — category, country or region, fixed-price budget range, hourly-rate range, project duration, payment-verified-only, proposals-range, contract-to-hire, experience level, workload, client-hires history, job type, sort order, and keyword.
  • Client-quality gates — minimum client total spent, minimum rating, minimum review count as one-click filters. Layer them on top of the custom-filter DSL (includes / equals / gte / lt operators on any of 30+ fields).
  • Incremental mode — recurring runs emit and charge only for listings that are new or whose tracked content changed. Paste a stateKey, run daily, pay for the diff.
  • Repost detection — content-hash matching flags reposts of previously expired listings across runs. Filter them out with skipReposts: true.
  • Paste a search URL — searchUrl accepts any upwork.com/nx/search/jobs/?q=... URL; all query params auto-parse into filters. Explicit input fields override URL values.
  • Composite job-quality score (customJobScore) — a 0–5 rating computed transparently from payment verification, client rating, review reliability, and lifetime spend. Sort or threshold by it.
  • Rich post-fetch filters — include / exclude keywords with independent title / description / skills toggles, publish-time fromDate / toDate, maxAgeMinutes for real-time alerts, and region-grouped location filters (Europe, North America, English-speaking, etc.).
  • Detail enrichment (optional addon) — supply your own Upwork session cookie to unlock ~30 extra fields: client timezone, city, industry, company size, total hires, hire-rate, activity panel, screening questions, attachments, work history with bi-directional feedback, allowed countries, AI-generated-description flag, and more.
  • Notifications built in — Telegram, Discord, Slack, WhatsApp, and generic webhook support. Fire on new matching jobs without setting up a separate pipeline. Pairs with incrementalMode so you never get the same alert twice.
  • LLM-ready — compact mode trims to core fields for MCP / agent context. descriptionMaxLength caps long briefs so they fit in a prompt window.
  • Country normalisation — all clientCountry values returned as canonical names, plus a separate clientCountryCode ISO-3 field for cross-source joins.
  • Zero setup — no cookies, logins, or manual configuration required. The optional detail-enrichment addon can unlock ~30 extra fields when you supply your own Upwork session.
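To make the custom-filter DSL concrete, here is a minimal sketch of how includes / equals / gte / lt conditions could be evaluated against a job record. It illustrates the concept only; the condition shape and field names shown are assumptions, not the actor's internal code.

```python
def matches(record, conditions):
    """Return True if a job record satisfies every condition.

    Operators mirror the ones named above: includes, equals, gte, lt.
    Missing fields read as None and fail numeric comparisons.
    """
    ops = {
        "includes": lambda actual, expected: expected in (actual or ""),
        "equals":   lambda actual, expected: actual == expected,
        "gte":      lambda actual, expected: actual is not None and actual >= expected,
        "lt":       lambda actual, expected: actual is not None and actual < expected,
    }
    return all(
        ops[c["op"]](record.get(c["field"]), c["value"]) for c in conditions
    )

job = {"title": "Python scraper", "clientTotalSpent": 12000, "clientRating": 4.9}
rules = [
    {"field": "title", "op": "includes", "value": "Python"},
    {"field": "clientTotalSpent", "op": "gte", "value": 10000},
]
print(matches(job, rules))  # True
```

Stacking the one-click client-quality gates on top of rules like these is just adding more gte conditions to the list.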

What data can you extract from upwork.com?

Each result includes core listing fields (jobId, title, jobType, experienceLevel, budgetAmount, budgetCurrency, hourlyBudgetMin, hourlyBudgetMax, and more) and detail fields when enrichment is enabled (description, descriptionHtml, descriptionMarkdown, and skillsDetailed). In standard mode, all fields are always present — unavailable data points are returned as null, never omitted. In compact mode, only core fields are returned.

Input

The main inputs are a search keyword, an optional location filter, and a result limit. Additional filters and options are available in the input schema.

Key parameters:

  • query — Job search keywords. Leave empty to browse all jobs.
  • searchUrl — Paste a full Upwork search URL (https://www.upwork.com/nx/search/jobs/?q=...). Filters in the URL are auto-parsed; explicitly set input fields override URL values.
  • jobType — Filter by payment type.
  • experienceLevel — Filter by required experience.
  • workload — Filter by time commitment.
  • sort — How to sort results. (default: "recency")
  • category — Upwork category name (e.g. "Web Development") or category2 UID, or an array of either.
  • location — Countries, regions, or structured entries. Accepts: country name ("United States"), ISO code ("US"), region name ("europe", "north_america", "english_speaking"), or {"type": "COUNTRY"|"REGION", "value": "..."}.
  • excludeLocations — Countries or regions to exclude. Same format as location.
  • budget — Filter fixed-price jobs by amount range (USD).
  • hourlyRate — Filter hourly jobs by rate (USD/hr). Format: "min-max", open-ended with trailing dash ("50-").
  • duration — Filter by expected length.
  • ...and 28 more parameters
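The hourlyRate format ("min-max", optionally open-ended with a trailing dash) can be parsed as follows. This is a sketch of the string convention, not code from the actor:

```python
def parse_rate_range(spec):
    """Parse an hourlyRate string such as "25-60" or "50-" into
    (min, max) floats, with None for an open bound."""
    lo, _, hi = spec.partition("-")
    return (float(lo) if lo else None, float(hi) if hi else None)

print(parse_rate_range("25-60"))  # (25.0, 60.0)
print(parse_rate_range("50-"))   # (50.0, None)
```

The budget range for fixed-price jobs follows the same min/max shape.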

Input examples

Basic search — Keyword-driven search with a result cap.

→ Full payload per result — all standard fields populated where the source provides them.

{
  "query": "python developer",
  "maxResults": 50
}

Filtered search — narrow results with advanced filters; only matching jobs are returned.

→ Same field set as basic search; fewer, more relevant rows.

{
  "query": "python developer",
  "jobType": "hourly",
  "experienceLevel": "EntryLevel",
  "category": ["Web Development"],
  "maxResults": 100
}

Incremental tracking — emit only jobs that are new or changed since the previous run with the same stateKey.

→ First run builds the baseline state. Subsequent runs emit only records that are new or whose tracked content changed. Set emitUnchanged: true to include unchanged records as well.

{
  "query": "python developer",
  "maxResults": 200,
  "incrementalMode": true,
  "stateKey": "python-developer-tracker"
}

Compact filtered output — Combine filters with compact mode for a lightweight AI-agent or MCP data source.

→ Core fields only — ideal for piping into LLMs or downstream tools without token overhead.

{
  "query": "python developer",
  "jobType": "hourly",
  "experienceLevel": "EntryLevel",
  "maxResults": 50,
  "compact": true
}
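Conceptually, compact mode plus descriptionMaxLength behaves like the sketch below. The exact CORE_FIELDS set here is an assumption for illustration; see the actor's output schema for the authoritative list:

```python
# Illustrative subset of core fields; the actor's real set may differ.
CORE_FIELDS = {"jobId", "title", "jobType", "experienceLevel", "budgetAmount",
               "hourlyBudgetMin", "hourlyBudgetMax", "publishTime", "totalApplicants"}

def compact_record(record, description_max_length=None):
    """Keep only core fields and optionally cap the description,
    mirroring the compact and descriptionMaxLength input options."""
    out = {k: v for k, v in record.items() if k in CORE_FIELDS}
    if description_max_length is not None and record.get("description"):
        out["description"] = record["description"][:description_max_length]
    return out

full = {"jobId": "1", "title": "Data pipeline", "jobType": "HOURLY",
        "description": "A" * 5000, "skillsDetailed": [{"uid": "1"}]}
slim = compact_record(full, description_max_length=200)
print(sorted(slim))  # ['description', 'jobId', 'jobType', 'title']
```

Dropping heavy fields like skillsDetailed and capping the brief is what keeps per-record token cost low for LLM pipelines.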

Output

Each run produces a dataset of structured job records. Results can be downloaded as JSON, CSV, or Excel from the Dataset tab in Apify Console.

Example job record

{
  "jobId": "2047620102105297516",
  "title": "Full Stack Software Engineer Needed for Contract Work",
  "description": "Contract Software Engineer (AI Applications / High-Velocity Builder)\nLocation: Remote\nCompensation: $40–100/hour (contract)\nType: Project-based (20–60 hrs/week)\n\nAbout Us\nWe are a stealth AI company b...",
  "descriptionHtml": null,
  "descriptionMarkdown": "Contract Software Engineer (AI Applications / High-Velocity Builder)\nLocation: Remote\nCompensation: $40–100/hour (contract)\nType: Project-based (20–60 hrs/week)\n\nAbout Us\nWe are a stealth AI company b...",
  "contentHash": "f6a9bbdcc53a7a571effa9c32e1d9008bea5c96d766c0d881b0818711568177a",
  "jobType": "HOURLY",
  "experienceLevel": "ExpertLevel",
  "budgetAmount": null,
  "budgetCurrency": null,
  "hourlyBudgetMin": 40,
  "hourlyBudgetMax": 100,
  "weeklyRetainerBudget": null,
  "engagementType": "FULL_TIME",
  "engagementDuration": "3 to 6 months",
  "engagementDurationWeeks": 18,
  "skills": ["JavaScript", "React", "Python", "API", "Web Development"],
  "skillsDetailed": [
    { "uid": "996364628025274383", "parentSkillUid": null, "name": "JavaScript", "highlighted": false },
    { "uid": "1031626773660942336", "parentSkillUid": null, "name": "React", "highlighted": false },
    { "uid": "996364628025274386", "parentSkillUid": null, "name": "Python", "highlighted": false },
    { "uid": "1110580482322976768", "parentSkillUid": null, "name": "API", "highlighted": false },
    { "uid": "1031626795211276288", "parentSkillUid": null, "name": "Web Development", "highlighted": false }
  ],
  "publishTime": "2026-04-24T10:15:12.253Z",
  "createTime": "2026-04-24T10:14:10.307Z",
  "sourcingTimestamp": null,
  "totalApplicants": 58,
  "personsToHire": 50,
  "enterpriseJob": false,
  "premium": false
}

Incremental fields

When incrementalMode: true, each record also carries:

  • changeType — one of NEW, UPDATED, UNCHANGED, REAPPEARED, EXPIRED.
  • firstSeenAt, lastSeenAt — ISO-8601 timestamps tracking the listing across runs.
  • isRepost, repostOfId, repostDetectedAt — populated when a new listing matches the tracked content of a previously expired one. Set skipReposts: true to drop detected reposts from the output.
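The change classification can be pictured as a comparison of each record's contentHash against state saved under the stateKey on the previous run. A simplified sketch (the real actor also tracks expiry and repost matching):

```python
def classify(record, previous_state):
    """Classify a record as NEW, UPDATED, or UNCHANGED by comparing
    its contentHash with the hash stored for the same jobId last run."""
    prev_hash = previous_state.get(record["jobId"])
    if prev_hash is None:
        return "NEW"
    return "UNCHANGED" if prev_hash == record["contentHash"] else "UPDATED"

state = {"2047620102105297516": "f6a9bb..."}
seen_again = {"jobId": "2047620102105297516", "contentHash": "f6a9bb..."}
print(classify(seen_again, state))  # UNCHANGED
```

Hashing tracked content rather than comparing raw text is also what lets repost detection match a new listing against a previously expired one.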

How to scrape upwork.com

  1. Go to Upwork Scraper in Apify Console.
  2. Enter a search keyword and optional location filter.
  3. Set maxResults to control how many results you need.
  4. Click Start and wait for the run to finish.
  5. Export the dataset as JSON, CSV, or Excel.

Use cases

  • Extract job data from upwork.com for market research and competitive analysis.
  • Monitor new and changed listings on scheduled runs without processing the full dataset every time.
  • Feed structured data into AI agents, MCP tools, and automated pipelines using compact mode.
  • Export clean, structured data to dashboards, spreadsheets, or data warehouses.
  • Analyze skill demand across listings using structured skill tags.
  • Collect ratings and reviews for reputation monitoring and benchmarking.

How much does it cost to scrape upwork.com?

Upwork Scraper uses pay-per-event pricing. You pay a small fee when the run starts and then for each result that is actually produced.

  • Run start: $0.001 per run
  • Per result: $0.001 per job record

Example costs:

  • 10 results: $0.01
  • 100 results: $0.10
  • 500 results: $0.50
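These figures follow directly from the two events. A quick sanity check, including the $0.001 run-start fee that the rounded examples above fold in:

```python
RUN_START_FEE = 0.001   # charged once when the run starts
PER_RESULT_FEE = 0.001  # charged per job record produced

def run_cost(results):
    """Total charge for one run: start fee plus one event per result."""
    return RUN_START_FEE + results * PER_RESULT_FEE

print(f"${run_cost(500):.3f}")  # $0.501
```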

Example: recurring monitoring savings

These examples compare full re-scrapes with incremental runs at different churn rates. Churn is the share of listings that are new or whose tracked content changed since the previous run. Actual churn depends on your query breadth, source activity, and polling frequency — the scenarios below are examples, not predictions.

Example setup: 200 results per run, daily polling (30 runs/month). Event-pricing examples scale linearly with result count.

Numbers below are for the primary Result event. Other events, such as the Detail Enrichment addon, are billed separately when they fire and follow the same incremental logic: when the underlying record has not changed, no charge is emitted.

Churn rate | Full re-scrape run cost | Incremental run cost | Savings vs full re-scrape | Monthly cost after baseline
5% — stable niche query | $0.20 | $0.01 | $0.19 (95%) | $0.33
15% — moderate broad query | $0.20 | $0.03 | $0.17 (85%) | $0.93
30% — high-volume aggregator | $0.20 | $0.06 | $0.14 (70%) | $1.83

Full re-scrape monthly cost at daily polling: $6.03. First month with incremental costs $0.52 / $1.10 / $1.97 for the 5% / 15% / 30% scenarios because the first run builds baseline state at full cost before incremental savings apply.

FAQ

How many results can I get from upwork.com?

The number of results depends on the search query and available listings on upwork.com. Use the maxResults parameter to control how many results are returned per run.

Does Upwork Scraper support recurring monitoring?

Yes. Enable incremental mode to only receive new or changed listings on subsequent runs. This is ideal for scheduled monitoring where you want to track changes over time without re-processing the full dataset.

Can I integrate Upwork Scraper with other apps?

Yes. Upwork Scraper has built-in notification channels — Telegram, Discord, Slack, WhatsApp (Meta Cloud API), and a generic JSON webhook (for n8n, Make, Zapier, or any custom HTTP backend). Configure one or more channels in the input and notifications fire at the end of every run. Combine with incrementalMode and notifyOnlyChanges to receive alerts only for new or updated listings. It also works with Apify's integrations to push data to Google Sheets, data warehouses, and more.

Can I use Upwork Scraper with the Apify API?

Yes. You can start runs, manage inputs, and retrieve results programmatically through the Apify API. Client libraries are available for JavaScript, Python, and other languages.
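As a rough illustration, a run can also be started over the raw REST API with Python's standard library alone. The actor ID and token below are placeholders, and run-sync-get-dataset-items is Apify's synchronous endpoint that returns dataset items directly; the official client libraries wrap this for you:

```python
import json
import urllib.request

API_BASE = "https://api.apify.com/v2"

def build_run_request(actor_id, token, run_input):
    """Build a POST request that starts a run and returns its dataset
    items synchronously (Apify's run-sync-get-dataset-items endpoint)."""
    url = f"{API_BASE}/acts/{actor_id}/run-sync-get-dataset-items?token={token}"
    data = json.dumps(run_input).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}, method="POST"
    )

req = build_run_request(
    "owner~actor-name",  # placeholder actor ID
    "<YOUR_TOKEN>",      # placeholder API token
    {"query": "python developer", "maxResults": 10},
)
# urllib.request.urlopen(req) would execute the run (needs a valid token).
print(req.full_url)
```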

Can I use Upwork Scraper through an MCP Server?

Yes. Apify provides an MCP Server that lets AI assistants and agents call this actor directly. Use compact mode and descriptionMaxLength to keep payloads manageable for LLM context windows.

This actor extracts publicly available data from upwork.com. Web scraping of public information is generally considered legal, but you should always review the target site's terms of service and ensure your use case complies with applicable laws and regulations, including GDPR where relevant.

Your feedback

If you have questions, need a feature, or found a bug, please open an issue on the actor's page in Apify Console. Your feedback helps us improve.

You might also like

Getting started with Apify

New to Apify? Create a free account with $5 credit — no credit card required.

  1. Sign up — $5 platform credit included
  2. Open this actor and configure your input
  3. Click Start — export results as JSON, CSV, or Excel

Need more later? See Apify pricing.