# Upwork Scraper
Scrape Upwork job listings at scale. Extract titles, descriptions, budgets, skills, client info, and more from search results or individual job pages. Supports multiple modes: fast search scraping, detailed modal extraction, and authenticated scraping for enhanced data.
## What does Upwork Scraper do?
This Actor scrapes Upwork job listings using real browser automation with Cloudflare bypass. It extracts structured job data and outputs it to the Apify Dataset in JSON format.
Use it to monitor new job postings, analyze market trends, track client activity, or build custom job feeds.
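A typical run supplies a JSON input and reads results from the run's default dataset. The sketch below assembles an illustrative input dict in Python; the field names (`searchQuery`, `mode`, `maxPages`) and the actor ID are assumptions, not the Actor's real schema, so check the Input tab for the actual names.

```python
def build_run_input(query: str, mode: str = "lite", max_pages: int = 1) -> dict:
    """Assemble a run input (field names are illustrative, not the real schema)."""
    return {
        "searchQuery": query,   # keywords, e.g. "React developer"
        "mode": mode,           # lite | deep | lite_login | deep_login | post | agency
        "maxPages": max_pages,  # each search page yields up to 50 jobs
    }

# With the apify-client package, a dict like this would be passed as run_input to
# client.actor("<username>/upwork-scraper").call(...), and results read from the
# run's default dataset.
run_input = build_run_input("Docker", max_pages=2)
```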
## Scraping modes
| Mode | Speed | Data depth | Description |
|---|---|---|---|
| Lite | Fast | Comprehensive | Extracts all job data from search pages. Best for most use cases. |
| Deep | Slow | Maximum | Like Lite, but also opens job modals for extra details (activity metrics, full descriptions). |
| Lite Login | Fast | Enhanced | Logs in first, then runs Lite mode. Unlocks client scores and spending data. |
| Deep Login | Slow | Maximum + Enhanced | Logs in first, then runs Deep mode. Most complete data possible. |
| Post | Per-job | Full detail | Scrapes individual job pages by URL. Most comprehensive for specific jobs. |
| Agency | Medium | Agency profiles | Scrapes Top Rated Plus agency profiles from talent search pages. |
### Which mode should I choose?
- Start with Lite -- it's fast and covers most fields. One page returns up to 50 jobs.
- Use Deep when you need activity metrics (proposals, interviewing count, invites) or full descriptions.
- Use Login modes when you need client spending amounts and star ratings.
- Use Post when you have specific job URLs and need maximum detail.
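The decision rules above can be encoded as a small helper. The mode identifier strings here are illustrative placeholders, not confirmed values from the Actor's input schema:

```python
def choose_mode(need_activity_metrics: bool = False,
                need_client_spending: bool = False,
                have_job_urls: bool = False) -> str:
    """Pick a scraping mode from the rules above (mode names are illustrative)."""
    if have_job_urls:
        return "post"        # specific job URLs -> maximum per-job detail
    if need_activity_metrics and need_client_spending:
        return "deep_login"  # slowest, but the most complete data
    if need_activity_metrics:
        return "deep"        # proposals, interviewing count, invites
    if need_client_spending:
        return "lite_login"  # client scores and spending, still fast
    return "lite"            # fast default, covers most fields
```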
## Search filters
When using Lite, Deep, or Login modes, you can filter results using the built-in input fields:
- Search Query -- keywords like "React developer" or "Docker"
- Experience Level -- Entry, Intermediate, Expert (multi-select)
- Hourly Rate -- min/max range (filters for hourly jobs only)
- Fixed Price -- min/max range (filters for fixed-price jobs only)
- Client Hires -- minimum number of past client hires
- Project Duration -- week, month, 1-6 months, ongoing (multi-select)
- Hours per Week -- part-time or full-time (multi-select)
- Contract to Hire -- only jobs that may convert to full-time
Note: Hourly rate and fixed price filters are mutually exclusive. If both are set, hourly rate takes priority.
## Output data
Each job is stored as one item in the Dataset. Fields are only included when data is available.
### Core fields (all modes)

| Field | Type | Example |
|---|---|---|
| title | string | "SAP Integration Suite Specialist" |
| url | string | "https://www.upwork.com/jobs/..." |
| job_id | string | "1944659549969744966" |
| description_text | string | Full job description |
| skills | array | ["Python", "Docker", "AWS"] |
| publish_time | string | "2025-06-16T15:12:59.652Z" |
| experience_level | string | "Expert" |
| project_length | string | "1-3 months" |
| hours_per_week | string | "30+ hrs/week" |
### Budget fields

| Field | Type | Description |
|---|---|---|
| budget_hourly_min_usd | number | Min hourly rate (USD) |
| budget_hourly_max_usd | number | Max hourly rate (USD) |
| budget_total_usd | number | Fixed-price budget (USD) |
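Since budget fields are only present when available, and a job is either hourly or fixed-price, consumers should branch on which fields exist. A minimal sketch:

```python
def budget_summary(item: dict) -> str:
    """Render a job's budget; budget fields are optional, so check before reading."""
    if "budget_total_usd" in item:
        return f"fixed ${item['budget_total_usd']}"
    if "budget_hourly_min_usd" in item or "budget_hourly_max_usd" in item:
        lo = item.get("budget_hourly_min_usd", "?")
        hi = item.get("budget_hourly_max_usd", "?")
        return f"hourly ${lo}-${hi}"
    return "budget not listed"
```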
### Client fields (Deep and Login modes)

| Field | Type | Example |
|---|---|---|
| client_country | string | "United States" |
| client_city | string | "New York" |
| client_score | number | 4.91 |
| client_feedback_count | integer | 498 |
| client_spent | integer | 270000 (USD) |
| client_total_hires | integer | 979 |
| client_active_hires | integer | 331 |
| client_company_size | string | "Small company (2-9 people)" |
| client_member_since | string | "Aug 15, 2017" |
### Activity fields (Deep modes)

| Field | Type | Example |
|---|---|---|
| proposals | string | "Less than 5" |
| interviewing | string | "0" |
| invites_sent | integer | 4 |
| last_viewed | string | "27 seconds ago" |
| payment_verified | string | "Payment method verified" |
### Classification fields

| Field | Type | Example |
|---|---|---|
| category_name | string | "Market Research & Product Reviews" |
| category_group_name | string | "Admin Support" |
| total_jobs_with_hires | integer | 464 |
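Because client and activity fields appear only in Deep/Login runs, and only when data is available, downstream filters should use defensive lookups. An illustrative filter (the thresholds are arbitrary, not recommendations):

```python
def is_promising_client(item: dict,
                        min_score: float = 4.5,
                        min_spent: int = 10_000) -> bool:
    """Keep jobs whose client looks established; missing fields fail the check."""
    return (item.get("client_score", 0.0) >= min_score
            and item.get("client_spent", 0) >= min_spent
            and item.get("payment_verified") == "Payment method verified")
```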
## Proxy requirements
Upwork blocks datacenter IPs. This Actor requires residential proxies to bypass Cloudflare. The default configuration uses Apify's residential proxy group, which works out of the box.
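In run-input JSON, requesting residential proxies usually follows the standard Apify `proxyConfiguration` convention shown below; this Actor's exact field name may differ, so verify against its input schema:

```python
def residential_proxy_input() -> dict:
    """Common Apify proxy-configuration block requesting the RESIDENTIAL group
    (assumed convention -- confirm the field name in this Actor's input schema)."""
    return {
        "proxyConfiguration": {
            "useApifyProxy": True,
            "apifyProxyGroups": ["RESIDENTIAL"],
        }
    }
```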
## Limits and performance
- Each search page returns up to 50 jobs
- Lite mode: ~50 jobs in about 60-90 seconds
- Deep mode: ~50 jobs in about 3-5 minutes (modal interactions add time)
- Default memory: 4096 MB | Default timeout: 3600 seconds
- Output limit is automatically capped at Max Pages x 50
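These numbers translate into a rough planning formula: output is capped at Max Pages x 50, and runtime scales with page count. The sketch below uses midpoints of the ranges above (75 s/page for Lite, 240 s/page for Deep), so treat the estimates as ballpark figures only:

```python
def estimate(max_pages: int, mode: str = "lite") -> dict:
    """Rough output cap and runtime estimate from the stated per-page figures."""
    per_page_seconds = {"lite": 75, "deep": 240}  # midpoints of 60-90 s and 3-5 min
    return {
        "max_jobs": max_pages * 50,  # output is capped at Max Pages x 50
        "est_seconds": max_pages * per_page_seconds.get(mode, 75),
    }
```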
## Tips
- Start with 1 page to test your filters before scaling up
- Use the Dataset export to get results as JSON, CSV, or Excel
- For recurring scrapes, save your configuration as a Task and set up a Schedule
- Login modes require valid Upwork credentials -- the Actor handles the full login flow automatically