
Upwork Job Post Scraper
This scraper takes individual Upwork job post URLs and extracts detailed information about each post (title, description, budget, etc.). It can optionally use a proxy with a specified country code to request the data.
Input
The input is a JSON object that must contain a `startUrls` array with at least one entry. It can also include an optional `proxyCountryCode`:

```json
{
  "startUrls": [
    {
      "url": "https://www.upwork.com/freelance-jobs/apply/WIX-Website-Template-Development-for-Virtual-Staging-SaaS_~021883994784056195740"
    }
  ],
  "proxyCountryCode": "FR"
}
```
Input Fields
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| startUrls | array | yes | An array of objects, each containing a url for a specific Upwork job post. | none |
| proxyCountryCode | string | no | A two-letter country code for the proxy (e.g., "US", "FR"), or "LOCAL" to disable the proxy. | "FR" |
- `startUrls`: Each URL should be the direct link to an Upwork job post.
- `proxyCountryCode`: Determines where the proxy server is located. If set to `"LOCAL"`, no remote proxy is used.
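
As a minimal sketch, the snippet below builds this input object from a list of job URLs; the URL shown is a placeholder:

```python
import json

# Direct links to the Upwork job posts you want to scrape (placeholder URL).
job_urls = [
    "https://www.upwork.com/freelance-jobs/apply/Example-Job_~0123456789",
]

# Assemble the input object expected by the scraper.
run_input = {
    "startUrls": [{"url": url} for url in job_urls],
    "proxyCountryCode": "FR",  # two-letter country code, or "LOCAL" to disable the proxy
}

print(json.dumps(run_input, indent=2))
```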
Output
The scraper outputs an array of job detail objects. Each job object may contain:
- title: Title of the job post.
- paymentType: Indicates if it’s an hourly or fixed-price job.
- paymentAmount: The proposed pay range or budget.
- description: Detailed text describing the job scope and requirements.
- jobPosted: When the job was posted (e.g., “3 days ago”).
- lastViewedByClient: The client’s latest activity timestamp.
- jobType: Usually “Remote Job” or region-limited.
- projectType: Complexity or nature of the project.
- estimatedHours: “Less than 30 hrs/week” or “More than 30 hrs/week.”
- duration: Project duration (e.g., “3-6 months”).
- experienceLevel: The experience level required (“Entry”, “Intermediate”, “Expert”).
- clientLocation: Where the client is located.
- clientTotalSpent: Total amount the client has spent on Upwork.
- clientTotalHires: How many freelancers the client has hired.
- clientActiveHires: How many of those hires are currently active.
- clientHoursBilled: The total hours the client has billed.
- clientMemberSince: The date the client joined Upwork.
- proposals: The range of proposals submitted (e.g., “5 to 10”).
- interviews: How many candidates are currently being interviewed.
- invitesSent: How many invitations the client sent out.
- skills: An array of extracted skills for the job.
- locationRestriction: Any geographical restriction applied to applicants (e.g., “Only freelancers located in the U.S. may apply.”).
- contractToHire: Boolean indicating whether it’s a “contract-to-hire” opportunity.
Example output:
[{"title": "Looking for a Startup Hubspot Sales Ops and CRM Manager","paymentType": "Hourly","paymentAmount": "$65.00 - $110.00","description": "We’re seeking a skilled and proactive freelance ...","jobPosted": "3 days ago","lastViewedByClient": "yesterday","jobType": "Remote Job","projectType": "Complex project","estimatedHours": "Less than 30 hrs/week","duration": "3-6 months","experienceLevel": "Intermediate","clientLocation": "United States","clientTotalSpent": "$154K","clientTotalHires": "30","clientActiveHires": "9","clientHoursBilled": "8,297 hours","clientMemberSince": "Apr 20, 2022","proposals": "20 to 50","interviews": "7","invitesSent": "5","skills": ["HubSpot","Marketing Automation","Sales Operations","Sales"],"locationRestriction": "Only freelancers located in the U.S. may apply.","contractToHire": true}]
Usage
- Provide the Input: When running on Apify, add your Upwork job post URLs in `startUrls` (and optionally adjust `proxyCountryCode`).
- Run the Scraper: The scraper navigates to each job URL, parses the HTML, and collects the specified data (see the sketch below).
- View Results: Once finished, the dataset will contain an array of job objects matching the above schema.
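
For a programmatic run, here is a minimal sketch using the Apify Python client; the API token, the actor ID ("<username>/upwork-post-details-scraper"), and the job URL are placeholders you would replace with your own values:

```python
from apify_client import ApifyClient

# Authenticate with your Apify API token (placeholder).
client = ApifyClient("<YOUR_APIFY_TOKEN>")

run_input = {
    "startUrls": [
        {"url": "https://www.upwork.com/freelance-jobs/apply/Example-Job_~0123456789"}
    ],
    "proxyCountryCode": "FR",
}

# Start the actor and wait for the run to finish (actor ID is a placeholder).
run = client.actor("<username>/upwork-post-details-scraper").call(run_input=run_input)

# Iterate over the job objects stored in the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["title"], "-", item.get("paymentAmount"))
```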