United Nations News Scraper
What does this actor do?
United Nations News Scraper is an Apify actor that scrapes UN News articles on global affairs, climate, human rights, and sustainable development. It runs on the Apify platform and delivers structured data in JSON, CSV, or Excel format that you can easily integrate into your workflows. For each article found, the actor extracts key data fields including title, summary, url, date, and more. All results are stored in an Apify dataset that you can download or access via the Apify API.
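As a sketch of that API integration, here is how a run could be triggered and its dataset read with the official `apify-client` Python package. The actor ID string is a placeholder (copy the real one from this page), and `build_run_input` simply assembles the documented input parameters:

```python
def build_run_input(topic: str = "climate change", max_results: int = 25) -> dict:
    """Assemble the actor input using the documented parameters and defaults."""
    return {"topic": topic, "maxResults": max_results}

def fetch_items(token: str, actor_id: str, run_input: dict):
    """Run the actor and yield dataset items.

    Requires `pip install apify-client`; `actor_id` is whatever ID the
    actor page shows, e.g. "<username>/united-nations-news-scraper".
    """
    from apify_client import ApifyClient

    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    yield from client.dataset(run["defaultDatasetId"]).iterate_items()
```

A typical call would be `fetch_items(os.environ["APIFY_TOKEN"], "<username>/united-nations-news-scraper", build_run_input("human rights", 50))`, iterating the yielded items into your own pipeline.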
Why use this actor?
Manually collecting this data would be extremely time-consuming and error-prone. United Nations News Scraper automates the entire process, saving you hours of manual work. This actor is ideal for data analysts, researchers, marketers, and developers who need reliable, structured data. You can schedule regular runs to keep your data fresh, integrate results directly into spreadsheets or databases, and scale your data collection without any coding required. The actor handles pagination, rate limiting, and data normalization automatically.
How does it work?
This actor uses the Cheerio HTTP scraping library to efficiently parse HTML pages from the target website. It sends lightweight HTTP requests without rendering JavaScript, making it fast and resource-efficient. The actor processes search results, follows pagination, and extracts structured data from each page using CSS selectors.
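Cheerio is a JavaScript library, but the pattern it enables — fetch static HTML, select the nodes you care about, emit one record per match — can be sketched in plain Python with only the standard library. The `article-title` class below is illustrative, not the actual UN News markup:

```python
from html.parser import HTMLParser

class ArticleParser(HTMLParser):
    """Collect (title, url) records from anchors with class="article-title".

    The class name is an illustrative stand-in for whatever CSS selector
    matches the real page structure.
    """
    def __init__(self):
        super().__init__()
        self.articles = []
        self._in_title = False
        self._href = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "article-title" in attrs.get("class", ""):
            self._in_title = True
            self._href = attrs.get("href")

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.articles.append({"title": data.strip(), "url": self._href})

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_title = False

sample = '<div><a class="article-title" href="/story/1">Climate talks resume</a></div>'
parser = ArticleParser()
parser.feed(sample)
print(parser.articles)  # [{'title': 'Climate talks resume', 'url': '/story/1'}]
```

Because no browser or JavaScript engine is involved, this style of extraction stays fast and cheap, which is the trade-off the actor makes.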
Input parameters
| Parameter | Type | Description | Default |
|---|---|---|---|
| topic | string | News topic to scrape | "climate change" |
| maxResults | integer | Maximum articles to scrape | 25 |
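The two parameters above map to an actor input JSON like this (values shown are the documented defaults):

```json
{
  "topic": "climate change",
  "maxResults": 25
}
```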
Output fields
Each item in the output dataset contains the following fields:
| Field | Description | Format |
|---|---|---|
| title | Article headline | text |
| summary | Short summary of the article | text |
| url | Link to the full article | link |
| date | Publication date | text |
| category | News topic or category | text |
| imageUrl | Link to the article's lead image | link |
Example output:
    {
      "title": "Sample Title",
      "summary": "Sample Summary",
      "url": "https://example.com/item/123",
      "date": "Sample Date",
      "category": "Sample Category",
      "imageUrl": "https://example.com/item/123"
    }
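If you prefer to convert downloaded items to CSV yourself rather than using the platform export, a minimal sketch with the standard-library `csv` module looks like this (the field list matches the output table above):

```python
import csv
import io

FIELDS = ["title", "summary", "url", "date", "category", "imageUrl"]

def items_to_csv(items):
    """Serialize dataset items to a CSV string using the documented fields."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

items = [{"title": "Sample Title", "summary": "Sample Summary",
          "url": "https://example.com/item/123", "date": "Sample Date",
          "category": "Sample Category", "imageUrl": "https://example.com/item/123"}]
print(items_to_csv(items).splitlines()[0])  # title,summary,url,date,category,imageUrl
```

`extrasaction="ignore"` makes the export tolerant of any extra fields the actor may add to individual items.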
Cost and performance
This actor runs with a default memory allocation of 1024 MB. Using lightweight HTTP requests, each run typically costs around $0.10-0.25 in Apify platform credits per 1,000 results. A typical run processing 100 results completes in 1-3 minutes. You can reduce costs by limiting the number of results with the maxResults parameter and by scheduling runs during off-peak hours.
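The arithmetic behind that estimate is simple enough to sketch; the helper below applies the quoted $0.10–0.25 per 1,000 results, and is only a rough guide since actual billing depends on memory allocation and run time:

```python
def estimate_cost(num_results: int, low: float = 0.10, high: float = 0.25) -> tuple:
    """Rough credit cost range for a run, at $0.10-0.25 per 1,000 results
    (the rates quoted above; real costs vary with memory and run time)."""
    per_thousand = num_results / 1000
    return round(per_thousand * low, 4), round(per_thousand * high, 4)

print(estimate_cost(100))  # (0.01, 0.025)
```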
Tips and best practices
- Start with a small number of results to test your configuration before scaling up.
- Use the Apify scheduling feature to automate regular data collection runs.
- Export results in the format that best fits your workflow: JSON for APIs, CSV for spreadsheets, or Excel for reports.
- Connect this actor with other actors on the Apify platform for more comprehensive data pipelines.