Bloomberg News Link Extractor

Developed by DaDao DB · Maintained by Community

Extracts news article links from a specified Bloomberg section URL (e.g., Markets, Economics, Technology, Politics...).

  • Pricing: $5.00/month + usage
  • Rating: 5.0 (2)
  • Total users: 8
  • Monthly users: 8
  • Runs succeeded: >99%
  • Last modified: 22 days ago

Bloomberg News Link Extractor is a tool that extracts all article links from a given Bloomberg section page (e.g., https://www.bloomberg.com/economics). Users provide the section URL as input, and the tool returns all news article links found on that page.

Features

  • Accepts any Bloomberg section URL as input
  • Extracts all unique article links from the provided page for yesterday, today, and tomorrow (based on the article date embedded in each URL)
  • Uses proxy configuration (RESIDENTIAL, US) for reliable and anonymous scraping
  • Uses random user agents for each request
  • Outputs results in structured JSON format
  • Designed for use as an Apify Actor
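The yesterday/today/tomorrow window mentioned above can be sketched as follows. This is an illustrative reimplementation, not the actor's published code; it assumes the date is parsed from the `/news/articles/YYYY-MM-DD/...` path pattern shown in the output example.

```python
import re
from datetime import date

# Bloomberg article URLs embed the publication date, e.g.
# /news/articles/2025-05-12/example-article-1
DATE_RE = re.compile(r"/news/articles/(\d{4}-\d{2}-\d{2})/")

def in_date_window(link: str, today: date) -> bool:
    """Keep links whose URL date is yesterday, today, or tomorrow."""
    m = DATE_RE.search(link)
    if not m:
        return False  # not an article link, or no date in the path
    article_date = date.fromisoformat(m.group(1))
    return abs((article_date - today).days) <= 1

links = [
    "/news/articles/2025-05-12/example-article-1",
    "/news/articles/2025-05-10/stale-article",
    "/opinion/some-column",
]
fresh = [l for l in links if in_date_window(l, date(2025, 5, 12))]
# keeps only the 2025-05-12 article
```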

Input

The input should be a JSON object with the following field:

  • start_urls (array, required): An array of objects, each containing a url field with the Bloomberg section page to scrape.
    Example:

    {
      "start_urls": [
        { "url": "https://www.bloomberg.com/economics" }
      ]
    }

Output

The actor outputs a dataset containing the following structure:

{
  "result": [
    {
      "links": [
        "/news/articles/2025-05-12/example-article-1",
        "/news/articles/2025-05-12/example-article-2"
      ],
      "url": "https://www.bloomberg.com/economics"
    }
  ]
}
  • links: Array of unique article URLs (relative paths) for yesterday, today, and tomorrow.
  • url: The original section URL that was scraped.
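Since the links field holds paths relative to bloomberg.com, a common post-processing step is to join each path with the scraped section URL to get absolute article URLs. A minimal sketch using the standard library:

```python
from urllib.parse import urljoin

# One record from the actor's "result" array (shape taken from the
# output example above).
record = {
    "links": [
        "/news/articles/2025-05-12/example-article-1",
        "/news/articles/2025-05-12/example-article-2",
    ],
    "url": "https://www.bloomberg.com/economics",
}

# urljoin resolves each root-relative path against the section URL's host.
absolute = [urljoin(record["url"], path) for path in record["links"]]
```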

Usage

  1. Run the actor on the Apify platform.
  2. Wait for the extraction to complete.
  3. Access the extracted data in the "Dataset" tab.
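Besides running it from the Apify Console as described above, the actor can be called programmatically. A minimal sketch using the apify-client Python package; the API token and actor ID are placeholders you must supply:

```python
def build_run_input(section_urls):
    """Build the actor input from a list of Bloomberg section URLs."""
    return {"start_urls": [{"url": u} for u in section_urls]}

def run_actor(token: str, actor_id: str, section_urls):
    """Run the actor and return the records from its default dataset.

    Requires `pip install apify-client`; actor_id is a placeholder for
    this actor's ID in the Apify Store.
    """
    from apify_client import ApifyClient  # deferred: third-party dependency

    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=build_run_input(section_urls))
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```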

Use Cases

  • Aggregating the latest news articles from specific Bloomberg sections
  • Building news monitoring or alerting systems
  • Collecting article URLs for further content scraping or analysis