Website Content Extractor


1-day trial, then $9.00/month. No credit card required.

This Actor is under maintenance and may be unreliable while maintenance is in progress.


fastidious_drawer/website-content-extractor

This Actor extracts content from one or more URLs. Use CSS selectors to target specific sections, such as the page body, and to exclude elements like headers or navigation. It can also extract images and links, and returns data in JSON and DataTable formats for easy processing.

Developer
Maintained by Community

Actor Metrics

  • 3 monthly users

  • No reviews yet

  • No bookmarks yet

  • >99% runs succeeded

  • Created in Jan 2025

  • Modified 6 days ago

The Website Content Extractor is a web scraping tool designed to extract text, images, metadata, and links from specified websites using Playwright and Crawlee. It allows users to define target URLs, CSS selectors for content extraction, and exclusion rules.

Features

  • Extract Text: Extracts visible text from the page based on CSS selectors.
  • Extract Metadata: Extracts metadata including the canonical URL, title, description, and Open Graph data.
  • Extract Images: Optionally extracts all images from the page.
  • Extract Links: Optionally extracts all links from the page.
  • Exclude Selectors: Excludes specified page elements (e.g., header, footer, nav) from extraction.
  • Crawl Multiple Pages: Crawls and extracts content from multiple pages when maxPages is set above 1.

How to Use

Input

The input is a JSON configuration that specifies the settings for the extraction process.

Fields

  • urls (required): Array of URLs — List of website URLs to extract content from.
  • selectors: Array of CSS selectors — Specifies which elements to extract content from.
  • excludeSelectors: Array of CSS selectors — Specifies elements to exclude from extraction (e.g., header, nav, footer).
  • extractImages: Boolean — If set to true, images from the page will be extracted.
  • extractLinks: Boolean — If set to true, links from the page will be extracted.
  • maxPages: Integer — Limits the number of pages to crawl. Defaults to 1 if not set.
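
As a sketch of how these fields fit together, the hypothetical helper below validates a raw input config and fills in the documented default for maxPages. It is not part of the Actor, and the fallback values for the other optional fields are assumptions:

```python
# Hypothetical helper illustrating the input schema above; not part of
# the Actor. Only the maxPages default (1) is documented -- the other
# fallback values here are assumptions.
DEFAULTS = {
    "selectors": ["body"],   # assumption: extract the whole body by default
    "excludeSelectors": [],
    "extractImages": False,
    "extractLinks": False,
    "maxPages": 1,           # documented default
}

def normalize_input(raw: dict) -> dict:
    """Validate a raw input config and merge in default values."""
    if not raw.get("urls"):
        raise ValueError("'urls' is required and must be a non-empty array")
    return {**DEFAULTS, **raw}

config = normalize_input({"urls": ["https://example.com"], "maxPages": 3})
```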

Example Input

{
    "urls": [
        "https://example.com"
    ],
    "selectors": [
        "p",
        "h1"
    ],
    "excludeSelectors": [
        "header",
        "footer"
    ],
    "extractImages": true,
    "extractLinks": true,
    "maxPages": 3
}
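
To make the selector semantics concrete, here is a stdlib-only Python sketch of the include/exclude behavior: text inside a selected tag is kept unless it sits inside an excluded one. The real Actor matches full CSS selectors via Playwright; this demo only matches bare tag names like the p, h1, header, and footer used in the example input:

```python
from html.parser import HTMLParser

# Simplified demo of include/exclude selector semantics (tag names only).
# The Actor itself supports full CSS selectors; this is an illustration.
class TagExtractor(HTMLParser):
    def __init__(self, selectors, exclude):
        super().__init__()
        self.selectors, self.exclude = set(selectors), set(exclude)
        self.keep_depth = 0   # nesting level inside selected tags
        self.skip_depth = 0   # nesting level inside excluded tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.exclude:
            self.skip_depth += 1
        elif tag in self.selectors and self.skip_depth == 0:
            self.keep_depth += 1

    def handle_endtag(self, tag):
        if tag in self.exclude and self.skip_depth:
            self.skip_depth -= 1
        elif tag in self.selectors and self.keep_depth:
            self.keep_depth -= 1

    def handle_data(self, data):
        if self.keep_depth and not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

page = ("<header><p>Nav text</p></header>"
        "<h1>Title</h1><p>Body text</p>"
        "<footer><p>Legal</p></footer>")
parser = TagExtractor(["p", "h1"], ["header", "footer"])
parser.feed(page)
# parser.chunks -> ["Title", "Body text"]; header/footer text is excluded
```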

Output

The output consists of extracted data for each URL, including:

  • Text: All text content extracted from the specified selectors.
  • Markdown: The extracted text converted to Markdown.
  • Metadata: Metadata such as canonical URL, title, description, and Open Graph data.
  • Images: List of image URLs extracted from the page (if enabled).
  • Links: List of all links found on the page (if enabled).
  • Crawl Information: Includes the URL, loading time, HTTP status code, and crawl depth.

Example Output

{
    "url": "https://example.com",
    "crawl": {
        "loadedUrl": "https://example.com",
        "loadedTime": "2025-03-10T10:00:00Z",
        "depth": 0,
        "httpStatusCode": 200
    },
    "text": "Extracted text content...",
    "markdown": "**Extracted Text:**\n\nExtracted text content...",
    "metadata": {
        "canonicalUrl": "https://example.com/canonical",
        "title": "Page Title",
        "description": "Page description here",
        "openGraph": [
            { "property": "og:title", "content": "Page Title" },
            { "property": "og:description", "content": "Page description here" }
        ],
        "jsonLd": []
    },
    "images": ["https://example.com/image1.jpg", "https://example.com/image2.jpg"],
    "links": ["https://example.com/page1", "https://example.com/page2"]
}
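
A common post-processing step is flattening each result item into a single table row, e.g. for a CSV or DataTable export. The sketch below follows the field names in the example output; the helper itself is illustrative, not part of the Actor:

```python
import json

# One result item, shaped like the example output (abridged).
item = json.loads("""
{
  "url": "https://example.com",
  "crawl": {"loadedUrl": "https://example.com",
            "loadedTime": "2025-03-10T10:00:00Z",
            "depth": 0, "httpStatusCode": 200},
  "text": "Extracted text content...",
  "metadata": {"title": "Page Title", "description": "Page description here"},
  "images": ["https://example.com/image1.jpg", "https://example.com/image2.jpg"],
  "links": ["https://example.com/page1", "https://example.com/page2"]
}
""")

def to_row(item: dict) -> dict:
    """Flatten one extraction result into a flat row for tabular export."""
    return {
        "url": item["url"],
        "status": item["crawl"]["httpStatusCode"],
        "title": item.get("metadata", {}).get("title", ""),
        "imageCount": len(item.get("images") or []),
        "linkCount": len(item.get("links") or []),
    }

row = to_row(item)
```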

Notes

  • CSS Selectors: Use valid CSS selectors to extract the specific content you need from the web pages.
  • Limitations: Some websites load content dynamically via JavaScript. The Actor relies on Playwright's headless-browser rendering to load such content before extraction, but very slow-loading pages may still yield incomplete results.
  • Crawl Limit: Increase maxPages to crawl additional pages from the same domain, but be mindful of rate limits and page load times.