GPT Scraper

Pay $9.00 for 1,000 pages

Extract data from any website and feed it into GPT via the OpenAI API. Use ChatGPT to proofread content, analyze sentiment, summarize reviews, extract contact details, and much more.

Start URLs


A static list of URLs to scrape.

For details, see Start URLs in README.
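
As an illustration, a static list of Start URLs in the Actor input might look like the following sketch. The `startUrls` field name and the `{"url": ...}` object shape are assumptions based on common Apify Actor conventions, not confirmed by this page:

```python
import json

# Hypothetical input fragment: a static list of Start URLs.
# The "startUrls" field name is assumed from common Apify conventions.
run_input = {
    "startUrls": [
        {"url": "https://example.com"},
        {"url": "https://example.com/pricing"},
    ],
}
print(json.dumps(run_input, indent=2))
```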

Instructions for GPT


Instruct GPT how to generate text. For example: "Summarize this page in three sentences."

You can also instruct the model to answer with "skip this page", which makes the scraper skip that page. For example: "Summarize this page in three sentences. If the page is about Apify Proxy, answer with 'skip this page'."
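
As a fragment of the Actor input, such an instruction might be passed like this (the `instructions` field name is an assumption; the instruction text itself comes from the example above):

```python
# Hypothetical input fragment carrying the GPT instruction.
# The "instructions" field name is assumed, not confirmed by the docs.
run_input_fragment = {
    "instructions": (
        "Summarize this page in three sentences. "
        "If the page is about Apify Proxy, answer with 'skip this page'."
    ),
}
print(run_input_fragment["instructions"])
```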

Include URLs (globs)


Glob patterns matching URLs of pages that will be included in crawling. Combine them with the Link selector to tell the scraper where to find links. Both the globs and the Link selector must be set for the scraper to crawl beyond the Start URLs.

Default value of this property is []

Exclude URLs (globs)


Glob patterns matching URLs of pages that will be excluded from crawling. Note that this affects only links found on pages, but not Start URLs, which are always crawled.

Default value of this property is []

Max crawling depth


This specifies how many links away from the Start URLs the scraper will descend. This value is a safeguard against infinite crawling depths for misconfigured scrapers.

If set to 0, there is no limit.

Default value of this property is 99999999

Max pages per run


Maximum number of pages that the scraper will open. 0 means unlimited.

Default value of this property is 10

Link selector


This is a CSS selector that says which links on the page (<a> elements with href attribute) should be followed and added to the request queue. To filter the links added to the queue, use the Pseudo-URLs setting.

If Link selector is empty, the page links are ignored.

For details, see Link selector in README.
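
To see how the include/exclude globs and the Link selector interact, here is a rough sketch of the filtering step, using Python's `fnmatch` as a stand-in for the scraper's actual glob matcher (which may behave differently on edge cases):

```python
from fnmatch import fnmatch

def should_enqueue(url, include_globs, exclude_globs):
    """Sketch of the crawl filter: a link found via the Link selector is
    enqueued only if it matches an include glob and no exclude glob."""
    if not any(fnmatch(url, g) for g in include_globs):
        return False
    if any(fnmatch(url, g) for g in exclude_globs):
        return False
    return True

include = ["https://example.com/blog/*"]
exclude = ["https://example.com/blog/drafts/*"]

print(should_enqueue("https://example.com/blog/post-1", include, exclude))      # True
print(should_enqueue("https://example.com/blog/drafts/wip", include, exclude))  # False
print(should_enqueue("https://example.com/about", include, exclude))            # False
```

Note that Start URLs bypass this filter entirely, as described above.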

Initial cookies


Cookies that will be pre-set on all pages the scraper opens. This is useful for pages that require a login. The value is expected to be a JSON array of objects with name, value, domain, and path properties. For example: [{"name": "cookieName", "value": "cookieValue", "domain": "", "path": "/"}].

You can use the EditThisCookie browser extension to copy browser cookies in this format and paste them here.

Default value of this property is []
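
For instance, a minimal cookie array might look like this sketch (the cookie name and value are placeholders; real values would come from your browser, e.g. exported with EditThisCookie):

```python
import json

# Placeholder cookie values for illustration only.
initial_cookies = [
    {"name": "sessionid", "value": "abc123", "domain": ".example.com", "path": "/"},
]
print(json.dumps(initial_cookies))
```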

Proxy configuration


This specifies the proxy servers that will be used by the scraper in order to hide its origin.

For details, see Proxy configuration in README.

Default value of this property is {"useApifyProxy":false}
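
For example, to route requests through Apify Proxy, the configuration might look like this. The useApifyProxy key appears in the default above; the apifyProxyGroups field is an assumption based on Apify's general proxy conventions, so consult the Apify Proxy docs for the fields and groups actually available:

```python
import json

# Proxy configuration sketch. "useApifyProxy" is taken from this Actor's
# default value; "apifyProxyGroups" is an assumed optional field.
proxy_configuration = {
    "useApifyProxy": True,
    "apifyProxyGroups": ["RESIDENTIAL"],
}
print(json.dumps(proxy_configuration))
```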



Temperature


Controls randomness: lowering it results in less random completions. As the temperature approaches zero, the model becomes deterministic and repetitive. For consistent results, we recommend setting the temperature to 0.

Default value of this property is "0"



Top-p


Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered.

Default value of this property is "1"

Frequency penalty


How much to penalize new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim.

Default value of this property is "0"

Presence penalty


How much to penalize new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics.

Default value of this property is "0"
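
Putting the four sampling settings together, a conservative configuration for reproducible extraction might look like this sketch. Note that the defaults above are documented as strings, so string values are used here; the exact input field names are assumptions modeled on the corresponding OpenAI parameters:

```python
# Hypothetical model-settings fragment; field names mirror the OpenAI
# parameters (temperature, top_p, frequency_penalty, presence_penalty),
# but the Actor's actual input field names may differ.
model_settings = {
    "temperature": "0",       # deterministic output for consistent results
    "topP": "1",              # no nucleus-sampling restriction
    "frequencyPenalty": "0",  # no penalty for repeated tokens
    "presencePenalty": "0",   # no push toward new topics
}
print(model_settings)
```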

Content selector


A CSS selector of the HTML element on the page that will be used in the instruction. Instead of a whole page, you can use only part of the page. For example: "div#content".

Remove HTML elements (CSS selector)


A CSS selector matching HTML elements that will be removed from the DOM, before sending it to GPT processing. This is useful to skip irrelevant page content and save on GPT input tokens.

By default, the Actor removes commonly unwanted elements like scripts, styles, and inline images. You can disable the removal by setting this value to a non-existent CSS selector such as dummy_keep_everything.

Default value of this property is "script, style, noscript, path, svg, xlink"

Page format in request


In what format to send the content extracted from the page to GPT. Markdown takes less space, allowing for larger requests, while HTML may preserve information such as attributes that would otherwise be omitted.

Value options:

  • "HTML"
  • "Markdown"

Default value of this property is "Markdown"

Wait for dynamic content (seconds)


The maximum time to wait for dynamic page content to load. The crawler will continue either if this time elapses, or if it detects the network became idle as there are no more requests for additional resources.

Default value of this property is 0

Remove link URLs


Removes web link URLs while keeping the text content they display.

  • This helps reduce the total page content by eliminating unnecessary URLs before sending to GPT
  • Useful if you are hitting maximum input tokens limits

Default value of this property is false

Use JSON schema to format answer


If true, the answer will be transformed into a structured format based on the schema in the jsonAnswer attribute.

JSON schema format


Defines how the output will be stored in a structured format using JSON Schema. Keep in mind that this uses OpenAI function calling, so setting descriptive field descriptions and a correct title can yield better results.
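
As a sketch, a schema for extracting contact details might look like the following. The `jsonAnswer` attribute name comes from the description above; the property names inside the schema are illustrative examples, not part of the Actor's API:

```python
import json

# Illustrative JSON Schema for a structured answer ("jsonAnswer").
# The "email" and "phone" fields are examples only.
json_answer_schema = {
    "title": "contact_details",
    "type": "object",
    "properties": {
        "email": {"type": "string", "description": "Contact e-mail found on the page"},
        "phone": {"type": "string", "description": "Phone number found on the page"},
    },
    "required": ["email"],
}
print(json.dumps(json_answer_schema, indent=2))
```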

Schema description


Description of the schema function. Use this to provide more context for the schema.

By default, the instructions field's value is used as the schema description; you can override it here.

Save debug snapshots


For each page, stores its HTML, a screenshot, and the parsed content (Markdown/HTML as it was sent to ChatGPT), and adds links to these files to the output.

Default value of this property is true

Maintained by Apify
Actor metrics
  • 332 monthly users
  • 98.3% runs succeeded
  • 9.8 days response time
  • Created in Mar 2023
  • Modified 5 days ago