Extended GPT Scraper is a powerful tool that leverages OpenAI's API to modify text obtained from a scraper. You can use the scraper to extract content from a website and then pass that content to the OpenAI API to make the GPT magic happen.
The scraper first loads the page using Playwright, converts its content to Markdown, and then sends that Markdown to GPT together with your instructions.
If the content doesn't fit into the model's token limit, the scraper will truncate it. You can find a message about the truncated content in the log.
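The truncation step can be sketched as follows. This is an illustrative approximation only: the Actor's actual truncation logic is not public, and the four-characters-per-token ratio below is a rough rule of thumb, not OpenAI's real tokenizer.

```python
def truncate_to_token_limit(markdown: str, max_tokens: int) -> str:
    """Trim Markdown content so its estimated token count fits the model limit.

    Assumes ~4 characters per token on average, which is only a heuristic.
    """
    approx_chars = max_tokens * 4
    if len(markdown) <= approx_chars:
        return markdown
    # Content too long: cut it off and let the caller log a truncation message.
    return markdown[:approx_chars]
```

In practice you would also want to cut at a word or section boundary rather than mid-character, but the idea is the same.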
There are two costs associated with using GPT Scraper.
You can find the cost of the OpenAI API on the OpenAI pricing page. The cost depends on the model you are using and the length of the content you send to the API.
The cost of the scraper is the same as the cost of Web Scraper, because it uses the same browser under the hood. You can find information about the cost on the pricing page under the Detailed Pricing breakdown section. The cost estimates are based on averages and may vary depending on the complexity of the pages you scrape.
To get started with Extended GPT Scraper, you need to set up the pages you want to scrape using Start URLs, write instructions telling the scraper how to handle each page, and provide the OpenAI API key. NOTE: You can find the OpenAI API key in your OpenAI dashboard.
You can configure the scraper and GPT using the Input configuration to set up a more complex workflow.
Extended GPT Scraper accepts a number of configuration settings. These can be entered either manually in the user interface in Apify Console or programmatically in a JSON object using the Apify API. For a complete list of input fields and their types, please see the outline of the Actor's input schema.
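As an illustration, a programmatic input object might look like the sketch below. The field names mirror the options described in this document, but some of them (such as the exact keys for the API key and instructions) are assumptions; verify them against the Actor's input schema before use.

```python
# Hypothetical input object for a programmatic run via the Apify API.
# Field names are assumptions based on the options described in this document;
# check the Actor's input schema for the authoritative names.
run_input = {
    "startUrls": [{"url": "https://www.example.com"}],
    "linkSelector": "a[href]",
    "globs": [{"glob": "https://www.example.com/pages/**/*"}],
    "instructions": "Summarize this page in three sentences.",
    "model": "gpt-4o-mini",  # assumed model name; use any model your account supports
    "proxyConfiguration": {"useApifyProxy": True},
}
```

The same object can be pasted as JSON into the input editor in Apify Console.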
The Start URLs (`startUrls`) field represents the initial list of page URLs that the scraper will visit. You can enter a group of URLs together using file upload or one by one.
The Link selector (`linkSelector`) field contains a CSS selector that is used to find links to other web pages (elements with `href` attributes, e.g. `<div class="my-class" href="...">`).
On every page that is loaded, the scraper looks for all links matching Link selector, and checks that the target URL matches one of the Glob patterns. If it is a match, it then adds the URL to the request queue so that it's loaded by the scraper later on.
If Link selector is empty, the page links are ignored, and the scraper only loads pages specified in Start URLs.
The Glob patterns (`globs`) field specifies which types of URLs found by Link selector should be added to the request queue.

A glob pattern is simply a string with wildcard characters. For example, the glob pattern `http://www.example.com/pages/**/*` will match all URLs under `http://www.example.com/pages/`.
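The filtering step described above can be sketched as follows. Note this is a rough approximation: Python's `fnmatch` treats `*` more loosely than the minimatch-style globs Apify uses, but it is close enough to illustrate how found links are filtered before entering the request queue.

```python
from fnmatch import fnmatch

def filter_links(found_urls, globs):
    """Keep only the URLs matching at least one glob pattern,
    mimicking how matched links are added to the request queue."""
    return [url for url in found_urls if any(fnmatch(url, g) for g in globs)]

links = [
    "http://www.example.com/pages/archive/2023",
    "http://www.example.com/about",
]
queued = filter_links(links, ["http://www.example.com/pages/**/*"])
# Only the URL under /pages/ is queued; /about is ignored.
```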
The API key for accessing OpenAI. You can get it from the OpenAI platform.
This option tells GPT how to handle page content. For example, you can use prompts such as:
- "Summarize this page in three sentences."
- "Find sentences that contain 'Apify Proxy' and return them as a list."
You can also instruct OpenAI to answer with "skip this page" if you don't want to process all the scraped content, e.g.
- "Summarize this page in three sentences. If the page is about proxies, answer with 'skip this page'."
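One way such a "skip this page" convention could be used downstream is to drop any answer that contains the sentinel phrase. This is an illustrative sketch, not the Actor's actual implementation:

```python
SKIP_SENTINEL = "skip this page"

def keep_answer(answer: str) -> bool:
    """Return False for answers where GPT followed the skip instruction."""
    return SKIP_SENTINEL not in answer.strip().lower()

answers = [
    "This page explains how to configure the scraper in three steps.",
    "Skip this page",
]
kept = [a for a in answers if keep_answer(a)]
# Only the substantive summary survives the filter.
```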
The GPT Model (`model`) option specifies which GPT model to use.
You can find more information about the models on the OpenAI API documentation.
Keep in mind that each model has different pricing and features.
This specifies how many links away from Start URLs the scraper will descend.
This value is a safeguard against infinite crawling depths for misconfigured scrapers.
The maximum number of pages that the scraper will open. 0 means unlimited.
If you want to get data in a structured format, you can define a JSON schema using the Schema input option and enable the Use JSON schema to format answer option. This schema will be used to format data into a structured JSON object, which will be stored in the output in the `jsonAnswer` attribute.
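For instance, a schema asking GPT to return a title and a list of key points might look like the sketch below. The property names are purely illustrative, not a schema the Actor prescribes:

```python
import json

# Illustrative JSON schema; the properties are made up for this example.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string", "description": "Page title"},
        "key_points": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Main points made on the page",
        },
    },
    "required": ["title"],
}

# Paste the JSON form of this schema into the Schema input option.
schema_json = json.dumps(schema, indent=2)
```

With this schema enabled, each output item's `jsonAnswer` attribute would hold an object with a `title` string and an optional `key_points` array.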
The Proxy configuration (`proxyConfiguration`) option enables you to set proxies. The scraper uses them to prevent detection by target websites.
You can use both Apify Proxy and custom HTTP or SOCKS5 proxy servers.
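Both variants can be expressed in the proxy configuration object. The `useApifyProxy` and `proxyUrls` keys follow the format Apify Actors commonly use, but verify them against the Actor's input schema; the server URLs below are placeholders:

```python
# Apify Proxy (format commonly used across Apify Actors):
proxy_with_apify = {"useApifyProxy": True}

# Custom HTTP or SOCKS5 proxy servers; the URLs below are placeholders.
proxy_with_custom_servers = {
    "useApifyProxy": False,
    "proxyUrls": [
        "http://user:password@proxy.example.com:8000",
        "socks5://user:password@proxy.example.com:1080",
    ],
}
```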