GPT Scraper

  • drobnikj/gpt-scraper
  • Users 2.1k
  • Runs 123.3k
  • Created by Jakub Drobník

Extract data from any website and feed it into GPT via the OpenAI API. Use ChatGPT to proofread content, analyze sentiment, summarize reviews, extract contact details, and much more.


To run the code examples, you need an Apify account. Replace <YOUR_API_TOKEN> in the code with your API token. For a more detailed explanation, read about running Actors via the API in the Apify Docs.

from apify_client import ApifyClient

# Initialize the ApifyClient with your API token
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "startUrls": [{ "url": "https://news.ycombinator.com/" }],
    "globs": [],
    "linkSelector": "a[href]",
    "instructions": """Get the post with the most points from the page and return it as JSON in the format:
postTitle
postUrl
pointsCount""",
    "targetSelector": "",
    "schema": {
        "type": "object",
        "properties": {
            "postTitle": {
                "type": "string",
                "description": "Title of the post",
            },
            "postUrl": {
                "type": "string",
                "description": "URL of the post",
            },
            "pointsCount": {
                "type": "number",
                "description": "Number of points the post has",
            },
        },
        "required": [
            "postTitle",
            "postUrl",
            "pointsCount",
        ],
    },
    "proxyConfiguration": { "useApifyProxy": True },
}

# Run the Actor and wait for it to finish
run = client.actor("drobnikj/gpt-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
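
The `schema` input constrains the JSON structure that GPT returns for each page. As a minimal sketch (the `build_schema` helper below is illustrative and not part of the Actor or the Apify client), such a schema for string-valued fields can be assembled programmatically instead of written by hand:

```python
import json

def build_schema(fields: dict) -> dict:
    """Build a simple JSON schema for the Actor's `schema` input.

    `fields` maps property names to human-readable descriptions;
    every listed property is typed as a string and marked required.
    """
    return {
        "type": "object",
        "properties": {
            name: {"type": "string", "description": desc}
            for name, desc in fields.items()
        },
        "required": list(fields),
    }

schema = build_schema({
    "postTitle": "Title of the top post",
    "postUrl": "URL of the top post",
    "pointsCount": "Number of points the post has",
})
print(json.dumps(schema, indent=2))
```

The resulting dictionary can be passed directly as the `"schema"` key of `run_input`; fields that need other JSON types (e.g. a numeric count) can be adjusted after building.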