Extended GPT Scraper

  • drobnikj/extended-gpt-scraper
  • Users 405
  • Runs 31.4k
  • Created by Jakub Drobník

Extract data from any website and feed it into GPT via the OpenAI API. Use ChatGPT to proofread content, analyze sentiment, summarize reviews, extract contact details, and much more.

To run the code examples, you need an Apify account. Replace <YOUR_API_TOKEN> in the code with your API token. For more details, see the guide to running Actors via the API in the Apify documentation.

from apify_client import ApifyClient

# Initialize the ApifyClient with your API token
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "startUrls": [{ "url": "https://news.ycombinator.com/" }],
    "globs": [],
    "linkSelector": "a[href]",
    "instructions": """Get the post with the most points from the page and return it as JSON in this format:
postTitle
postUrl
pointsCount""",
    "model": "gpt-3.5-turbo",
    "targetSelector": "",
    "schema": {
        "type": "object",
        "properties": {
            "postTitle": {
                "type": "string",
                "description": "Title of the post",
            },
            "postUrl": {
                "type": "string",
                "description": "URL of the post",
            },
            "pointsCount": {
                "type": "integer",
                "description": "Number of points the post has",
            },
        },
        "required": [
            "postTitle",
            "postUrl",
            "pointsCount",
        ],
    },
    "proxyConfiguration": { "useApifyProxy": True },
}

# Run the Actor and wait for it to finish
run = client.actor("drobnikj/extended-gpt-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
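Instead of only printing items, you may want to persist the run's results locally for later processing. A minimal sketch of writing dataset items to a JSON file; the item values and the `results.json` filename here are illustrative placeholders, not real Actor output:

```python
import json

# Illustrative items shaped like the Actor's dataset output
# (placeholder values, not real scraped results).
items = [
    {"postTitle": "Example post", "postUrl": "https://news.ycombinator.com/item?id=1", "pointsCount": 42},
]

# In a real run you would collect them from the dataset iterator, e.g.:
# items = list(client.dataset(run["defaultDatasetId"]).iterate_items())

# Write the results to a local JSON file.
with open("results.json", "w", encoding="utf-8") as f:
    json.dump(items, f, ensure_ascii=False, indent=2)

# Read the file back to confirm the round trip.
with open("results.json", encoding="utf-8") as f:
    loaded = json.load(f)

print(loaded[0]["postTitle"])  # → Example post
```

The Apify client also exposes dataset export endpoints, but a plain JSON dump like this keeps the example dependency-free.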