LLM Prompt Response

Submits prompts to LLM providers (ChatGPT, Perplexity) via Camoufox anti-detect browser and captures structured responses with sources.

Pricing: from $0.01 / 1,000 results
Developer: Jakub Suchy (Maintained by Community)

Scrape real responses from ChatGPT, Perplexity AI, Google Gemini, and Google AI Mode by submitting prompts through an actual browser. Get the exact same answers a human user would see -- complete with cited sources, links, and full-length responses.

Unlike API-based approaches, this Actor uses Camoufox (anti-detect Firefox) with human-like behavior to interact with LLM chat interfaces directly. This means you get responses identical to what users see in the browser, including web search results, citations, and source links that are not available through official APIs.

What can you use it for?

  • Brand monitoring -- Track how AI chatbots mention your brand vs. competitors in organic responses
  • SEO & AI visibility research -- Discover which websites ChatGPT and Perplexity cite as sources
  • Competitor analysis -- See which brands get recommended for specific queries
  • LLM output benchmarking -- Compare how different AI providers answer the same questions
  • Content gap analysis -- Find out what AI recommends and identify content opportunities
  • Market research -- Understand how AI perceives products, services, and industries
  • Citation tracking -- Monitor which URLs appear in AI-generated answers over time

Supported providers

Provider       | Auth required | Sources | Notes
---------------|---------------|---------|------
Perplexity     | No            | Yes     | Default. No login needed, great for quick research queries
ChatGPT        | Yes           | Yes     | Requires email + password. Optional TOTP for 2FA accounts
Google Gemini  | No            | Yes     | No login needed
Google AI Mode | No            | Yes     | No login needed. Sources extracted from response text

Input

Field    | Type     | Required | Default    | Description
---------|----------|----------|------------|------------
prompts  | string[] | Yes      | --         | One or more prompts to send. Each prompt produces one dataset item.
provider | string   | No       | perplexity | Which LLM provider to use: chatgpt, perplexity, gemini, or google-aimode

Example input

{
  "prompts": [
    "What are the best project management tools for remote teams?"
  ],
  "provider": "perplexity"
}

Google AI Mode example

{
  "prompts": [
    "What are the best project management tools for remote teams?"
  ],
  "provider": "google-aimode"
}

Output

Each prompt produces one item in the dataset with the following structure:

{
  "question": "What are the best project management tools for remote teams?",
  "answer": "The top project management tools for remote teams include Notion, Linear, Asana, and Jira. Notion is popular for its flexibility combining docs, wikis, and task tracking in one place. Linear is favored by engineering teams for its speed and clean interface. Asana works well for cross-functional teams managing complex workflows, while Jira remains the standard for software development projects with deep issue tracking and sprint planning features.",
  "sources": [
    {
      "href": "https://www.atlassian.com/software/jira",
      "title": "Jira | Issue & Project Tracking Software | Atlassian"
    },
    {
      "href": "https://asana.com",
      "title": "Asana · Manage your team's work, projects, & tasks online"
    },
    {
      "href": "https://linear.app",
      "title": "Linear – The new standard for modern software development"
    }
  ],
  "provider": "perplexity",
  "url": "https://www.perplexity.ai/search/what-are-the-best-project-mana...",
  "timestamp": "2026-04-01T12:00:00.000Z"
}
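For citation-tracking use cases, the cited domains can be pulled straight out of an item's sources array; a minimal sketch in Python (stdlib only), assuming the item shape shown above:

```python
from urllib.parse import urlparse

def cited_domains(item):
    """Return the unique hostnames cited in one dataset item."""
    sources = item.get("sources") or []
    return sorted({urlparse(s["href"]).netloc for s in sources if s.get("href")})

# Example item trimmed to the fields this sketch uses
item = {
    "sources": [
        {"href": "https://www.atlassian.com/software/jira", "title": "Jira"},
        {"href": "https://asana.com", "title": "Asana"},
        {"href": "https://linear.app", "title": "Linear"},
    ]
}
print(cited_domains(item))  # ['asana.com', 'linear.app', 'www.atlassian.com']
```

Aggregating these domains across scheduled runs gives a simple time series of which sites the AI cites.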

Output fields

Field     | Type   | Description
----------|--------|------------
question  | string | The original prompt that was submitted
answer    | string | The full text response from the LLM provider
sources   | array  | List of cited sources with href (URL) and title
provider  | string | Which provider generated this response (chatgpt, perplexity, gemini, or google-aimode)
url       | string | The browser URL of the conversation page
timestamp | string | ISO 8601 timestamp of when the response was captured

If a prompt fails, the dataset item will include "answer": null and an "error" field with the failure reason. Successful prompts are not affected by individual failures.
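Since failed prompts surface as items with "answer": null and an "error" field, downstream code can partition a fetched dataset accordingly; a small sketch:

```python
def split_results(items):
    """Partition dataset items into successful answers and failures."""
    ok = [i for i in items if i.get("answer") is not None]
    failed = [i for i in items if i.get("answer") is None]
    return ok, failed

# Sample items: one success, one failure (error text is illustrative)
items = [
    {"question": "Q1", "answer": "Some answer", "sources": []},
    {"question": "Q2", "answer": None, "error": "Timed out waiting for response"},
]
ok, failed = split_results(items)
print(len(ok), len(failed))  # 1 1
```

Failed items keep their question and error fields, so they can be retried in a follow-up run.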

How it works

  1. Launches Camoufox -- a modified Firefox build with anti-fingerprinting and human-like mouse movements, typing delays, and scroll behavior
  2. Navigates to the selected provider (ChatGPT, Perplexity, Google Gemini, or Google AI Mode)
  3. Handles authentication if needed
  4. Automatically dismisses cookie banners, popups, and Cloudflare challenges
  5. Submits each prompt, waits for the full streamed response to complete
  6. Extracts the response text and cited source URLs
  7. Pushes each result to the Apify Dataset

All requests are routed through US residential proxies for maximum reliability and anti-detection.

Performance & resources

  • Each prompt takes approximately 30 seconds to 3 minutes depending on the provider and response length
  • Prompts are processed sequentially (browser automation is single-threaded)
  • Recommended memory: 4 GB minimum
  • No browser profile is persisted between runs -- each run starts with a clean session
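Because prompts run sequentially at roughly 30 seconds to 3 minutes each, a rough runtime envelope for a batch is a single multiplication; a sketch using the figures above:

```python
def runtime_bounds(n_prompts, min_s=30, max_s=180):
    """Rough sequential runtime envelope in seconds for a prompt batch."""
    return n_prompts * min_s, n_prompts * max_s

lo, hi = runtime_bounds(10)
print(f"10 prompts: {lo / 60:.0f}-{hi / 60:.0f} minutes")  # 10 prompts: 5-30 minutes
```

This is handy for deciding between the synchronous and asynchronous API modes: batches whose upper bound exceeds about 5 minutes are better run asynchronously.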

API usage

Synchronous (wait for results inline)

Best for short prompt lists (under 5 minutes total runtime):

curl -X POST "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/run-sync-get-dataset-items?token=YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "prompts": ["What are the best enterprise project management systems?"],
    "provider": "perplexity"
  }'

Returns dataset items directly in the response body.
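The same synchronous call can be made from Python with only the standard library; a sketch, with YOUR_APIFY_TOKEN as a placeholder you must replace:

```python
import json
from urllib import request

ENDPOINT = ("https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response"
            "/run-sync-get-dataset-items?token=YOUR_APIFY_TOKEN")

def run_sync(prompts, provider="perplexity"):
    """POST the Actor input and return dataset items from the response body."""
    body = json.dumps({"prompts": prompts, "provider": provider}).encode()
    req = request.Request(ENDPOINT, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # blocks until the run finishes
        return json.load(resp)

# Network call, requires a valid token:
# items = run_sync(["What are the best enterprise project management systems?"])
```

Note the endpoint keeps the HTTP connection open for the whole run, so it is only suitable for short prompt lists.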

Asynchronous (start run, poll, fetch)

Best for larger batches or when you need non-blocking behavior:

# 1. Start the run
curl -X POST "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs?token=YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "prompts": ["What are the best enterprise project management systems?"],
    "provider": "perplexity"
  }'
# Response contains: { "data": { "id": "RUN_ID", "status": "RUNNING", ... } }
# 2. Poll until status is SUCCEEDED (or FAILED / TIMED-OUT)
curl "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs/RUN_ID?token=YOUR_APIFY_TOKEN"
# 3. Fetch results
curl "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs/RUN_ID/dataset/items?token=YOUR_APIFY_TOKEN"

Bash polling loop:

RUN_ID="your-run-id"
TOKEN="your-token"
while true; do
  STATUS=$(curl -s "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs/$RUN_ID?token=$TOKEN" \
    | python3 -c "import sys,json; print(json.load(sys.stdin)['data']['status'])")
  echo "Status: $STATUS"
  [[ "$STATUS" == "SUCCEEDED" || "$STATUS" == "FAILED" || "$STATUS" == "TIMED-OUT" ]] && break
  sleep 5
done
curl -s "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs/$RUN_ID/dataset/items?token=$TOKEN"
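The same poll-until-done logic in Python, with the terminal statuses from step 2 factored into a predicate (the network call is commented out; RUN_ID and TOKEN are placeholders):

```python
import json
import time
from urllib import request

# Terminal statuses per the polling step above
TERMINAL = {"SUCCEEDED", "FAILED", "TIMED-OUT"}

def is_terminal(status):
    """True once a run has reached a final state and polling can stop."""
    return status in TERMINAL

def poll_run(run_url, interval=5):
    """Poll the run endpoint every `interval` seconds until it finishes."""
    while True:
        with request.urlopen(run_url) as resp:
            status = json.load(resp)["data"]["status"]
        print("Status:", status)
        if is_terminal(status):
            return status
        time.sleep(interval)

# Network call, requires a real run ID and token:
# poll_run("https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response"
#          "/runs/RUN_ID?token=TOKEN")
```

Once poll_run returns SUCCEEDED, fetch the dataset items from the /dataset/items endpoint as in step 3.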

Integrations

Connect LLM Prompt Response with other tools using Apify's built-in integrations:

  • Webhooks -- Get notified when a run completes
  • API -- Trigger runs programmatically and fetch results via the Apify API
  • Schedules -- Run on a recurring schedule to track AI responses over time
  • Google Sheets -- Export results directly to a spreadsheet
  • Slack / Email -- Send notifications when runs finish