# LLM Prompt Response

Submits prompts to LLM providers (ChatGPT, Perplexity) via Camoufox anti-detect browser and captures structured responses with sources.

Pricing: from $0.01 / 1,000 results
Developer: Jakub Suchy
Scrape real responses from ChatGPT, Perplexity AI, Google Gemini, and Google AI Mode by submitting prompts through an actual browser. Get the exact same answers a human user would see -- complete with cited sources, links, and full-length responses.
Unlike API-based approaches, this Actor uses Camoufox (anti-detect Firefox) with human-like behavior to interact with LLM chat interfaces directly. This means you get responses identical to what users see in the browser, including web search results, citations, and source links that are not available through official APIs.
## What can you use it for?
- Brand monitoring -- Track how AI chatbots mention your brand vs. competitors in organic responses
- SEO & AI visibility research -- Discover which websites ChatGPT and Perplexity cite as sources
- Competitor analysis -- See which brands get recommended for specific queries
- LLM output benchmarking -- Compare how different AI providers answer the same questions
- Content gap analysis -- Find out what AI recommends and identify content opportunities
- Market research -- Understand how AI perceives products, services, and industries
- Citation tracking -- Monitor which URLs appear in AI-generated answers over time
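Citation tracking, for instance, can start with a simple tally of which domains appear in the `sources` field of the dataset items this Actor produces. A minimal sketch (the sample items below are illustrative, shaped like the Actor's output):

```python
from collections import Counter
from urllib.parse import urlparse

def count_cited_domains(items):
    """Tally the domains cited in the `sources` of each dataset item."""
    counts = Counter()
    for item in items:
        for source in item.get("sources") or []:
            counts[urlparse(source["href"]).netloc] += 1
    return counts

# Illustrative dataset items (same shape as the Actor's output)
items = [
    {"sources": [{"href": "https://asana.com", "title": "Asana"},
                 {"href": "https://linear.app", "title": "Linear"}]},
    {"sources": [{"href": "https://asana.com/pricing", "title": "Asana pricing"}]},
]

print(count_cited_domains(items).most_common())
```

Run this over scheduled exports to see how a domain's share of citations shifts over time.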
## Supported providers
| Provider | Auth required | Sources | Notes |
|---|---|---|---|
| Perplexity | No | Yes | Default. No login needed, great for quick research queries |
| ChatGPT | Yes | Yes | Requires email + password. Optional TOTP for 2FA accounts |
| Google Gemini | No | Yes | No login needed |
| Google AI Mode | No | Yes | No login needed. Sources extracted from response text |
## Input
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| prompts | string[] | Yes | -- | One or more prompts to send. Each prompt produces one dataset item. |
| provider | string | No | perplexity | Which LLM provider to use: chatgpt, perplexity, gemini, or google-aimode |
### Example input

```json
{
  "prompts": ["What are the best project management tools for remote teams?"],
  "provider": "perplexity"
}
```
### Google AI Mode example

```json
{
  "prompts": ["What are the best project management tools for remote teams?"],
  "provider": "google-aimode"
}
```
## Output
Each prompt produces one item in the dataset with the following structure:
```json
{
  "question": "What are the best project management tools for remote teams?",
  "answer": "The top project management tools for remote teams include Notion, Linear, Asana, and Jira. Notion is popular for its flexibility combining docs, wikis, and task tracking in one place. Linear is favored by engineering teams for its speed and clean interface. Asana works well for cross-functional teams managing complex workflows, while Jira remains the standard for software development projects with deep issue tracking and sprint planning features.",
  "sources": [
    {"href": "https://www.atlassian.com/software/jira", "title": "Jira | Issue & Project Tracking Software | Atlassian"},
    {"href": "https://asana.com", "title": "Asana · Manage your team's work, projects, & tasks online"},
    {"href": "https://linear.app", "title": "Linear – The new standard for modern software development"}
  ],
  "provider": "perplexity",
  "url": "https://www.perplexity.ai/search/what-are-the-best-project-mana...",
  "timestamp": "2026-04-01T12:00:00.000Z"
}
```
### Output fields
| Field | Type | Description |
|---|---|---|
| question | string | The original prompt that was submitted |
| answer | string | The full text response from the LLM provider |
| sources | array | List of cited sources with href (URL) and title |
| provider | string | Which provider generated this response (chatgpt, perplexity, gemini, or google-aimode) |
| url | string | The browser URL of the conversation page |
| timestamp | string | ISO 8601 timestamp of when the response was captured |
If a prompt fails, the dataset item will include "answer": null and an "error" field with the failure reason. Successful prompts are not affected by individual failures.
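When post-processing a mixed run, you can split items on the `answer` field as described above. A minimal sketch (the sample items and the error message are illustrative):

```python
def split_results(items):
    """Separate successful responses from failed prompts.

    Failed items carry "answer": null plus an "error" field, so a
    None answer is the failure signal.
    """
    ok = [i for i in items if i.get("answer") is not None]
    failed = [i for i in items if i.get("answer") is None]
    return ok, failed

# Illustrative dataset items from a partially failed run
items = [
    {"question": "Best PM tools?", "answer": "Notion, Linear, Asana...", "provider": "perplexity"},
    {"question": "Best CRMs?", "answer": None, "error": "timed out waiting for response"},
]

ok, failed = split_results(items)
print(len(ok), len(failed))  # 1 1
```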
## How it works
- Launches Camoufox -- a modified Firefox build with anti-fingerprinting and human-like mouse movements, typing delays, and scroll behavior
- Navigates to the selected provider (ChatGPT, Perplexity, Google Gemini, or Google AI Mode)
- Handles authentication if needed
- Automatically dismisses cookie banners, popups, and Cloudflare challenges
- Submits each prompt, waits for the full streamed response to complete
- Extracts the response text and cited source URLs
- Pushes each result to the Apify Dataset
All requests are routed through US residential proxies for maximum reliability and anti-detection.
## Performance & resources
- Each prompt takes approximately 30 seconds to 3 minutes depending on the provider and response length
- Prompts are processed sequentially (browser automation is single-threaded)
- Recommended memory: 4 GB minimum
- No browser profile is persisted between runs -- each run starts with a clean session
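Because prompts run sequentially, total runtime scales linearly with the prompt count. A rough estimator using the 30-second and 3-minute per-prompt bounds stated above:

```python
def estimate_runtime_minutes(num_prompts, per_prompt_seconds=(30, 180)):
    """Return a (low, high) runtime estimate in minutes for a sequential run."""
    low_s, high_s = per_prompt_seconds
    return num_prompts * low_s / 60, num_prompts * high_s / 60

low, high = estimate_runtime_minutes(10)
print(f"10 prompts: roughly {low:.0f} to {high:.0f} minutes")
```

Useful for deciding between the synchronous and asynchronous API patterns below: runs expected to exceed a few minutes are better started asynchronously.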
## API usage
### Synchronous (wait for results inline)
Best for short prompt lists (under 5 minutes total runtime):
```shell
curl -X POST "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/run-sync-get-dataset-items?token=YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompts": ["What are the best enterprise project management systems?"], "provider": "perplexity"}'
```
Returns dataset items directly in the response body.
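The same synchronous endpoint can be called from Python with only the standard library. A minimal sketch (no error handling; the 600-second timeout is an assumption sized to the runtime figures above):

```python
import json
import urllib.request

ACTOR = "jakubsuchy~llm-prompt-response"
BASE = "https://api.apify.com/v2"

def build_sync_url(token):
    """URL of the run-sync-get-dataset-items endpoint used in the curl example."""
    return f"{BASE}/acts/{ACTOR}/run-sync-get-dataset-items?token={token}"

def run_sync(token, prompts, provider="perplexity"):
    """POST the input and block until the run's dataset items come back."""
    req = urllib.request.Request(
        build_sync_url(token),
        data=json.dumps({"prompts": prompts, "provider": provider}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=600) as resp:
        return json.load(resp)

# items = run_sync("YOUR_APIFY_TOKEN", ["What are the best project management tools?"])
```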
### Asynchronous (start run, poll, fetch)
Best for larger batches or when you need non-blocking behavior:
```shell
# 1. Start the run
curl -X POST "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs?token=YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompts": ["What are the best enterprise project management systems?"], "provider": "perplexity"}'
# Response contains: { "data": { "id": "RUN_ID", "status": "RUNNING", ... } }

# 2. Poll until status is SUCCEEDED (or FAILED / TIMED-OUT)
curl "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs/RUN_ID?token=YOUR_APIFY_TOKEN"

# 3. Fetch results
curl "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs/RUN_ID/dataset/items?token=YOUR_APIFY_TOKEN"
```
Bash polling loop:
```shell
RUN_ID="your-run-id"
TOKEN="your-token"

while true; do
  STATUS=$(curl -s "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs/$RUN_ID?token=$TOKEN" \
    | python3 -c "import sys, json; print(json.load(sys.stdin)['data']['status'])")
  echo "Status: $STATUS"
  [[ "$STATUS" == "SUCCEEDED" || "$STATUS" == "FAILED" || "$STATUS" == "TIMED-OUT" ]] && break
  sleep 5
done

curl -s "https://api.apify.com/v2/acts/jakubsuchy~llm-prompt-response/runs/$RUN_ID/dataset/items?token=$TOKEN"
```
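The same polling logic in Python, written with an injectable status getter so the loop can be exercised without network access (the `get_status` callable stands in for the `GET /runs/{RUN_ID}` request above and should return the run's `data.status` string):

```python
import time

# Terminal run states named in the polling example above
TERMINAL = {"SUCCEEDED", "FAILED", "TIMED-OUT"}

def wait_for_run(get_status, poll_seconds=5, sleep=time.sleep):
    """Poll `get_status` until the run reaches a terminal state."""
    while True:
        status = get_status()
        if status in TERMINAL:
            return status
        sleep(poll_seconds)

# Simulated status sequence in place of real API calls
statuses = iter(["RUNNING", "RUNNING", "SUCCEEDED"])
print(wait_for_run(lambda: next(statuses), sleep=lambda _: None))  # SUCCEEDED
```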
## Integrations
Connect LLM Prompt Response with other tools using Apify's built-in integrations:
- Webhooks -- Get notified when a run completes
- API -- Trigger runs programmatically and fetch results via the Apify API
- Schedules -- Run on a recurring schedule to track AI responses over time
- Google Sheets -- Export results directly to a spreadsheet
- Slack / Email -- Send notifications when runs finish