RAG Web Browser

Web browser for AI agents and RAG pipelines. Search Google, scrape top pages, return clean Markdown. Faster, lighter, and cheaper than alternatives. Works with Claude, GPT, LangChain, CrewAI via MCP.

Pricing: Pay per usage
Developer: Tugelbay Konabayev (Maintained by Community)

RAG Web Browser — Search Google & Extract Web Content for LLMs

Web browser designed for AI agents, RAG pipelines, and LLM applications. Search Google, scrape the top results, and get clean Markdown content — ready to inject into prompts, vector databases, or retrieval pipelines. Similar to ChatGPT's web browsing, but as an API you control.

Give your AI agent the ability to search the web and read any page — in one API call.

Main features

  • Search Google and scrape top N results in a single call
  • Scrape specific URLs directly (bypass search)
  • Auto-detect JavaScript-heavy pages and fall back to headless browser
  • Clean content extraction via Mozilla Readability algorithm
  • Output as Markdown, plain text, or clean HTML
  • Google SERP proxy for reliable, unblocked search results
  • Lightweight: runs on 256MB–1GB (vs 8GB for alternatives)
  • PPE pricing: pay only for pages you actually scrape
  • MCP and OpenAPI compatible — works with Claude, GPT, LangChain, CrewAI
  • Open for inspection — review the source code before using

Why this actor over alternatives?

| Feature | This Actor | apify/rag-web-browser | Manual scraping |
|---|---|---|---|
| Memory usage | 256MB–1GB | 8GB (8x more) | Varies |
| Pricing model | PPE (pay per page) | $20/month rental | Free but manual |
| Cold start time | ~2 seconds | ~10 seconds | N/A |
| Output quality | Readability + html2text | Custom extractor | Depends |
| JavaScript support | Auto-detect + fallback | Always full browser | Need Playwright |
| MCP compatible | Yes (PPE) | Yes (rental) | No |
| Google retry logic | Yes (3 attempts, backoff) | Limited | Manual |
| AI agent friendly | PPE = per-task billing | Rental = flat fee | No billing |

Key advantage: PPE pricing means AI agents pay per page scraped, not a flat monthly fee. This makes the actor MCP-native — AI systems can call it on-demand without committing to a subscription.

How it works

  1. You provide a search query (e.g., "best RAG frameworks 2026") or a URL
  2. If search query: the actor queries Google via SERP proxy and gets top N result URLs
  3. Each URL is fetched using fast HTTP first (raw HTML), then Playwright browser for JS-heavy sites
  4. Content is extracted using Mozilla Readability, cleaned, and converted to Markdown
  5. Results are returned with metadata (title, description, language, URL, HTTP status)
[Search Query] → [Google SERP] → [Top N URLs] → [Fetch HTML] → [Readability] → [Markdown]

or

[Direct URL] → [Fetch HTML] → [Readability] → [Markdown]
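The auto-detect step (step 3) can be pictured as a simple heuristic: if raw HTTP yields too little readable text and the page appears to ship a JavaScript framework, retry with the browser. A sketch, where the threshold and markers are illustrative and not the actor's actual logic:

```python
MIN_TEXT_LENGTH = 500  # assumed threshold, not the actor's real value

def needs_browser(extracted_text: str, html: str) -> bool:
    """Heuristic: fall back to a headless browser when raw HTTP yields
    little readable text but the page appears to ship a JS framework."""
    js_markers = ("react", "vue", "angular", "__next_data__")
    looks_js_heavy = any(m in html.lower() for m in js_markers)
    return len(extracted_text.strip()) < MIN_TEXT_LENGTH and looks_js_heavy
```

A real implementation would also consider HTTP status, content type, and Readability's own confidence score, but the shape of the decision is the same.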

Usage modes

The RAG Web Browser can be used in two ways: as a standard Actor run or in Standby mode as a persistent HTTP server.

Standard Actor run

Run the Actor via the Apify API, schedule, integrations, or manually in Console. Pass an input JSON object with your search query and settings. Results are stored in the default dataset.

This mode is best for:

  • Testing and evaluation
  • Batch processing (scrape many queries in sequence)
  • Scheduled runs (daily news extraction, content monitoring)
  • One-off research tasks

Standby mode

The Actor also supports Standby mode, in which it runs a persistent HTTP server that receives requests and responds with extracted web content.

Why Standby mode is better for production:

  • No cold start — the Actor is already running, responses are faster
  • Parallel requests — handles multiple queries simultaneously
  • Cost-efficient — one Actor run serves many requests
  • Simple HTTP API — just send a GET request

To use Standby mode, send an HTTP GET request:

https://<your-actor>.apify.actor/search?token=<APIFY_API_TOKEN>&query=hello+world

Replace <APIFY_API_TOKEN> with your Apify API token. You can also pass the token via the Authorization HTTP header for increased security.

The /search endpoint accepts all input parameters as query strings. Object parameters like proxyConfiguration should be URL-encoded JSON strings.
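For example, a Standby request with a URL-encoded proxyConfiguration can be assembled with Python's standard library alone (build_search_url is a hypothetical helper, not part of any official client):

```python
import json
from urllib.parse import urlencode

def build_search_url(base: str, token: str, query: str, **params) -> str:
    """Build a Standby-mode /search URL; object and array parameters are
    passed as URL-encoded JSON strings, as the endpoint expects."""
    qs = {"token": token, "query": query}
    for key, value in params.items():
        qs[key] = json.dumps(value) if isinstance(value, (dict, list)) else value
    return f"{base}/search?{urlencode(qs)}"

url = build_search_url(
    "https://rag-web-browser.apify.actor",
    "YOUR_TOKEN",
    "hello world",
    maxResults=3,
    proxyConfiguration={"useApifyProxy": True},
)
```

Passing the token via the Authorization header instead of the query string works the same way; only the `token` entry moves into the request headers.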

Input examples

Search Google and get top 3 results as Markdown

```json
{
  "query": "retrieval augmented generation best practices",
  "maxResults": 3,
  "outputFormat": "markdown"
}
```

Scrape specific URLs directly

```json
{
  "urls": [
    { "url": "https://openai.com/index/introducing-chatgpt-search/" },
    { "url": "https://docs.apify.com/platform/actors/publishing" }
  ],
  "outputFormat": "markdown"
}
```

Search with country filter and browser mode

```json
{
  "query": "AI trends 2026",
  "maxResults": 5,
  "googleCountry": "uk",
  "scrapingTool": "browser"
}
```

Fast extraction (raw HTTP only, no JavaScript)

```json
{
  "query": "python web scraping tutorial",
  "maxResults": 10,
  "scrapingTool": "raw-http",
  "outputFormat": "text"
}
```

Single URL with both Markdown and text output

```json
{
  "query": "https://en.wikipedia.org/wiki/Retrieval-augmented_generation",
  "outputFormat": "both"
}
```

Input parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| query | String | — | Google search query or a direct URL. Examples: "best RAG frameworks", "https://example.com" |
| urls | Array | — | List of specific URLs to scrape (alternative to query) |
| maxResults | Integer | 3 | Number of Google search results to scrape (1–20) |
| outputFormat | String | "markdown" | Output format: "markdown", "text", "html", or "both" (markdown + text) |
| scrapingTool | String | "auto" | Scraping method: "auto" (recommended), "raw-http" (fastest), "browser" (JavaScript support) |
| googleCountry | String | "us" | Country for Google results (ISO 3166-1 alpha-2 code) |
| proxyConfiguration | Object | Google SERP | Proxy settings. Default uses Google SERP proxy for search. |

Output format

Each item in the dataset contains:

| Field | Type | Description |
|---|---|---|
| url | String | Final page URL (after redirects) |
| title | String | Page title (from Readability or meta tags) |
| description | String | Page meta description or Open Graph description |
| languageCode | String | Detected content language (e.g., "en") |
| markdown | String | Extracted content as Markdown (if outputFormat includes markdown) |
| text | String | Extracted content as plain text (if outputFormat includes text) |
| httpStatusCode | Integer | HTTP response status code |
| requestStatus | String | "handled" (success) or "failed" |
| loadedAt | String | ISO 8601 timestamp of when the page was loaded |
| searchTitle | String | Google search result title (only for search queries) |
| searchDescription | String | Google search snippet (only for search queries) |
| searchUrl | String | Original Google result URL (only for search queries) |

Example output (search query)

```json
{
  "url": "https://example.com/rag-guide",
  "title": "RAG Best Practices - A Complete Guide",
  "description": "Learn how to build effective RAG pipelines with up-to-date techniques.",
  "languageCode": "en",
  "markdown": "# RAG Best Practices\n\nRetrieval Augmented Generation (RAG) combines...\n\n## Key Principles\n\n1. **Chunk wisely** — use semantic chunking...\n2. **Embed efficiently** — match embedding model to query type...",
  "text": null,
  "httpStatusCode": 200,
  "requestStatus": "handled",
  "loadedAt": "2026-03-29T12:00:00Z",
  "searchTitle": "RAG Best Practices - A Complete Guide",
  "searchDescription": "Learn how to build effective RAG pipelines...",
  "searchUrl": "https://example.com/rag-guide"
}
```

Example output (direct URL)

```json
{
  "url": "https://openai.com/index/introducing-chatgpt-search/",
  "title": "Introducing ChatGPT search | OpenAI",
  "description": "Get fast, timely answers with links to relevant web sources",
  "languageCode": "en-US",
  "markdown": "# Introducing ChatGPT search | OpenAI\n\nGet fast, timely answers with links to relevant web sources.\n\nChatGPT can now search the web in a much better way than before...",
  "text": null,
  "httpStatusCode": 200,
  "requestStatus": "handled",
  "loadedAt": "2026-03-29T12:05:00Z",
  "searchTitle": null,
  "searchDescription": null,
  "searchUrl": null
}
```
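Downstream code typically filters on requestStatus before injecting content into a prompt. A minimal sketch over dataset items shaped like the examples above:

```python
def collect_markdown(items: list[dict]) -> str:
    """Join the Markdown of successfully scraped items into one context
    string, skipping failed requests and empty bodies."""
    handled = [
        i for i in items
        if i.get("requestStatus") == "handled" and i.get("markdown")
    ]
    return "\n\n---\n\n".join(
        f"Source: {i['url']}\n\n{i['markdown']}" for i in handled
    )
```

Keeping the source URL next to each chunk lets the LLM cite where an answer came from.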

Integration with LLMs

RAG Web Browser is designed for easy integration with LLM applications, AI agents, OpenAI Assistants, GPTs, and RAG pipelines via function calling.

OpenAPI schema

Use the OpenAPI schema to integrate with any LLM that supports function calling:

  • OpenAPI 3.1.0 schema — for modern LLM platforms
  • The schema contains all available query parameters, but only query is required

Tip: Remove optional parameters from the schema to reduce token usage and minimize hallucinations in function calling.
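One way to follow this tip is to drop optional properties from the schema's parameter object before registering the tool. A minimal sketch, assuming a simplified OpenAPI-style fragment rather than the actor's full schema:

```python
def prune_optional(schema: dict, keep: set[str]) -> dict:
    """Return a copy of a JSON-schema parameters object with only the
    properties in `keep`, shrinking the tool definition the LLM sees."""
    props = {k: v for k, v in schema.get("properties", {}).items() if k in keep}
    return {
        **schema,
        "properties": props,
        "required": [r for r in schema.get("required", []) if r in keep],
    }

schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "maxResults": {"type": "integer"},
        "googleCountry": {"type": "string"},
    },
    "required": ["query"],
}
slim = prune_optional(schema, keep={"query", "maxResults"})
```

Since only query is required, even a schema reduced to that single property remains valid for function calling.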

Apify MCP Server (Claude, AI agents)

The actor works with AI agents via the Apify MCP Server. Use it as a web browsing tool in Claude Desktop, Claude Code, or any MCP-compatible AI framework.

Step-by-step setup for Claude Desktop:

  1. Install the Apify MCP Server package
  2. Add to your Claude Desktop MCP configuration (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": ["-y", "@apify/mcp-server"],
      "env": {
        "APIFY_TOKEN": "your-apify-api-token"
      }
    }
  }
}
```

  3. Restart Claude Desktop
  4. Ask Claude: "Search the web for 'best RAG frameworks 2026' and summarize the top results"
  5. Claude will call the RAG Web Browser actor and return summarized content

OpenAI Assistants

OpenAI Assistants don't support web browsing natively. RAG Web Browser adds this capability:

  1. Create an Assistant in the OpenAI Platform
  2. Add a function tool with the RAG Web Browser OpenAPI schema
  3. Configure the function to call the Standby web server URL
  4. Your Assistant can now search the web and read pages

For detailed instructions, see OpenAI Assistants integration in Apify docs.

OpenAI GPTs (Custom Actions)

Add web browsing to your GPTs:

  1. Go to My GPTs → Create a GPT
  2. Under Actions → Create new action
  3. Set Authentication to API key, Auth Type Bearer
  4. Paste the OpenAPI schema in the Schema field
  5. Save and test — your GPT can now search Google and extract web content

Python integration

```python
from apify_client import ApifyClient

client = ApifyClient("your-apify-api-token")

# Search Google and get top 3 results as Markdown
run = client.actor("tugelbay/rag-web-browser").call(
    run_input={
        "query": "best RAG frameworks 2026",
        "maxResults": 3,
        "outputFormat": "markdown",
    }
)

# Read results from the default dataset
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"## {item['title']}")
    print(f"URL: {item['url']}")
    print(f"Content: {item['markdown'][:500]}...")
    print()
```

JavaScript/TypeScript integration

```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "your-apify-api-token" });

// Search and extract
const run = await client.actor("tugelbay/rag-web-browser").call({
    query: "best RAG frameworks 2026",
    maxResults: 3,
    outputFormat: "markdown",
});

// Read results
const { items } = await client.dataset(run.defaultDatasetId).listItems();
for (const item of items) {
    console.log(`## ${item.title}`);
    console.log(`URL: ${item.url}`);
    console.log(`Content: ${item.markdown?.substring(0, 500)}...`);
}
```

LangChain integration

```python
from langchain_community.utilities import ApifyWrapper
from langchain_core.documents import Document

apify = ApifyWrapper(apify_api_token="your-apify-api-token")

# Use as a document loader for RAG; call_actor returns a dataset loader
loader = apify.call_actor(
    actor_id="tugelbay/rag-web-browser",
    run_input={
        "query": "retrieval augmented generation best practices",
        "maxResults": 5,
        "outputFormat": "markdown",
    },
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("markdown", ""),
        metadata={
            "url": item.get("url"),
            "title": item.get("title"),
        },
    ),
)
docs = loader.load()

# Feed into your RAG pipeline
for doc in docs:
    print(f"Title: {doc.metadata['title']}")
    print(f"Content length: {len(doc.page_content)} chars")
```

cURL (Standby mode)

```shell
# Search Google
curl "https://rag-web-browser.apify.actor/search?token=YOUR_TOKEN&query=best+RAG+frameworks&maxResults=3"

# Scrape a specific URL
curl "https://rag-web-browser.apify.actor/search?token=YOUR_TOKEN&query=https://example.com"

# Fast mode (raw HTTP, no JavaScript)
curl "https://rag-web-browser.apify.actor/search?token=YOUR_TOKEN&query=python+tutorial&scrapingTool=raw-http"
```

Performance optimization

Scraping tool selection

The most critical performance decision is selecting the right scraping method:

| Method | Speed | JavaScript | Best for |
|---|---|---|---|
| raw-http | Fastest (2–5s) | No | Static sites, blogs, docs, Wikipedia |
| browser | Slower (10–20s) | Yes | SPAs, React/Vue apps, dynamic content |
| auto | Adaptive | Auto-detect | Mixed workloads (recommended default) |

Recommendation: Use raw-http when you know your target sites are static. Use auto when scraping unknown URLs from Google search results.

Response time benchmarks

Typical latency for different configurations (averaged across 3 search queries):

| Memory | Max results | Scraping tool | Avg latency |
|---|---|---|---|
| 256MB | 1 | raw-http | 3–5s |
| 256MB | 3 | raw-http | 5–8s |
| 512MB | 1 | auto | 8–12s |
| 512MB | 3 | auto | 12–18s |
| 1GB | 5 | auto | 15–25s |
| 1GB | 3 | browser | 20–30s |

Note: First request after cold start takes 2–5 seconds longer. Use Standby mode to eliminate cold starts in production.

Memory comparison with alternatives

| Actor | Memory required | Pages in parallel | Cost per hour |
|---|---|---|---|
| This actor | 256MB–1GB | 3–5 | ~$0.03–0.10 |
| apify/rag-web-browser | 8GB | 24 | ~$0.80 |
| Generic Playwright scraper | 2–4GB | 1–3 | ~$0.20–0.40 |

This actor uses 8x less memory than the official Apify RAG Web Browser because it uses raw HTTP requests as the primary scraping method and only launches a browser when JavaScript rendering is actually needed.

Tips for reducing latency

  1. Use raw-http scraping tool for static websites — up to 5x faster
  2. Reduce maxResults — fewer pages = faster response
  3. Use Standby mode — eliminates Docker container startup time
  4. Set a timeout — the actor returns partial results if time runs out, so your LLM gets at least some context

Cost vs. throughput optimization

When running in Standby mode, you can tune memory and request limits:

  • Low memory (256MB): Cheapest, handles 1–2 concurrent requests. Good for low-traffic applications.
  • Medium memory (512MB–1GB): Balanced. Handles 3–5 concurrent requests. Recommended for most use cases.
  • High memory (2GB+): Maximum throughput. Only needed for high-traffic applications with 10+ concurrent requests.

Create a Task in Apify Console to override default Standby settings for your specific use case.

Use cases

RAG pipelines — feed vector databases with fresh web content

Search a topic and inject the results into your vector store for retrieval-augmented generation:

```python
# Search and store in ChromaDB (illustrative: rag_web_browser and
# vector_store stand in for your actor client and ChromaDB collection)
results = rag_web_browser.search("latest AI safety research 2026", max_results=10)
for result in results:
    vector_store.add(
        documents=[result["markdown"]],
        metadatas=[{"url": result["url"], "title": result["title"]}],
    )
```

AI agents — web browsing tool

Give your AI agent the ability to search and read the web. Works with any agent framework that supports function calling or MCP.

Research automation

Search a topic and get structured content from multiple sources. Perfect for literature reviews, competitive analysis, and market research.

Content monitoring

Track changes on specific pages by scraping them on a schedule. Compare Markdown output between runs to detect content changes.
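The comparison step can be as simple as hashing each page's Markdown between scheduled runs. A minimal sketch, where previous_hashes stands in for whatever key-value store you persist between runs:

```python
import hashlib

def detect_changes(items: list[dict], previous_hashes: dict[str, str]) -> list[str]:
    """Return URLs whose Markdown hash differs from the previous run,
    updating previous_hashes in place for the next run."""
    changed = []
    for item in items:
        digest = hashlib.sha256(item["markdown"].encode()).hexdigest()
        if previous_hashes.get(item["url"]) != digest:
            changed.append(item["url"])
        previous_hashes[item["url"]] = digest
    return changed
```

On the Apify platform, a key-value store record is a natural place to keep previous_hashes between scheduled runs.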

Knowledge base building

Extract and index documentation sites. Combine with a vector database to build a searchable knowledge base from any public website.

Competitive analysis

Scrape competitor pages, extract their content, and analyze messaging, features, and positioning.

News aggregation

Search for breaking news on a topic and get clean article text — no ads, no navigation, just the content.

Cost estimation (PPE pricing)

This actor uses Pay-Per-Event pricing:

| Event | Description |
|---|---|
| page-scraped | Each page successfully scraped and content extracted |

Example costs:

| Scenario | Pages | Events | Estimated cost |
|---|---|---|---|
| Search + top 3 results | 3 | 3 | ~$0.015 |
| Search + top 10 results | 10 | 10 | ~$0.05 |
| Direct URL scrape (5 pages) | 5 | 5 | ~$0.025 |
| Daily monitoring (20 pages) | 600/mo | 600 | ~$3/month |
| RAG pipeline (100 queries/day) | 9,000/mo | 9,000 | ~$45/month |

First 100 pages are free to help you evaluate the actor.
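The examples above imply a rate of roughly $0.005 per page-scraped event. A quick estimator under that assumption (the authoritative per-event price is the one shown on the actor's pricing tab):

```python
PRICE_PER_PAGE = 0.005  # assumed rate implied by the example table above

def monthly_cost(pages_per_day: int, days: int = 30) -> float:
    """Estimate steady-state monthly PPE cost (ignores the one-time
    100-page free evaluation quota)."""
    return round(pages_per_day * days * PRICE_PER_PAGE, 2)
```

For instance, the daily-monitoring scenario (20 pages/day) works out to 20 × 30 × $0.005 = $3/month, matching the table.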

Comparison with alternatives:

| Solution | 1,000 pages/month | 10,000 pages/month |
|---|---|---|
| This actor (PPE) | ~$5 | ~$50 |
| apify/rag-web-browser (Rental) | $20 (flat) | $20 (flat) |
| SerpAPI + scraper | $50+ | $200+ |
| Custom infrastructure | $30+ (server costs) | $100+ |

When PPE is cheaper: Low-to-medium volume (<4,000 pages/month). For very high volume, rental may be more cost-effective — but PPE has the advantage of being AI/MCP-compatible and scaling down to zero when not in use.

FAQ

How is this different from the official apify/rag-web-browser?

The official Apify RAG Web Browser uses a full Playwright browser for every request, requiring 8GB of memory and costing $20/month as a rental. This actor uses a smart hybrid approach: fast HTTP requests for most pages, with automatic Playwright fallback only for JavaScript-heavy sites. Result: 8x less memory, PPE pricing (pay per page), and faster response times for static content.

Can I use this with Claude / ChatGPT / other AI assistants?

Yes. The actor works with:

  • Claude Desktop via Apify MCP Server
  • OpenAI GPTs via Custom Actions (OpenAPI schema)
  • OpenAI Assistants via function calling
  • LangChain, CrewAI, AutoGen via Apify Python/JS client
  • Any framework that supports HTTP APIs or MCP

Does it handle JavaScript-rendered pages (SPAs)?

Yes. Set scrapingTool to "auto" (default) and the actor will automatically detect pages that need JavaScript rendering and use a headless Chromium browser. Or set scrapingTool to "browser" to always use the browser.

What about anti-scraping protections (Cloudflare, CAPTCHAs)?

The actor uses Apify's proxy infrastructure (SERP proxy for Google, datacenter/residential proxies for target pages) to bypass common protections. However, some sites with aggressive bot detection may still block requests. The actor returns a "failed" status for pages it can't access.

Can I search in languages other than English?

Yes. Set the googleCountry parameter to any ISO 3166-1 alpha-2 country code (e.g., "de" for Germany, "jp" for Japan, "br" for Brazil). Google will return localized results.

What's the maximum number of results per query?

20 results per search query (Google's limit). For more coverage, run multiple queries with different search terms.
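The multi-query approach can be sketched as follows, where run_query is a hypothetical helper wrapping an actor call and returning result dicts:

```python
def gather_unique(run_query, queries: list[str], max_results: int = 20) -> list[dict]:
    """Run several search queries and merge their results, keeping the
    first occurrence of each URL."""
    seen, merged = set(), []
    for q in queries:
        for item in run_query(q, max_results):
            if item["url"] not in seen:
                seen.add(item["url"])
                merged.append(item)
    return merged
```

Varying the phrasing of each query ("RAG frameworks", "retrieval augmented generation libraries", ...) tends to surface different pages, so de-duplicating by URL keeps the merged set clean.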

How do I use this in a RAG pipeline?

  1. Call the actor with your search query
  2. Get Markdown content from the results
  3. Split the Markdown into chunks (sentence-level or paragraph-level)
  4. Embed chunks with your embedding model (OpenAI, Cohere, etc.)
  5. Store in your vector database (Pinecone, ChromaDB, Weaviate, etc.)
  6. Query the vector store during LLM inference for relevant context
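Steps 2 and 3 above can be sketched with a simple paragraph-level chunker (illustrative only; production pipelines typically use a token-aware splitter):

```python
def chunk_markdown(markdown: str, max_chars: int = 1000) -> list[str]:
    """Split Markdown into paragraph-based chunks no longer than
    max_chars, greedily packing adjacent paragraphs together."""
    chunks, current = [], ""
    for para in markdown.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then goes to the embedding model in step 4, with the page URL and title attached as metadata for retrieval.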

Is the output compatible with OpenAI / Anthropic token limits?

Yes. Markdown output is compact and token-efficient. A typical web page produces 1,000–5,000 tokens of Markdown. You can control output size by adjusting maxResults and using text format (slightly more compact than Markdown).
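For budgeting context size, a common rough heuristic is about four characters per token for English prose. A sketch (an approximation, not a tokenizer):

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate (~4 characters per token for English prose)."""
    return max(1, len(text) // 4)

def fits_budget(markdown_items: list[str], budget: int = 8000) -> bool:
    """Check whether the combined Markdown fits a prompt token budget."""
    return sum(approx_tokens(m) for m in markdown_items) <= budget
```

For exact counts, use your model provider's tokenizer; the heuristic is only for deciding when to trim maxResults or truncate pages.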

Can I run this on a schedule?

Yes. Set up a Schedule in Apify Console to run the actor at any interval — daily, hourly, or custom cron expressions.

Troubleshooting

Google search returns 0 results

  • Cause: Google may temporarily rate-limit the SERP proxy IP
  • Fix: The actor retries automatically (3 attempts with backoff). If it still fails, try again in a few minutes.
  • Alternative: Provide direct URLs via the urls input instead of using search.

Page content is empty or very short

  • Cause: The page requires JavaScript to render content (SPA)
  • Fix: Set scrapingTool to "browser" or "auto" to enable Playwright rendering
  • Alternative: Some pages (login walls, paywalled content) simply can't be scraped

Timeout errors

  • Cause: Target page is slow to respond or has heavy JavaScript
  • Fix: Increase the timeout, or reduce maxResults to scrape fewer pages per query
  • Alternative: Use raw-http scraping tool for faster (but JavaScript-less) extraction

Markdown output has formatting issues

  • Cause: Complex page layouts (multi-column, heavy CSS) may not convert cleanly
  • Fix: This is expected for non-article pages. The Readability algorithm works best on article-style content (blogs, news, documentation).
  • Alternative: Use text output format for a simpler, cleaner extraction.

"Failed" status for some pages

  • Cause: Cloudflare protection, login walls, IP blocks, or the page doesn't exist
  • Fix: Try using residential proxy configuration. Some pages simply can't be scraped.

Limitations

  • Google Search may rate-limit; the actor retries automatically (3 attempts with exponential backoff)
  • Some websites block scraping entirely (Cloudflare protection, CAPTCHA, login walls)
  • JavaScript-heavy SPAs may require "browser" scraping mode (slower but more reliable)
  • Maximum 20 search results per query (Google's limit)
  • Content extraction works best on article-style pages; complex layouts (dashboards, apps) may lose formatting
  • Direct URL scraping uses datacenter proxy by default; some sites may require residential proxy

Changelog

v1.0 (2026-03-29)

  • Initial release
  • Google Search + page scraping in one call
  • Auto-detect JS-heavy pages with Playwright fallback
  • Readability + html2text for clean Markdown extraction
  • Google SERP proxy support with 3-attempt retry
  • Dual proxy strategy (SERP for Google, datacenter for target pages)
  • PPE pricing (first 100 pages free)
  • Supports Markdown, plain text, HTML, and combined output formats
  • Concurrent scraping (up to 3 pages in parallel)