RFP Opportunity Scout
Pricing
from $0.01 / 1,000 results
Find RFPs/tenders from user-provided procurement pages, extract details, and use AI to summarize and qualify matches.
Input
Procurement listing pages to crawl (e.g., a portal 'opportunities' page). Provide multiple sources if needed.
Maximum number of requests that can be made by this crawler.
Maximum link depth from the start URLs. Depth 0 = listing page only. Depth 1 includes detail pages.
If enabled, the crawler only enqueues links from the same hostname as each start URL.
Optional list of substrings. If set, only links whose URL contains one of these values are treated as opportunity detail pages.
[ "rfp", "tender", "solicitation", "opportunity", "procurement"]Keyword pre-filter for detail pages (used before AI). If empty, all detail pages are considered.
List of exclude keywords. If any exclude keyword is present, the opportunity is skipped. Default: [].
Minimum AI match score (0-100) for an opportunity to be considered qualified.
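To make the interplay of these filters concrete, here is a minimal sketch of how the keyword pre-filter and the score gate might compose. All identifiers (includeKeywords, excludeKeywords, minScore) are illustrative assumptions, not the Actor's actual input fields.

```ts
// Illustrative sketch of the pre-AI keyword filter and the post-AI score gate.
// Field names below are assumptions for illustration only; the Actor's real
// input schema may use different identifiers.
interface FilterConfig {
  includeKeywords: string[]; // empty array = consider every detail page
  excludeKeywords: string[]; // any hit skips the opportunity
  minScore: number;          // 0-100 threshold applied after AI enrichment
}

function passesKeywordPreFilter(text: string, cfg: FilterConfig): boolean {
  const haystack = text.toLowerCase();
  // Exclude keywords win: one hit skips the opportunity outright.
  if (cfg.excludeKeywords.some((k) => haystack.includes(k.toLowerCase()))) return false;
  // An empty include list means every detail page is considered.
  if (cfg.includeKeywords.length === 0) return true;
  return cfg.includeKeywords.some((k) => haystack.includes(k.toLowerCase()));
}

function isQualified(aiScore: number, cfg: FilterConfig): boolean {
  return aiScore >= cfg.minScore; // only qualified opportunities are kept
}
```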
Optional delay applied at the start of each request handler to reduce load on target servers.
Use Apify Proxy for IP rotation to prevent blocking.
Enable verbose logging (useful for development).
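Putting the crawling options together, a plausible input object could look like the sketch below. Every field name is an assumption inferred from the descriptions above; consult the Actor's input schema for the authoritative names.

```ts
// A plausible input object for the crawling side of the Actor. All field
// names here are assumptions, not the Actor's documented schema.
const crawlInput = {
  startUrls: [{ url: 'https://procurement.example.gov/opportunities' }],
  maxRequests: 200,          // hard cap on requests the crawler may make
  maxDepth: 1,               // 0 = listing page only, 1 = include detail pages
  sameDomain: true,          // only enqueue links on each start URL's hostname
  detailUrlIncludes: ['/rfp/', '/tender/'], // substrings marking detail pages
  requestDelaySecs: 1,       // polite delay at the start of each request
  useApifyProxy: true,       // rotate IPs via Apify Proxy to avoid blocking
  debugLog: false,           // verbose logging for development
};
```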
When enabled, the Actor uses an LLM (Ollama Cloud or local Ollama) to summarize and qualify opportunities.
Which API format to use: 'ollama' (native /api/chat) or 'openai' (OpenAI-compatible /v1/chat/completions).
Base URL for the LLM API. For local Ollama: http://localhost:11434. For Ollama Cloud, set the Cloud base URL here or via LLM_BASE_URL env var.
API key for the LLM provider. Prefer setting LLM_API_KEY (or OLLAMA_API_KEY) as an Actor secret instead of passing it in input.
Model used for structured enrichment. Manage this via LLM_MODEL/OLLAMA_MODEL env vars if you prefer.
Optional full URL override for OpenAI-compatible chat completions (e.g., https://.../v1/chat/completions). Takes precedence over llmBaseUrl when llmApiStyle='openai'.
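The two API styles differ only in endpoint and payload shape. The endpoints (/api/chat and /v1/chat/completions) are the ones named above; the request/response fields shown are the standard ones for each API, while the base URL, model name, and env-var fallbacks are illustrative.

```ts
// Sketch of the two API styles. Endpoints come from this page; payload and
// response shapes are the standard ones for each API.
const baseUrl = process.env.LLM_BASE_URL ?? 'http://localhost:11434';
const model = process.env.LLM_MODEL ?? 'llama3.1'; // illustrative model name
const messages = [{ role: 'user', content: 'Summarize this RFP: ...' }];

// llmApiStyle = 'ollama': native /api/chat
async function chatOllama(): Promise<string> {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages, stream: false }),
  });
  const data: any = await res.json();
  return data.message.content;
}

// llmApiStyle = 'openai': OpenAI-compatible /v1/chat/completions
async function chatOpenAi(): Promise<string> {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LLM_API_KEY ?? ''}`,
    },
    body: JSON.stringify({ model, messages }),
  });
  const data: any = await res.json();
  return data.choices[0].message.content;
}
```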
Maximum number of opportunities that will be sent to the AI model for enrichment.
How many AI enrichment requests can run in parallel.
Maximum characters sent to the AI model per opportunity.
Maximum output tokens requested from the AI model per opportunity.
Sampling temperature for AI enrichment. Lower values are more deterministic.
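A rough sketch of how these throughput limits could be enforced around a chat call like the one in the previous example: cap how many items reach the model, truncate each item's text, and fan out a fixed number of workers. Temperature and the output-token cap would ride along in each request body (temperature and max_tokens in the OpenAI style, or options.temperature and options.num_predict in native Ollama). All names here are assumptions.

```ts
// Illustrative enforcement of the enrichment limits: item cap, per-item
// character truncation, and a fixed-size worker pool for concurrency.
async function enrichAll(
  items: string[],
  chat: (prompt: string) => Promise<string>,
): Promise<void> {
  const maxAiItems = 50;     // cap on opportunities sent to the model
  const aiConcurrency = 4;   // parallel enrichment requests
  const maxInputChars = 8000; // characters sent per opportunity

  const queue = items.slice(0, maxAiItems);
  const workers = Array.from({ length: aiConcurrency }, async () => {
    while (queue.length > 0) {
      const item = queue.shift();
      if (item === undefined) break;
      await chat(item.slice(0, maxInputChars)); // truncate before sending
    }
  });
  await Promise.all(workers);
}
```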
Enable if monetized with Apify pay-per-event. Disable for local development.
Event to charge for each qualified opportunity.
If enabled, the Actor stops further AI enrichments when the user's max cost limit is reached.
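For completeness, an end-to-end sketch of running the Actor from the standard JavaScript client and reading qualified opportunities from the run's default dataset. The Actor ID and input field names are placeholders; the apify-client calls themselves (actor().call(), dataset().listItems()) are the client's documented API.

```ts
// End-to-end sketch: run the Actor and read qualified opportunities from its
// default dataset. Actor ID and input fields are placeholder assumptions.
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

const run = await client.actor('your-username/rfp-opportunity-scout').call({
  startUrls: [{ url: 'https://procurement.example.gov/opportunities' }],
  useAi: true,         // turn on LLM summarization/qualification
  minScore: 70,        // qualification threshold described above
  stopOnMaxCost: true, // halt AI enrichment at the user's cost limit
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`${items.length} qualified opportunities`);
```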