# AI Deep Research Agent — Cited Reports & Market Intelligence

**Pricing:** from $100.00 / 1,000 research reports

**Status:** under maintenance

Turn any question into a cited research report in seconds. Deep web search + 7-model AI cascade (Gemini, Groq, Cerebras). Market size, key players, action items — structured JSON output.
# XavvyNess Research Engine
AI-powered research agent. Give it any question or topic — get back a structured, cited research report in seconds. Powered by live web search + smart model routing (Gemini 2.5 Flash / Llama 3.3 70B / Gemini 2.0 Flash).
## Demo

🎬 Video demo coming soon. Upload `research-engine.mp4` to YouTube, then run `python3 scripts/actor-video-gen.py --embed-readmes` to embed it here automatically.
## What it does
Most AI research tools either hallucinate without web access, or dump raw search results with no synthesis. The XavvyNess Research Engine does both:
- Searches the live web via Tavily — real URLs, real content, not stale training data
- Synthesizes a structured report using the best model for the depth you choose
- Caches results for 24h — re-running the same query is instant and free
- Returns structured output — query, summary, full report, sources array, model used, timestamp
## Input

| Field | Type | Default | Description |
|---|---|---|---|
| `query` | string | (required) | What to research. Be specific for better results. |
| `depth` | enum | `standard` | `quick` (3–5 paragraphs) · `standard` (full report) · `deep` (comprehensive, multi-angle) |
| `format` | enum | `markdown` | `markdown` · `bullet` · `json` |
| `includeSource` | boolean | `true` | Include [n] citations and source URLs in report |
| `maxResults` | integer | `10` | Number of web sources to pull (3–30) |
## Example input

```json
{
  "query": "What are the security risks of using LLMs in production APIs?",
  "depth": "deep",
  "format": "markdown",
  "includeSource": true,
  "maxResults": 15
}
```
## Example output

Real output from a live run (truncated for display):

```json
{
  "query": "What are the best open source AI coding assistants in 2025?",
  "depth": "standard",
  "format": "markdown",
  "summary": "AI coding assistants have become essential developer tools in 2025. Windsurf, GitHub Copilot, and Cursor lead adoption. Open-source alternatives like Aider and Continue offer strong customization without vendor lock-in. Tabnine and JetBrains AI excel at refactoring and language-specific completions.",
  "report": "## Overview\n\nThe use of Artificial Intelligence (AI) in coding has become increasingly popular in recent years...\n\n## Key Findings\n\n1. **Windsurf**, **GitHub Copilot**, and **Cursor** lead in advanced code generation and IDE integration [1, 2, 3].\n2. **GitHub Copilot** integrates natively with VS Code and Neovim [3, 4].\n3. **Aider** and **Continue** are top open-source alternatives with full local model support [5, 6].\n4. **Tabnine** and **JetBrains AI Assistant** excel at refactoring and language-specific completions [7].\n5. **Gemini** is purpose-built for Android app development [1].\n\n## Sources\n[1] https://... [2] https://...",
  "sources": [
    { "title": "Windsurf", "url": "https://windsurf.com", "snippet": "AI-first code editor..." },
    { "title": "GitHub Copilot", "url": "https://github.com/features/copilot", "snippet": "AI pair programmer..." },
    { "title": "Cursor", "url": "https://cursor.com", "snippet": "The AI code editor..." },
    { "title": "Aider", "url": "https://github.com/paul-gauthier/aider", "snippet": "AI pair programming in terminal..." }
  ],
  "model": "groq/llama-3.3-70b-versatile",
  "agent": "XavvyNess Research Engine",
  "runAt": "2026-04-08T22:21:45.123Z"
}
```

The full `report` field contains the complete structured markdown report (typically 600–2000 words for standard depth).
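If you consume the output programmatically, the `sources` array can be turned back into the numbered `[n]` citation style the report uses. This is an illustrative helper, not part of the actor; `formatSources` is a hypothetical name:

```javascript
// Illustrative helper (not shipped with the actor): render the output's
// `sources` array as a numbered citation list matching the report's [n] style.
function formatSources(sources) {
  return sources
    .map((s, i) => `[${i + 1}] ${s.title}: ${s.url}`)
    .join('\n');
}
```

For example, the four sources in the run above would render as four lines, `[1] Windsurf: https://windsurf.com` through `[4] Aider: ...`.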
## Model routing (automatic)

| Depth | Model | Why |
|---|---|---|
| `quick` | Llama 3.3 70B via Groq | Sub-second inference, free tier, great for fast summaries |
| `standard` | Gemini 2.0 Flash | Best balance of speed and quality |
| `deep` | Gemini 2.5 Flash | Top reasoning for comprehensive, multi-angle analysis |

No configuration needed — routing is automatic based on your `depth` input.
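Conceptually, the routing above is a simple depth-to-model lookup. The sketch below is illustrative only, not the actor's internal code; the model ID strings are assumptions inferred from the table and the example output:

```javascript
// Sketch of the depth -> model routing described in the table above.
// Model IDs are assumed from the docs; the actor's internals may differ.
const MODEL_BY_DEPTH = {
  quick: 'groq/llama-3.3-70b-versatile',
  standard: 'gemini-2.0-flash',
  deep: 'gemini-2.5-flash',
};

function pickModel(depth = 'standard') {
  const model = MODEL_BY_DEPTH[depth];
  if (!model) throw new Error(`Unknown depth: ${depth}`);
  return model;
}
```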
## Use cases
- Competitive research — "Who are the top 5 competitors to Linear and how do they price?"
- Technical deep-dives — "What are the differences between RAG and fine-tuning for production LLMs?"
- Market analysis — "What AI agent startups received Series A funding in Q1 2026?"
- Due diligence — "What are the known scaling issues with Supabase at 1M+ rows?"
- Content research — "What are the most cited 2025 studies on remote work and productivity?"
- Security audits — "What CVEs affect Node.js 20 LTS as of April 2026?"
## Caching

Results are cached for 24 hours per unique `query` + `depth` + `format` combination. Re-running the same query within 24h returns the cached result instantly at no additional compute cost.
## Cost comparison
| Tool | Pricing | Web search | Caching | Model routing |
|---|---|---|---|---|
| XavvyNess Research Engine | Pay-per-result | ✅ Live (Tavily) | ✅ 24h | ✅ Automatic |
| Generic GPT wrapper | High flat rate | ❌ Training data only | ❌ | ❌ |
| Basic search scraper | Per-page | ✅ | ❌ | ❌ |
## Integration
### Via Apify JavaScript client

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });

const run = await client.actor('RBobzxRYFVgoX74uu').call({
  query: 'What is the state of open-source LLMs in 2026?',
  depth: 'standard',
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items[0].report);  // full markdown report
console.log(items[0].summary); // 2-3 sentence summary
console.log(items[0].sources); // array of { title, url, snippet }
```
### Via HTTP API

```bash
curl -X POST \
  "https://api.apify.com/v2/acts/RBobzxRYFVgoX74uu/runs?token=YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Best vector databases for production in 2026",
    "depth": "standard",
    "format": "markdown"
  }'
```
### Via Make.com / Zapier

Use the Apify module → **Run Actor** action. Actor ID: `RBobzxRYFVgoX74uu`. Pass your query in the input JSON, then map `{{report}}` and `{{summary}}` from the output to your next step.
## Limitations

- Web search requires the `TAVILY_API_KEY` environment variable. Without it, the actor falls back to AI knowledge only (clearly flagged in status messages).
- `deep` depth runs may take 30–90 seconds for complex topics with many sources.
- Source quality depends on what Tavily surfaces — very niche topics may return fewer authoritative results.
- Report is generated in English regardless of query language (multi-language support planned).
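The Tavily fallback behavior described above can be sketched as a simple environment check. This is an illustrative sketch of the documented behavior, not the actor's code; `searchMode` and its return shape are hypothetical:

```javascript
// Sketch of the documented fallback: without TAVILY_API_KEY, the run
// proceeds on model knowledge only and flags that in its status message.
function searchMode(env = process.env) {
  if (env.TAVILY_API_KEY) {
    return { webSearch: true, note: 'Live web search via Tavily' };
  }
  return { webSearch: false, note: 'No TAVILY_API_KEY: AI knowledge only' };
}
```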
## About XavvyNess
XavvyNess is an AI agent platform focused on practical, production-ready automation. This actor is part of a suite of research and development tools built for developers and operators who need real answers, not hallucinations.
Questions or feature requests → open an issue or contact us via the Apify Store.

