AI Deep Research Agent — Cited Reports & Market Intelligence

Under maintenance

Pricing

from $100.00 / 1,000 research reports

Turn any question into a cited research report in seconds. Deep web search + 7-model AI cascade (Gemini, Groq, Cerebras). Market size, key players, action items — structured JSON output.


Rating: 0.0 (0 reviews)

Developer: XavvyNess (Maintained by Community)

Actor stats: 0 bookmarked · 2 total users · 1 monthly active user · last modified 3 days ago

XavvyNess Research Engine

AI-powered research agent. Give it any question or topic — get back a structured, cited research report in seconds. Powered by live web search + smart model routing (Gemini 2.5 Flash / Llama 3.3 70B / Gemini 2.0 Flash).

Demo

🎬 Video demo coming soon.

What it does

Most AI research tools either hallucinate without web access, or dump raw search results with no synthesis. The XavvyNess Research Engine combines live search with real synthesis:

  1. Searches the live web via Tavily — real URLs, real content, not stale training data
  2. Synthesizes a structured report using the best model for the depth you choose
  3. Caches results for 24h — re-running the same query is instant and free
  4. Returns structured output — query, summary, full report, sources array, model used, timestamp

Input

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `query` | string | (required) | What to research. Be specific for better results. |
| `depth` | enum | `standard` | `quick` (3–5 paragraphs) · `standard` (full report) · `deep` (comprehensive, multi-angle) |
| `format` | enum | `markdown` | `markdown` · `bullet` · `json` |
| `includeSource` | boolean | `true` | Include `[n]` citations and source URLs in report |
| `maxResults` | integer | `10` | Number of web sources to pull (3–30) |
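As a rough illustration of how the defaults and bounds in the table above behave, here is a minimal sketch. The `normalizeInput` helper is hypothetical — the actor's real validation is defined by its Apify input schema, not by this function:

```javascript
// Hypothetical sketch of the input defaults and bounds from the table above.
// Not the actor's actual validation code.
function normalizeInput(input) {
  if (!input || typeof input.query !== 'string' || input.query.length === 0) {
    throw new Error('"query" is required');
  }
  const depth = input.depth ?? 'standard';
  if (!['quick', 'standard', 'deep'].includes(depth)) {
    throw new Error(`invalid depth: ${depth}`);
  }
  // maxResults defaults to 10 and is clamped to the documented 3–30 range.
  const maxResults = Math.min(30, Math.max(3, input.maxResults ?? 10));
  return {
    query: input.query,
    depth,
    format: input.format ?? 'markdown',
    includeSource: input.includeSource ?? true,
    maxResults,
  };
}
```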

Example input

```json
{
  "query": "What are the security risks of using LLMs in production APIs?",
  "depth": "deep",
  "format": "markdown",
  "includeSource": true,
  "maxResults": 15
}
```

Example output

Real output from a live run (truncated for display):

```json
{
  "query": "What are the best open source AI coding assistants in 2025?",
  "depth": "standard",
  "format": "markdown",
  "summary": "AI coding assistants have become essential developer tools in 2025. Windsurf, GitHub Copilot, and Cursor lead adoption. Open-source alternatives like Aider and Continue offer strong customization without vendor lock-in. Tabnine and JetBrains AI excel at refactoring and language-specific completions.",
  "report": "## Overview\n\nThe use of Artificial Intelligence (AI) in coding has become increasingly popular in recent years...\n\n## Key Findings\n\n1. **Windsurf**, **GitHub Copilot**, and **Cursor** lead in advanced code generation and IDE integration [1, 2, 3].\n2. **GitHub Copilot** integrates natively with VS Code and Neovim [3, 4].\n3. **Aider** and **Continue** are top open-source alternatives with full local model support [5, 6].\n4. **Tabnine** and **JetBrains AI Assistant** excel at refactoring and language-specific completions [7].\n5. **Gemini** is purpose-built for Android app development [1].\n\n## Sources\n[1] https://... [2] https://...",
  "sources": [
    { "title": "Windsurf", "url": "https://windsurf.com", "snippet": "AI-first code editor..." },
    { "title": "GitHub Copilot", "url": "https://github.com/features/copilot", "snippet": "AI pair programmer..." },
    { "title": "Cursor", "url": "https://cursor.com", "snippet": "The AI code editor..." },
    { "title": "Aider", "url": "https://github.com/paul-gauthier/aider", "snippet": "AI pair programming in terminal..." }
  ],
  "model": "groq/llama-3.3-70b-versatile",
  "agent": "XavvyNess Research Engine",
  "runAt": "2026-04-08T22:21:45.123Z"
}
```

The full `report` field contains the complete structured markdown report (typically 600–2000 words at `standard` depth).


Model routing (automatic)

| Depth | Model | Why |
| --- | --- | --- |
| `quick` | Llama 3.3 70B via Groq | Sub-second inference, free tier, great for fast summaries |
| `standard` | Gemini 2.0 Flash | Best balance of speed and quality |
| `deep` | Gemini 2.5 Flash | Top reasoning for comprehensive, multi-angle analysis |

No configuration needed — routing is automatic based on your depth input.
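Conceptually, the routing above is a simple lookup from depth to model. The sketch below is illustrative only — the `quick` model ID matches the `model` field in the example output, while the two Gemini IDs are assumed spellings, not confirmed identifiers from the actor:

```javascript
// Illustrative depth → model lookup mirroring the routing table above.
// The Gemini model IDs here are assumed, not taken from the actor's source.
const MODEL_BY_DEPTH = {
  quick: 'groq/llama-3.3-70b-versatile',
  standard: 'gemini-2.0-flash',
  deep: 'gemini-2.5-flash',
};

function routeModel(depth = 'standard') {
  const model = MODEL_BY_DEPTH[depth];
  if (!model) throw new Error(`unknown depth: ${depth}`);
  return model;
}
```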


Use cases

  • Competitive research — "Who are the top 5 competitors to Linear and how do they price?"
  • Technical deep-dives — "What are the differences between RAG and fine-tuning for production LLMs?"
  • Market analysis — "What AI agent startups received Series A funding in Q1 2026?"
  • Due diligence — "What are the known scaling issues with Supabase at 1M+ rows?"
  • Content research — "What are the most cited 2025 studies on remote work and productivity?"
  • Security audits — "What CVEs affect Node.js 20 LTS as of April 2026?"

Caching

Results are cached for 24 hours per unique query + depth + format combination. Re-running the same query within 24h returns the cached result instantly at no additional compute cost.


Cost comparison

| Tool | Pricing | Web search | Caching | Model routing |
| --- | --- | --- | --- | --- |
| XavvyNess Research Engine | Pay-per-result | ✅ Live (Tavily) | ✅ 24h | ✅ Automatic |
| Generic GPT wrapper | High flat rate | ❌ Training data only | ❌ | ❌ |
| Basic search scraper | Per-page | ✅ | ❌ | ❌ |

Integration

Via Apify JavaScript client

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });

const run = await client.actor('RBobzxRYFVgoX74uu').call({
  query: 'What is the state of open-source LLMs in 2026?',
  depth: 'standard',
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items[0].report);  // full markdown report
console.log(items[0].summary); // 2-3 sentence summary
console.log(items[0].sources); // array of { title, url, snippet }
```
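Once you have an item, the `sources` array can be rendered as a numbered bibliography that matches the report's inline `[n]` citations. The `formatBibliography` helper below is illustrative, not part of the actor's output:

```javascript
// Illustrative helper (not part of the actor): turn the sources array
// into numbered "[n] title - url" lines matching the report's [n] citations.
function formatBibliography(sources) {
  return sources
    .map((s, i) => `[${i + 1}] ${s.title} - ${s.url}`)
    .join('\n');
}
```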

Via HTTP API

```bash
curl -X POST \
  "https://api.apify.com/v2/acts/RBobzxRYFVgoX74uu/runs?token=YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Best vector databases for production in 2026",
    "depth": "standard",
    "format": "markdown"
  }'
```

Via Make.com / Zapier

Use the Apify module → Run Actor action. Actor ID: `RBobzxRYFVgoX74uu`. Pass your query in the input JSON, then map `{{report}}` and `{{summary}}` from the output to your next step.


Limitations

• Web search requires the `TAVILY_API_KEY` environment variable. Without it, the actor falls back to AI knowledge only (clearly flagged in status messages).
  • deep depth runs may take 30–90 seconds for complex topics with many sources.
  • Source quality depends on what Tavily surfaces — very niche topics may return fewer authoritative results.
  • Report is generated in English regardless of query language (multi-language support planned).

About XavvyNess

XavvyNess is an AI agent platform focused on practical, production-ready automation. This actor is part of a suite of research and development tools built for developers and operators who need real answers, not hallucinations.

Questions or feature requests → open an issue or contact us via the Apify Store.