MCP Company Researcher — AI Agent Business Intel, JSON, No Key
Pricing: Pay per usage
Get company intel as JSON in 30 seconds — feed it a domain, get back tech stack + employees + funding + socials + recent news. Built for Claude/GPT lead-qualification pipelines. No API gymnastics, no rate limits. Custom pipelines: spinov001@gmail.com · blog.spinov.online · t.me/scraping_ai
Developer: Alex · Last modified: 12 hours ago
MCP Company Researcher — AI Lead Qualification in 30 Seconds
Feed it a domain. Get back a structured JSON profile: tech stack, employee count, funding round, social handles, recent news — ready to pipe into Claude, GPT, or any MCP-compatible agent.
Built for go-to-market teams who do 50-500 company lookups per day and need to stop wasting SDR hours on manual enrichment.
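As a sketch of what that enables in a pipeline — this is not part of the actor, just a hypothetical qualification rule over the profile fields the actor returns (`tech_stack`, `employees`, `funding`; the sample dict mirrors the example output shown on this page):

```python
# Hypothetical lead-qualification rule over the actor's output fields.
profile = {
    "domain": "stripe.com",
    "employees": "7000-10000",
    "tech_stack": ["Ruby", "Rails", "React", "AWS", "Redis"],
    "funding": {"last_round": "Series I", "amount_usd": 6_500_000_000},
}

def qualifies(profile: dict) -> bool:
    """Example ICP filter: funded companies running AWS with 50+ employees."""
    min_employees = int(profile["employees"].split("-")[0])  # "7000-10000" -> 7000
    has_aws = "AWS" in profile.get("tech_stack", [])
    funded = profile.get("funding", {}).get("amount_usd", 0) > 0
    return min_employees >= 50 and has_aws and funded

print(qualifies(profile))  # → True for the Stripe example
```

Swap in whatever fields define your ICP — the point is that the output is a flat, predictable dict, so filters stay one-liners.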
Why this exists
Every sales pipeline wastes 40-60% of SDR time on manual company research:
- Open LinkedIn → guess employee count
- Open Crunchbase → find last funding round
- Open Twitter/LinkedIn → copy social handles
- Open TechCrunch → scan last 3 news items
This actor does all four in one API call. Output is a clean JSON object structured for LLM consumption — no scraping logic, no rate-limit management, no parsing.
What you get
```json
{
  "domain": "stripe.com",
  "company_name": "Stripe",
  "employees": "7000-10000",
  "industry": "Payment infrastructure",
  "tech_stack": ["Ruby", "Rails", "React", "AWS", "Redis"],
  "funding": {
    "last_round": "Series I",
    "amount_usd": 6500000000,
    "date": "2023-03-15"
  },
  "socials": {
    "linkedin": "stripe",
    "twitter": "stripe",
    "github": "stripe"
  },
  "recent_news": [
    {
      "title": "Stripe expands to 50 new countries",
      "date": "2026-03-12",
      "source": "TechCrunch"
    }
  ]
}
```
How to use it (Python)
```python
import os

from apify_client import ApifyClient

client = ApifyClient(os.environ["APIFY_TOKEN"])
run = client.actor("knotless_cadence/mcp-company-researcher").call(
    run_input={"domain": "stripe.com"}
)
profile = list(client.dataset(run["defaultDatasetId"]).iterate_items())[0]
print(profile)
```
How to use it in Claude / MCP
Register the actor as an MCP tool in your agent config. Claude calls it directly whenever it needs company context — no glue code.
```json
{
  "tool_name": "company_research",
  "actor_id": "knotless_cadence/mcp-company-researcher",
  "description": "Enrich a domain with firmographic + technographic data"
}
```
Common questions
Q: How is this different from Clearbit / Apollo / ZoomInfo? A: Those are CRM-focused and priced at $500+/month. This is a per-lookup API at Apify's actor pricing. No contract, no seats.
Q: Does it scrape LinkedIn? A: No. Employee estimates come from public web signals (job postings, news mentions, company site size). No auth walls crossed.
Q: What's the accuracy on funding data? A: ~85% on Series A+. Seed-stage companies with no press coverage are harder.
Q: Does it work for SMBs and local businesses? A: Yes for any company with a live website and a LinkedIn/Crunchbase footprint. Pure local businesses (under 10 employees, no online presence) will return sparse profiles.
Q: How fresh is the news data? A: Last 30 days, sorted by recency.
Q: Can I batch 1000 domains? A: Yes — pass an array: {"domains": ["a.com", "b.com", ...]}. Throughput is bounded by Apify's concurrency rules.
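For large jobs, a simple chunking helper keeps each run's input manageable (a sketch — the `{"domains": [...]}` input shape comes from the answer above; the chunk size of 100 is an arbitrary choice, not a platform limit):

```python
def chunked(domains: list[str], size: int = 100) -> list[list[str]]:
    """Split a long domain list into run-sized batches."""
    return [domains[i:i + size] for i in range(0, len(domains), size)]

# Each chunk becomes one actor run input: {"domains": [...]}
batches = chunked([f"site{n}.com" for n in range(250)], size=100)
print([len(b) for b in batches])  # → [100, 100, 50]
```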
Pricing
| Usage | Cost |
|---|---|
| 1-100 lookups / month | Free Apify tier covers it |
| 100-1000 / month | ~$0.01 per lookup |
| 1000+ / month | Custom PPR pricing available |
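As a back-of-the-envelope check on the middle tier (a sketch using the ~$0.01/lookup figure from the table; actual billing follows Apify's metering, and the 1000+ tier is custom-priced):

```python
def estimated_monthly_cost(lookups: int, per_lookup_usd: float = 0.01) -> float:
    """Rough cost estimate for the pay-per-lookup tier."""
    if lookups <= 100:
        return 0.0  # free-tier range per the table above
    return round(lookups * per_lookup_usd, 2)

print(estimated_monthly_cost(500))  # → 5.0
```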
Other actors you might use
This actor pairs well with these in my store (77 more scrapers total):
- knotless_cadence/trustpilot-reviews-scraper — review harvesting (249+ runs)
- knotless_cadence/reddit-scraper-free — subreddit monitoring (72+ runs)
- knotless_cadence/domain-availability-checker — brand-name research
- Full portfolio: https://apify.com/knotless_cadence
Need something custom?
If you want a scraper tailored to your pipeline (specific data fields, specific source, specific output schema), I build custom Python scrapers.
Questions or custom work: spinov001@gmail.com | Community: t.me/scraping_ai