Keyword Suggest Multi
Pricing: Pay per usage
Fetches keyword suggestions from Google, Bing, DuckDuckGo, YouTube, Amazon, eBay, Yandex, Baidu, and Naver for a batch of seed keywords in one country.
Developer: Seller Aim
Last modified: 3 days ago
9 search engines. 1 API call. Every keyword your audience actually types.
Query the autocomplete/suggest endpoints of Google, Bing, DuckDuckGo, YouTube, Amazon, eBay, Yandex, Baidu, and Naver for a batch of seed keywords, in any target country — and get back a clean dataset plus a ranked, deduplicated summary. Built for SEO researchers, content planners, e-commerce sellers, and marketers who are tired of running nine separate tools.
Why this Actor
- 9 engines in parallel — stop juggling single-engine keyword tools. One input, one output, one set of billing credits.
- AnswerThePublic-style long-tail expansion — optional A–Z / question / preposition / comparison modifiers turn one seed into ~100 related queries across all 9 engines, surfacing thousands of long-tail suggestions per seed.
- Country-native by default — a single `country` input (US, DE, JP, CN, KR, ...) auto-routes each engine to its local endpoint, language, and marketplace across 25 pre-configured markets.
- Analysis-ready output — row-per-suggestion dataset for drill-down, plus a cross-engine-ranked `SUMMARY.json` where suggestions bubble up by consensus across multiple engines.
- Built to not break — round-robin budget truncation (no seed gets starved when hitting the request cap), per-source error tracking, JSONP-tolerant parsers for Baidu and eBay, session pool + retry via Crawlee.
- Covered by 71 unit tests — including real-response fixtures for all 9 sources. Not a YOLO script.
Quick start
Input:
```json
{
  "seeds": ["iphone 15", "samsung galaxy s24"],
  "country": "US",
  "expansionSlices": ["alphabet", "questions"]
}
```
Dataset row (one per suggestion × source):
```json
{
  "seed": "iphone 15",
  "source": "google",
  "suggestion": "iphone 15 pro max review",
  "expansionSlice": "alphabet",
  "query": "iphone 15 r",
  "rank": 3,
  "country": "US",
  "language": "en",
  "scrapedAt": "2026-04-17T06:31:19.211Z"
}
```
Key-Value Store SUMMARY (merged + ranked):
```json
{
  "meta": {
    "country": "US",
    "seeds": ["iphone 15", "samsung galaxy s24"],
    "sourcesUsed": ["google", "bing", "duckduckgo", "youtube", "amazon", "ebay"],
    "requestsTotal": 882,
    "requestsSucceeded": 879,
    "suggestionsRaw": 7104,
    "suggestionsUniquePerSeed": 2118
  },
  "perSeed": {
    "iphone 15": {
      "total": 3540,
      "unique": 1055,
      "bySource": { "google": 492, "bing": 408, "amazon": 380, "...": "..." },
      "topSuggestions": [
        {
          "suggestion": "iphone 15 pro max",
          "sources": ["amazon", "bing", "ebay", "google", "yandex"],
          "occurrences": 5,
          "bestRank": 0
        }
      ]
    }
  }
}
```
The `topSuggestions` ranking sorts by cross-engine consensus first (how many engines surfaced it), then by best rank (best-performing position across engines) — so the top of the list is genuinely "what people search for," not noise from a single engine.
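The consensus-merge step can be pictured with a short sketch. This is illustrative only (the function name and shapes are assumptions, not the Actor's actual code); it folds per-engine dataset rows, shaped like the example above, into entries carrying `sources`, `occurrences`, and `bestRank`:

```javascript
// Illustrative sketch: aggregate per-engine suggestion rows into
// consensus entries shaped like SUMMARY.json's topSuggestions.
function aggregateSuggestions(rows) {
  const bySuggestion = new Map();
  for (const { suggestion, source, rank } of rows) {
    const entry = bySuggestion.get(suggestion) ?? {
      suggestion,
      sources: new Set(),
      bestRank: Infinity,
    };
    entry.sources.add(source); // a Set, so one engine counts once
    entry.bestRank = Math.min(entry.bestRank, rank);
    bySuggestion.set(suggestion, entry);
  }
  return [...bySuggestion.values()].map((e) => ({
    suggestion: e.suggestion,
    sources: [...e.sources].sort(),
    occurrences: e.sources.size,
    bestRank: e.bestRank,
  }));
}
```

A suggestion seen by five engines at rank 0 outranks one seen by a single engine, which is the whole point of querying nine sources at once.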
Use cases
- SEO keyword research — discover every query around your topic plus the intent signals from 9 different engines.
- E-commerce product-listing optimization — Amazon & eBay autocomplete reveals how real shoppers phrase their searches.
- Content planning — enable `questions` expansion and you get the "how / what / why / when / where" queries begging for blog posts.
- Competitive research — suggestions that appear across multiple engines are mass-market intent, not a one-engine quirk.
- Ad-copy ideation — real completions become ad-variation seeds.
- Market-entry research — run the same seeds across `US`, `DE`, `JP`, `MX` to compare how different audiences search.
Inputs
| Field | Type | Required | Default | Notes |
|---|---|---|---|---|
| `seeds` | string[] (1–50) | yes | — | Unique entries auto-enforced |
| `country` | ISO 3166-1 alpha-2 | yes | — | US, GB, DE, JP, CN, KR, RU, etc. |
| `sources` | string[] | no | all 9 | Subset of the 9 engines |
| `expansionSlices` | string[] | no | `[]` | See Expansion table below |
| `maxRequestsPerRun` | integer | no | 5000 | Hard cap; round-robin truncation by seed |
| `maxConcurrency` | integer | no | 10 | Parallel request limit |
| `summaryTopN` | integer | no | 200 | Per-seed cap on `topSuggestions` |
| `proxyConfiguration` | object | no | Apify default | Standard Apify proxy config |
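The "round-robin truncation by seed" behind `maxRequestsPerRun` can be sketched in a few lines. This is an assumed illustration of the technique (not the Actor's internal code): queries are drawn one at a time from each seed's queue in turn, so when the cap is hit, every seed has contributed a roughly equal share instead of later seeds being dropped entirely:

```javascript
// Illustrative round-robin budget truncation: interleave each seed's
// query list and cut at the cap, so no seed gets starved.
function truncateRoundRobin(queriesBySeed, maxRequests) {
  const queues = Object.entries(queriesBySeed).map(([seed, qs]) => ({
    seed,
    queue: [...qs],
  }));
  const kept = [];
  while (kept.length < maxRequests && queues.some((q) => q.queue.length)) {
    for (const q of queues) {
      if (kept.length >= maxRequests) break;
      if (q.queue.length) kept.push({ seed: q.seed, query: q.queue.shift() });
    }
  }
  return kept;
}
```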
Sources
`google`, `bing`, `duckduckgo`, `youtube`, `amazon`, `ebay`, `yandex`, `baidu`, `naver`.
Expansion slices
Turn each seed into dozens of related queries, AnswerThePublic-style:
| Slice | Pattern | Example (seed = "car") | Count per seed |
|---|---|---|---|
| `alphabet` | `{seed} {a-z, 0-9}` | car a, car b, ..., car 9 | 36 |
| `prefixAlphabet` | `{a-z, 0-9} {seed}` | a car, b car, ..., 9 car | 36 |
| `questions` | `{wh-word} {seed}` | how car, why car, does car | 13 |
| `prepositions` | `{seed} {for/with/near/...}` | car for, car with, car near | 7 |
| `comparisons` | `{seed} {vs/or/versus}` | car vs, car or, car versus | 4 |
Enable any subset (e.g. ["alphabet", "questions"]). Full expansion ≈ 97 queries per seed × 9 engines ≈ 873 requests/seed.
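The arithmetic works out because the bare seed is always queried alongside the modified variants: 1 + 36 + 36 + 13 + 7 + 4 = 97. A minimal generator sketch follows; the modifier word lists here are assumptions chosen to match the documented counts, not the Actor's exact internal lists:

```javascript
// Illustrative expansion-slice generator. Word lists are ASSUMED
// (sized to match the documented counts); the real lists may differ.
const ALNUM = [..."abcdefghijklmnopqrstuvwxyz0123456789"]; // 36
const QUESTIONS = ["how", "what", "why", "when", "where", "which", "who",
  "can", "does", "is", "will", "are", "should"]; // 13
const PREPOSITIONS = ["for", "with", "near", "without", "to", "like", "at"]; // 7
const COMPARISONS = ["vs", "vs.", "or", "versus"]; // 4

function expandSeed(seed, slices) {
  const queries = [seed]; // the bare seed is always queried
  if (slices.includes("alphabet")) queries.push(...ALNUM.map((c) => `${seed} ${c}`));
  if (slices.includes("prefixAlphabet")) queries.push(...ALNUM.map((c) => `${c} ${seed}`));
  if (slices.includes("questions")) queries.push(...QUESTIONS.map((w) => `${w} ${seed}`));
  if (slices.includes("prepositions")) queries.push(...PREPOSITIONS.map((w) => `${seed} ${w}`));
  if (slices.includes("comparisons")) queries.push(...COMPARISONS.map((w) => `${seed} ${w}`));
  return queries;
}
```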
Country coverage
25 pre-configured markets. Each entry carries the local language, Google gl/hl, Bing market, DuckDuckGo kl, Amazon TLD, eBay siteId, and Yandex region ID:
| Region | Markets |
|---|---|
| Americas | US CA MX BR |
| Europe | GB DE FR IT ES NL SE PL TR RU |
| Asia | JP KR CN TH VN ID IN |
| Middle East | SA AE |
| Oceania | AU |
| Africa | ZA |
Unknown countries fall back to generic English (with a warning in the log).
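The fallback behavior can be sketched as a simple table lookup. Everything here is illustrative (two example entries, assumed field names); the real locale table carries the per-engine fields listed above:

```javascript
// Illustrative country resolution with generic-English fallback.
// Entries and field names are assumed, not the Actor's actual table.
const LOCALES = {
  US: { language: "en", googleGl: "us" },
  DE: { language: "de", googleGl: "de" },
};
const FALLBACK = { language: "en", googleGl: "us" };

function resolveLocale(country) {
  const locale = LOCALES[country.toUpperCase()];
  if (!locale) {
    console.warn(`No locale config for "${country}"; falling back to generic English`);
    return { ...FALLBACK, localeFallback: true };
  }
  return { ...locale, localeFallback: false };
}
```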
Engine availability rules:
- Baidu / Naver / Yandex default off outside their native markets (CN / KR / RU). Forcing them via an explicit `sources` input is allowed — a warning is logged, and results may be weak.
- Amazon is skipped where no marketplace exists (e.g. RU, KR, TH, VN, ID).
- eBay is skipped where no site ID exists.
- Google / Bing / DuckDuckGo / YouTube work globally.
The `SUMMARY.meta.sourcesSkipped` object in every run records which engines were dropped and why.
Output details
Dataset (row per suggestion)
All 9 fields are always populated (no missing values):
`seed`, `source`, `suggestion`, `expansionSlice`, `query`, `rank`, `country`, `language`, `scrapedAt`

`rank` is the 0-indexed position within that specific engine's response. `expansionSlice = "seed"` marks rows from the original seed (no expansion modifier applied).
SUMMARY.json (Key-Value Store)
```json
{
  "meta": {
    "country", "language", "seeds",
    "sourcesRequested", "sourcesUsed", "sourcesSkipped", "sourcesWithErrors",
    "expansionSlices",
    "requestsTotal", "requestsSucceeded", "requestsFailed",
    "suggestionsRaw", "suggestionsUniquePerSeed",
    "startedAt", "finishedAt", "durationMs",
    "truncated", "truncatedFrom", "localeFallback",
    "crawleeStats"
  },
  "perSeed": {
    "<seed>": {
      "total": <int>,
      "unique": <int>,
      "bySource": { "<source>": <count>, ... },
      "topSuggestions": [
        { "suggestion", "sources", "occurrences", "bestRank" },
        ...
      ]
    }
  }
}
```
`topSuggestions` ordering: occurrences desc → bestRank asc → alphabetical (deterministic, so two identical runs produce identical summaries). Capped at `summaryTopN` per seed.
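That three-level ordering corresponds to a comparator like the following sketch (an illustration of the documented rule, not the Actor's source). The alphabetical tiebreak is what makes the sort fully deterministic even when two suggestions share both occurrences and best rank:

```javascript
// Illustrative comparator for the documented topSuggestions ordering:
// occurrences desc, then bestRank asc, then alphabetical tiebreak.
function compareTopSuggestions(a, b) {
  if (b.occurrences !== a.occurrences) return b.occurrences - a.occurrences;
  if (a.bestRank !== b.bestRank) return a.bestRank - b.bestRank;
  if (a.suggestion < b.suggestion) return -1;
  if (a.suggestion > b.suggestion) return 1;
  return 0;
}
```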
Limitations (honest notes)
- No search volumes, CPC, or competition scores. This Actor returns suggestion strings only. Combine with a metrics API if you need numbers.
- One country per run. Need multi-country comparison? Schedule multiple runs.
- Expansion modifier tables are English in v1. Works fine cross-language because engines auto-localize the generated queries, but not yet optimal for deep native research in non-English markets.
- Autocomplete is rate-limit tolerant but not infinite. Aggressive expansion × many seeds × all 9 engines × a tight country can trip rate limiters. Tune `maxConcurrency` and `maxRequestsPerRun` for your scale.
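A back-of-envelope estimate helps when tuning those two knobs: total requests ≈ seeds × (1 + sum of enabled slice sizes) × engines. The helper below is an assumed convenience, not part of the Actor:

```javascript
// Assumed helper: estimate request count before a run, using the
// documented per-slice query counts.
const SLICE_SIZES = {
  alphabet: 36,
  prefixAlphabet: 36,
  questions: 13,
  prepositions: 7,
  comparisons: 4,
};

function estimateRequests(seedCount, slices, sourceCount) {
  // +1 for the bare seed, which is always queried
  const perSeed = 1 + slices.reduce((n, s) => n + (SLICE_SIZES[s] ?? 0), 0);
  return seedCount * perSeed * sourceCount;
}
```

If the estimate exceeds `maxRequestsPerRun`, the round-robin truncation described above kicks in; lowering `maxConcurrency` instead spreads the same load over more time.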
Example inputs
Quick SEO scan on a single seed:
```json
{ "seeds": ["standing desk"], "country": "US" }
```
Full long-tail expansion:
```json
{
  "seeds": ["standing desk"],
  "country": "US",
  "expansionSlices": ["alphabet", "questions", "prepositions", "comparisons"]
}
```
Batch e-commerce research across marketplaces (one country at a time):
```json
{
  "seeds": ["wireless earbuds", "mechanical keyboard", "standing desk"],
  "country": "DE",
  "sources": ["google", "amazon", "ebay"],
  "expansionSlices": ["alphabet"]
}
```
Chinese market (auto-enables Baidu):
```json
{ "seeds": ["手机壳"], "country": "CN" }
```
Local development
```bash
npm install
npm test                 # 71 unit tests
npm run test:integration # one real Google request
```
See `docs/superpowers/specs/2026-04-17-keyword-suggest-multi-design.md` for the full design spec, and `docs/superpowers/plans/2026-04-17-keyword-suggest-multi-plan.md` for the 18-task TDD build plan this Actor was created with.