Keyword Suggest Multi

Pricing

Pay per usage


Fetches keyword suggestions from Google, Bing, DuckDuckGo, YouTube, Amazon, eBay, Yandex, Baidu, and Naver for a batch of seed keywords in one country.

Rating: 0.0 (0)

Developer: Seller Aim (Maintained by Community)

Actor stats

Bookmarked: 0
Total users: 2
Monthly active users: 1
Last modified: 3 days ago


9 search engines. 1 API call. Every keyword your audience actually types.

Query the autocomplete/suggest endpoints of Google, Bing, DuckDuckGo, YouTube, Amazon, eBay, Yandex, Baidu, and Naver for a batch of seed keywords, in any target country — and get back a clean dataset plus a ranked, deduplicated summary. Built for SEO researchers, content planners, e-commerce sellers, and marketers who are tired of running nine separate tools.


Why this Actor

  • 9 engines in parallel — stop juggling single-engine keyword tools. One input, one output, one set of billing credits.
  • AnswerThePublic-style long-tail expansion — optional A–Z / question / preposition / comparison modifiers turn one seed into ~100 related queries across all 9 engines, surfacing thousands of long-tail suggestions per seed.
  • Country-native by default — a single country input (US, DE, JP, CN, KR, ...) auto-routes each engine to its local endpoint, language, and marketplace across 25 pre-configured markets.
  • Analysis-ready output — row-per-suggestion dataset for drill-down, plus a cross-engine-ranked SUMMARY.json where suggestions bubble up by consensus across multiple engines.
  • Built to not break — round-robin budget truncation (no seed gets starved when hitting the request cap), per-source error tracking, JSONP-tolerant parsers for Baidu and eBay, session pool + retry via Crawlee.
  • Covered by 71 unit tests — including real-response fixtures for all 9 sources. This is not a YOLO ship.
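The round-robin budget truncation mentioned above can be sketched roughly like this (a simplified illustration under assumed data shapes, not the Actor's actual source; `truncateRoundRobin` and its arguments are invented names):

```javascript
// Hypothetical sketch of round-robin budget truncation: when the total
// number of generated requests exceeds the run's cap, drop requests from
// each seed's tail in turn, so no single seed is starved of its budget.
function truncateRoundRobin(requestsBySeed, maxRequests) {
  const queues = Object.entries(requestsBySeed).map(([seed, reqs]) => ({
    seed,
    reqs: [...reqs], // copy so the caller's arrays are untouched
  }));
  let remaining = queues.reduce((n, q) => n + q.reqs.length, 0);

  // Cycle over seeds, removing one trailing request per seed per pass,
  // until the budget fits. Each seed always keeps at least its bare query.
  let progressed = true;
  while (remaining > maxRequests && progressed) {
    progressed = false;
    for (const q of queues) {
      if (remaining <= maxRequests) break;
      if (q.reqs.length > 1) {
        q.reqs.pop();
        remaining -= 1;
        progressed = true;
      }
    }
  }
  return Object.fromEntries(queues.map((q) => [q.seed, q.reqs]));
}
```

Because the cut rotates across seeds instead of dropping whole seeds from the end of the list, every seed still contributes results even when the cap is tight.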

Quick start

Input:

{
  "seeds": ["iphone 15", "samsung galaxy s24"],
  "country": "US",
  "expansionSlices": ["alphabet", "questions"]
}

Dataset row (one per suggestion × source):

{
  "seed": "iphone 15",
  "source": "google",
  "suggestion": "iphone 15 pro max review",
  "expansionSlice": "alphabet",
  "query": "iphone 15 r",
  "rank": 3,
  "country": "US",
  "language": "en",
  "scrapedAt": "2026-04-17T06:31:19.211Z"
}

Key-Value Store SUMMARY (merged + ranked):

{
  "meta": {
    "country": "US",
    "seeds": ["iphone 15", "samsung galaxy s24"],
    "sourcesUsed": ["google", "bing", "duckduckgo", "youtube", "amazon", "ebay"],
    "requestsTotal": 882,
    "requestsSucceeded": 879,
    "suggestionsRaw": 7104,
    "suggestionsUniquePerSeed": 2118
  },
  "perSeed": {
    "iphone 15": {
      "total": 3540,
      "unique": 1055,
      "bySource": { "google": 492, "bing": 408, "amazon": 380, "...": "..." },
      "topSuggestions": [
        {
          "suggestion": "iphone 15 pro max",
          "sources": ["amazon", "bing", "ebay", "google", "yandex"],
          "occurrences": 5,
          "bestRank": 0
        }
      ]
    }
  }
}

The topSuggestions ranking sorts by cross-engine consensus first (how many engines surfaced it), then by best rank (best-performing position across engines) — so the top of the list is genuinely "what people search for," not noise from a single engine.


Use cases

  • SEO keyword research — discover every query around your topic plus the intent signals from 9 different engines.
  • E-commerce product-listing optimization — Amazon & eBay autocomplete reveals how real shoppers phrase their searches.
  • Content planning — enable questions expansion and you get the "how / what / why / when / where" queries begging for blog posts.
  • Competitive research — suggestions that appear across multiple engines are mass-market intent, not a one-engine quirk.
  • Ad-copy ideation — real completions become ad-variation seeds.
  • Market-entry research — run the same seeds across US, DE, JP, MX to compare how different audiences search.

Inputs

| Field | Type | Required | Default | Notes |
|---|---|---|---|---|
| seeds | string[] (1–50) | yes | — | Unique entries auto-enforced |
| country | ISO 3166-1 alpha-2 | yes | — | US, GB, DE, JP, CN, KR, RU, etc. |
| sources | string[] | no | all 9 | Subset of the 9 engines |
| expansionSlices | string[] | no | [] | See Expansion table below |
| maxRequestsPerRun | integer | no | 5000 | Hard cap; round-robin truncation by seed |
| maxConcurrency | integer | no | 10 | Parallel request limit |
| summaryTopN | integer | no | 200 | Per-seed cap on topSuggestions |
| proxyConfiguration | object | no | Apify default | Standard Apify proxy config |

Sources

google, bing, duckduckgo, youtube, amazon, ebay, yandex, baidu, naver.
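Two of these engines (Baidu and eBay) return JSONP-wrapped payloads rather than plain JSON, which is why the Actor ships JSONP-tolerant parsers. The unwrapping step can be sketched like this (an illustrative helper, not the Actor's actual code; some real suggest endpoints return JS object literals with unquoted keys, which need even more lenient handling than shown here):

```javascript
// Hypothetical JSONP-tolerant extractor: strip a callback wrapper such as
// `cb({...});` and parse whatever sits between the outermost parentheses.
// Plain-JSON bodies (no wrapper) pass straight through to JSON.parse.
function stripJsonp(body) {
  const open = body.indexOf("(");
  const close = body.lastIndexOf(")");
  if (open === -1 || close === -1 || close < open) {
    return JSON.parse(body); // no callback wrapper present
  }
  return JSON.parse(body.slice(open + 1, close));
}
```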

Expansion slices

Turn each seed into dozens of related queries, AnswerThePublic-style:

| Slice | Pattern | Example (seed = "car") | Count per seed |
|---|---|---|---|
| alphabet | {seed} {a-z, 0-9} | car a, car b, ..., car 9 | 36 |
| prefixAlphabet | {a-z, 0-9} {seed} | a car, b car, ..., 9 car | 36 |
| questions | {wh-word} {seed} | how car, why car, does car | 13 |
| prepositions | {seed} {for/with/near/...} | car for, car with, car near | 7 |
| comparisons | {seed} {vs/or/versus} | car vs, car or, car versus | 4 |

Enable any subset (e.g. ["alphabet", "questions"]). Full expansion ≈ 97 queries per seed × 9 engines ≈ 873 requests/seed.
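The slice expansion above can be sketched as a small generator (a hedged illustration: the modifier lists here are assumptions chosen to match the documented per-slice counts, not necessarily the Actor's real tables, and `expandSeed` is an invented name):

```javascript
// Hypothetical expansion-slice generator. Each enabled slice appends its
// modified queries; the bare seed query always runs regardless of slices.
const ALNUM = [..."abcdefghijklmnopqrstuvwxyz0123456789"]; // 36 characters
const QUESTIONS = ["how", "what", "why", "when", "where", "which",
  "who", "can", "does", "is", "are", "will", "should"];    // 13 words
const PREPOSITIONS = ["for", "with", "without", "near", "to", "vs", "like"]; // 7
const COMPARISONS = ["vs", "or", "versus", "and"];          // 4

function expandSeed(seed, slices) {
  const queries = [seed];
  if (slices.includes("alphabet"))
    queries.push(...ALNUM.map((c) => `${seed} ${c}`));
  if (slices.includes("prefixAlphabet"))
    queries.push(...ALNUM.map((c) => `${c} ${seed}`));
  if (slices.includes("questions"))
    queries.push(...QUESTIONS.map((w) => `${w} ${seed}`));
  if (slices.includes("prepositions"))
    queries.push(...PREPOSITIONS.map((w) => `${seed} ${w}`));
  if (slices.includes("comparisons"))
    queries.push(...COMPARISONS.map((w) => `${seed} ${w}`));
  return queries;
}
```

With every slice enabled this yields 1 + 36 + 36 + 13 + 7 + 4 = 97 queries per seed, matching the ~97-query figure above.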


Country coverage

25 pre-configured markets. Each entry carries the local language, Google gl/hl, Bing market, DuckDuckGo kl, Amazon TLD, eBay siteId, and Yandex region ID:

| Region | Markets |
|---|---|
| Americas | US CA MX BR |
| Europe | GB DE FR IT ES NL SE PL TR RU |
| Asia | JP KR CN TH VN ID IN |
| Middle East | SA AE |
| Oceania | AU |
| Africa | ZA |

Unknown countries fall back to generic English (with a warning in the log).

Engine availability rules:

  • Baidu / Naver / Yandex default off outside their native markets (CN / KR / RU). Forcing them via explicit sources input is allowed — a warning is logged, and results may be weak.
  • Amazon is skipped where no marketplace exists (e.g. RU, KR, TH, VN, ID).
  • eBay is skipped where no site ID exists.
  • Google / Bing / DuckDuckGo / YouTube work globally.

The SUMMARY.meta.sourcesSkipped object in every run records which engines were dropped and why.
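The gating rules above amount to a per-country filter over the default source set. A rough sketch (the market tables here are abbreviated and illustrative, not the Actor's full configuration; `defaultSources` is an invented name):

```javascript
// Hypothetical sketch of engine gating by country. The global four always
// run; native-market engines only default on in their home country; Amazon
// and eBay only run where a marketplace / site ID exists.
const GLOBAL = ["google", "bing", "duckduckgo", "youtube"];
const NATIVE_ONLY = { baidu: "CN", naver: "KR", yandex: "RU" };
const AMAZON_MARKETS = ["US", "CA", "MX", "BR", "GB", "DE", "FR", "IT",
  "ES", "NL", "SE", "PL", "TR", "JP", "CN", "IN", "SA", "AE", "AU"];
const EBAY_SITES = ["US", "CA", "GB", "DE", "FR", "IT", "ES", "NL", "PL", "AU"];

function defaultSources(country) {
  const sources = [...GLOBAL];
  for (const [engine, home] of Object.entries(NATIVE_ONLY)) {
    if (country === home) sources.push(engine);
  }
  if (AMAZON_MARKETS.includes(country)) sources.push("amazon");
  if (EBAY_SITES.includes(country)) sources.push("ebay");
  return sources;
}
```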


Output details

Dataset (row per suggestion)

All 9 fields are always populated (no missing values):

seed, source, suggestion, expansionSlice, query, rank, country, language, scrapedAt

rank is 0-indexed position within that specific engine response. expansionSlice = "seed" marks rows from the original seed (no expansion modifier applied).

SUMMARY.json (Key-Value Store)

{
  "meta": {
    "country", "language", "seeds", "sourcesRequested", "sourcesUsed",
    "sourcesSkipped", "sourcesWithErrors",
    "expansionSlices",
    "requestsTotal", "requestsSucceeded", "requestsFailed",
    "suggestionsRaw", "suggestionsUniquePerSeed",
    "startedAt", "finishedAt", "durationMs",
    "truncated", "truncatedFrom", "localeFallback",
    "crawleeStats"
  },
  "perSeed": {
    "<seed>": {
      "total": <int>, "unique": <int>,
      "bySource": { "<source>": <count>, ... },
      "topSuggestions": [
        { "suggestion", "sources", "occurrences", "bestRank" },
        ...
      ]
    }
  }
}

topSuggestions ordering: occurrences desc → bestRank asc → alphabetical (deterministic, so two identical runs produce identical summaries). Capped at summaryTopN per seed.
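That three-key ordering can be sketched as a single comparator (an illustrative snippet, not the Actor's actual source; `sortTopSuggestions` is an invented name):

```javascript
// Hypothetical sketch of the topSuggestions ordering: cross-engine
// consensus first, then best (lowest) rank, then an alphabetical
// tie-break so two identical runs produce identical summaries.
function sortTopSuggestions(entries, topN) {
  return [...entries]
    .sort((a, b) =>
      b.occurrences - a.occurrences ||         // more engines first
      a.bestRank - b.bestRank ||               // better (lower) rank next
      a.suggestion.localeCompare(b.suggestion) // deterministic tie-break
    )
    .slice(0, topN);                           // summaryTopN cap per seed
}
```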


Limitations (honest notes)

  • No search volumes, CPC, or competition scores. This Actor returns suggestion strings only. Combine with a metrics API if you need numbers.
  • One country per run. Need multi-country comparison? Schedule multiple runs.
  • Expansion modifier tables are English in v1. Works fine cross-language because engines auto-localize the generated queries, but not yet optimal for deep native research in non-English markets.
  • Autocomplete is rate-limit tolerant but not infinite. Aggressive expansion × many seeds × all 9 engines × a tight country can trip rate limiters. Tune maxConcurrency and maxRequestsPerRun for your scale.

Example inputs

Quick SEO scan on a single seed:

{ "seeds": ["standing desk"], "country": "US" }

Full long-tail expansion:

{
  "seeds": ["standing desk"],
  "country": "US",
  "expansionSlices": ["alphabet", "questions", "prepositions", "comparisons"]
}

Batch e-commerce research across marketplaces (one country at a time):

{
  "seeds": ["wireless earbuds", "mechanical keyboard", "standing desk"],
  "country": "DE",
  "sources": ["google", "amazon", "ebay"],
  "expansionSlices": ["alphabet"]
}

Chinese market (auto-enables Baidu):

{ "seeds": ["手机壳"], "country": "CN" }

Local development

npm install
npm test # 71 unit tests
npm run test:integration # one real Google request

See docs/superpowers/specs/2026-04-17-keyword-suggest-multi-design.md for the full design spec, and docs/superpowers/plans/2026-04-17-keyword-suggest-multi-plan.md for the 18-task TDD build plan this Actor was created with.