N8N Template Scraper
Scrape every public n8n workflow and extract metadata, categories, node usage, and full import-ready JSON files. Outputs cleaned descriptions, timestamps, slugs, and simplified node lists. Perfect for automation development, workflow libraries, analytics, and AI-driven analysis.
Pricing
Pay per event
Rating: 5.0 (2)
Developer: Gavin Campbell
Actor stats: 5 bookmarked · 29 total users · 13 monthly active users · last modified 5 days ago
n8n Template Scraper – Workflow JSON, Nodes & Metadata
Fast, lightweight scraper for the public n8n.io workflow template library.
This Actor talks directly to the official api.n8n.io templates API to fetch:
- Workflow metadata (name, description, categories, views, author, timestamps)
- A normalised summary of node types used in the workflow
- A clean array of nodes with simplified parameters
- The complete raw workflow JSON
- An importable `.json` file for each template (saved to the Key-Value Store)
Use it to build your own n8n template library, analyse node usage across templates, or feed workflows into your own AI/automation tools.
🚀 Key Features
- **Scrape all templates or a subset**
  - Toggle "Scrape All Workflows" to crawl the entire n8n template library.
  - Or pass a list of specific workflow IDs to fetch only what you need.
- **Full workflow JSON export**
  - Each template's importable JSON is saved as a separate file in the run's Key-Value Store.
  - Filenames use a stable slug: `{{workflow_slug}}.json` (e.g. `build-your-first-ai-agent.json`).
- **Rich, normalised dataset**
  - For every workflow the dataset includes: IDs, slugs, and URLs; description and categories; author info; views and timestamps; a node summary (counts by type/family); a clean list of nodes (with a human-friendly `pretty_type`); and the full raw workflow JSON as a string field.
- **Node type intelligence**
  - Automatically normalises node types, e.g. `n8n-nodes-base.httpRequestTool` → HTTP Request Tool, `@n8n/n8n-nodes-langchain.lmChatGoogleGemini` → LM Chat – Google Gemini.
  - Classifies nodes into families: `core`, `langchain`, `community`, `llmTool`, `llmModel`, `ui`.
- **Efficient & robust**
  - Uses `BasicCrawler` + `got` (no browsers), so it's fast and compute-efficient.
  - The request queue is pre-filled from template IDs, then templates are fetched in parallel, respecting your `maxConcurrency`.
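The node-type normalisation described above can be sketched roughly as follows. This is a simplified illustration, not the Actor's actual code: the function names `prettifyNodeType` and `classifyFamily` and the exact rules are assumptions based on the examples in this README.

```javascript
// Illustrative sketch of node-type normalisation — not the Actor's real mapping.
// Assumes raw types look like "<package>.<camelCaseName>" in workflow JSON.

function prettifyNodeType(rawType) {
  const name = rawType.split('.').pop();       // e.g. "httpRequestTool"
  return name
    .replace(/([a-z0-9])([A-Z])/g, '$1 $2')    // split camelCase into words
    .replace(/\b\w/g, (c) => c.toUpperCase())  // title-case each word
    .replace(/\bHttp\b/g, 'HTTP');             // fix a common acronym
}

function classifyFamily(rawType) {
  if (rawType.startsWith('@n8n/n8n-nodes-langchain.')) return 'langchain';
  if (rawType.startsWith('n8n-nodes-base.')) {
    if (rawType.endsWith('Tool')) return 'llmTool';
    if (rawType === 'n8n-nodes-base.stickyNote') return 'ui';
    return 'core';
  }
  return 'community';
}

console.log(prettifyNodeType('n8n-nodes-base.httpRequestTool')); // "HTTP Request Tool"
console.log(classifyFamily('n8n-nodes-base.stickyNote'));        // "ui"
```

The real Actor likely uses a curated lookup table for special cases (e.g. "LM Chat – Google Gemini"), which a pure camelCase split cannot reproduce.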
📂 Where is my data?
This Actor writes data to:
1. Dataset (structured table)
Location: Run → Dataset
Each item in the dataset corresponds to one n8n template and includes (fields abbreviated for clarity):
- `workflow_id` – numeric ID (e.g. `6270`)
- `workflow_name` – template title
- `workflow_slug` – URL-friendly slug
- `description` – markdown-stripped description
- `categories` – JSON string of category names (e.g. `["Personal Productivity","AI Chatbot"]`)
- `complexity_level` – `beginner | simple | intermediate | advanced | complex`
- `created_at`, `updated_at` – ISO timestamps
- `total_views`, `recent_views`
Author object
- `author.name`
- `author.username`
- `author.verified` (boolean)
- `author.social_links` – JSON string of links
URLs & file key
- `template_url` – public n8n page, e.g. `https://n8n.io/workflows/6270-build-your-first-ai-agent`
- `api_url` – internal template API endpoint
- `file_key` – filename of the JSON in the Key-Value Store (e.g. `build-your-first-ai-agent.json`)
Node summary (for analytics)
- `node_summary.total_nodes`
- `node_summary.core_nodes`
- `node_summary.langchain_nodes`
- `node_summary.community_nodes`
- `node_summary.llm_model_nodes`
- `node_summary.llm_tool_nodes`
- `node_summary.unique_node_types` – JSON string of raw type IDs
- `node_summary.pretty_node_types` – JSON string of human-friendly node names (e.g. `["Sticky Note","RSS Feed Read Tool","HTTP Request Tool","LangChain Agent","LangChain Chat Trigger","Memory Buffer Window","LM Chat – Google Gemini"]`)
Nodes array (per workflow)
- `nodes` – JSON string of an array like:

```json
[
  {
    "id": "3808de8d-ef18-47f5-9621-b08ba961ae01",
    "name": "Introduction Note",
    "type": "n8n-nodes-base.stickyNote",
    "pretty_type": "Sticky Note",
    "family": "ui",
    "position": [-752, -256],
    "parameters": { "content": "## Try It Out! ..." }
  }
]
```
- `workflow_json_raw` – the full raw workflow JSON as returned by the n8n API (nodes, connections, settings, meta, etc.), serialised as a string
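Because several dataset fields (`categories`, `nodes`, the `node_summary.*` lists, `workflow_json_raw`) are stored as JSON strings, consumers need to `JSON.parse` them before use. A minimal sketch, using a hand-made stand-in for one dataset row:

```javascript
// `item` is a hypothetical stand-in for one dataset row; real rows have
// more fields, but the JSON-string encoding shown here matches the schema above.
const item = {
  workflow_name: 'Build Your First AI Agent',
  categories: '["Personal Productivity","AI Chatbot"]',
  nodes: '[{"name":"Introduction Note","type":"n8n-nodes-base.stickyNote","family":"ui"}]',
};

// Parse the JSON-string fields into real arrays/objects.
const categories = JSON.parse(item.categories);
const nodes = JSON.parse(item.nodes);

// Quick analytics example: count nodes per family.
const familyCounts = nodes.reduce((acc, n) => {
  acc[n.family] = (acc[n.family] || 0) + 1;
  return acc;
}, {});

console.log(categories[0]); // "Personal Productivity"
console.log(familyCounts);  // { ui: 1 }
```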
You can download the dataset as CSV, JSON, or Excel from the Dataset tab.
2. Key-Value Store (importable JSON files)
For each template, an importable workflow JSON file is stored in the run’s Key-Value Store.
- Key: `file_key` from the dataset (e.g. `build-your-first-ai-agent.json`)
- Value: an object like:

```json
{
  "name": "Build Your First AI Agent",
  "nodes": [...],
  "connections": {...},
  "settings": {},
  "versionId": ""
}
```
You can download these files and import them directly into your own n8n instance.
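Records in a Key-Value Store can be fetched over Apify's REST API at `https://api.apify.com/v2/key-value-stores/{storeId}/records/{key}`. A small helper for building that download URL (the store ID `aBcDeFg123` below is a made-up placeholder — use your run's default Key-Value Store ID from the Storage tab):

```javascript
// Build the download URL for a stored workflow JSON file.
// `storeId` is the run's default Key-Value Store ID;
// `fileKey` is the dataset's `file_key` field.
function recordUrl(storeId, fileKey) {
  return `https://api.apify.com/v2/key-value-stores/${storeId}/records/${encodeURIComponent(fileKey)}`;
}

console.log(recordUrl('aBcDeFg123', 'build-your-first-ai-agent.json'));
// https://api.apify.com/v2/key-value-stores/aBcDeFg123/records/build-your-first-ai-agent.json
```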
🔧 Input Parameters
These appear in the Input tab as toggles/fields.
| Field | Type | Description |
|---|---|---|
| `scrapeAllWorkflows` | Boolean | If `true`, the Actor attempts to discover and scrape all available workflows from the template API. |
| `maxItems` | Number | Approximate maximum number of workflows to scrape when not scraping all workflows. Default: `100`. |
| `idList` | Array | Optional list of specific workflow IDs to fetch (e.g. `[6270, 3521, 1200]`). Only these IDs are scraped. |
| `maxConcurrency` | Number | Maximum parallel HTTP requests for fetching individual templates. Default: `5`. |
Tip
- Use ID-list mode when you know exactly which templates you want.
- Use auto-discovery mode (with `scrapeAllWorkflows` or `maxItems`) to crawl the template index.
▶️ Example Inputs
- Scrape the first 100 workflows (auto-discovery)

```json
{ "maxItems": 100 }
```

- Scrape all public templates

```json
{ "scrapeAllWorkflows": true }
```

- Scrape only specific workflow IDs

```json
{
  "idList": [6270, 3521, 1200],
  "maxConcurrency": 10
}
```
🤖 API & Automation
You can trigger this Actor programmatically using Apify’s REST API and plug it into:
- n8n (meta!)
- Make.com / Zapier
- Custom back-end scripts or cron jobs
Typical use cases:
- Run weekly to collect new AI/LLM-related templates.
- Mirror n8n’s template library into your own internal catalogue.
- Feed workflow structures into AI agents or documentation generators.
- Analyse which nodes and tools are most popular over time.
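For the node-popularity use case, the `node_summary.pretty_node_types` field makes aggregation straightforward. A sketch over hand-made stand-ins for dataset rows:

```javascript
// Tally how often each node type appears across scraped templates.
// `items` is a hypothetical stand-in for dataset rows; real rows carry
// the same `node_summary.pretty_node_types` JSON-string field.
const items = [
  { node_summary: { pretty_node_types: '["Sticky Note","HTTP Request Tool"]' } },
  { node_summary: { pretty_node_types: '["HTTP Request Tool","LangChain Agent"]' } },
];

const counts = {};
for (const item of items) {
  for (const type of JSON.parse(item.node_summary.pretty_node_types)) {
    counts[type] = (counts[type] || 0) + 1;
  }
}

// Sort by frequency, most popular first.
const popular = Object.entries(counts).sort((a, b) => b[1] - a[1]);
console.log(popular[0]); // [ 'HTTP Request Tool', 2 ]
```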
Check the API tab on the Actor’s Apify page for copy-paste examples in:
- Node.js
- Python
- Curl
- PHP
- Browser `fetch`
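As a rough Node.js sketch, a run can be started by POSTing the input to Apify's run endpoint. The actor ID `username~n8n-template-scraper` and the token handling below are placeholders — copy the exact, ready-made snippet from the Actor's API tab instead:

```javascript
// Hypothetical actor ID — replace with the real one from the Actor's page.
const ACTOR_ID = 'username~n8n-template-scraper';
const APIFY_TOKEN = process.env.APIFY_TOKEN || '<YOUR_TOKEN>';

// Apify REST endpoint for starting an Actor run.
const url = `https://api.apify.com/v2/acts/${ACTOR_ID}/runs?token=${APIFY_TOKEN}`;
const input = { idList: [6270], maxConcurrency: 5 };

// Uncomment to actually start a run (requires a valid token):
// const res = await fetch(url, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(input),
// });
// console.log((await res.json()).data.id);

console.log(url.startsWith('https://api.apify.com/v2/acts/')); // true
```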
⚡ Performance & Cost
- Uses BasicCrawler + got – no headless browsers.
- Very low compute usage even for hundreds of workflows.
- Concurrency is configurable via `maxConcurrency`.
🧑💻 Development Notes
Built with the Apify SDK, using Crawlee's `BasicCrawler` and `got` for HTTP requests (no headless browsers).

Run locally:

```shell
npm install
apify run
```
📞 Customisation & Support
If you’d like to:
- Add GitHub repository scraping for related assets
- Enrich templates with extra metadata
- Push results into your own database or CRM
- Build bespoke scrapers or automation workflows
…feel free to contact the author via the Apify profile.
They are available for custom automation, n8n integration, and data-extraction projects.
---
title: "n8n Template Scraper – Workflow JSON, Nodes & Metadata"
slug: "n8n-template-scraper-workflow-json-nodes-metadata"
description: "Fast, lightweight scraper for the public n8n.io workflow template library on Apify."
tags:
  - n8n
  - web-scraping
  - apify
  - automation
  - workflows
  - templates
date: "2025-11-26"
---