npm Scraper
Scrape npm packages — names, versions, downloads, dependencies, and metadata. Search the npm registry for package data.
Pricing: Pay per event
Developer: Stas Persiianenko
Scrape npm packages from the npm registry, the world's largest JavaScript package registry. Search by keyword and get package names, versions, descriptions, monthly download counts, quality scores, and repository links.
What does npm Scraper do?
npm Scraper uses the npm registry's public API to search for Node.js packages and extract full metadata. It fetches package details including versions, descriptions, keywords, publishers, maintainers, and license information, and can optionally enrich results with monthly download counts from the npm downloads API.
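For context, the two public endpoints described above can also be called directly. A minimal sketch, assuming the documented npm registry search and downloads routes; the helper names are illustrative, not the Actor's actual code:

```javascript
// Build URLs for the two public npm APIs the Actor is described as using.
const searchUrl = (query, size = 20) =>
  `https://registry.npmjs.org/-/v1/search?text=${encodeURIComponent(query)}&size=${size}`;

// Scoped names like @nestjs/core keep their slash in the downloads route.
const downloadsUrl = (pkg) =>
  `https://api.npmjs.org/downloads/point/last-month/${pkg}`;

// Usage (requires network): fetch search results, keep name/version/score.
async function searchPackages(query, size = 20) {
  const res = await fetch(searchUrl(query, size));
  const { objects } = await res.json();
  return objects.map(({ package: pkg, score }) => ({
    name: pkg.name,
    version: pkg.version,
    finalScore: score.final,
  }));
}
```

The search response also carries the per-package quality, popularity, and maintenance scores that the Actor exposes in its output.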
Why scrape npm?
npm is the world's largest software registry with over 2 million packages. It's the definitive source for understanding the JavaScript and Node.js ecosystem.
Key reasons to scrape it:
- Ecosystem analysis — Map the JavaScript package landscape for any domain
- Technology research — Find the most popular libraries for specific use cases
- Competitive intelligence — Track download trends for competing packages
- Developer tools — Build dashboards or recommendation engines for developers
- Security research — Monitor packages for quality scores and maintenance status
Use cases
- JavaScript developers finding the best packages for their projects
- Engineering managers evaluating library adoption and maintenance health
- Technical writers researching popular packages for tutorials
- Open-source maintainers tracking competitor package adoption
- Security teams auditing dependency health and quality scores
- Researchers studying open-source package ecosystems
How to scrape npm
- Go to npm Scraper on Apify Store
- Enter one or more search keywords
- Enable or disable download count enrichment
- Set result limits
- Click Start and wait for results
- Download data as JSON, CSV, or Excel
Input parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| searchQueries | string[] | (required) | Keywords to search for |
| includeDownloads | boolean | true | Fetch monthly download counts |
| maxResultsPerSearch | integer | 50 | Max packages per keyword (max 250) |
Input example
```json
{
  "searchQueries": ["web framework", "testing library"],
  "includeDownloads": true,
  "maxResultsPerSearch": 20
}
```
Output
Each package in the dataset contains:
| Field | Type | Description |
|---|---|---|
| name | string | Package name |
| version | string | Latest version |
| description | string | Package description |
| keywords | string[] | Package keywords |
| license | string | License type |
| publisher | string | Publisher username |
| maintainers | string[] | Maintainer usernames |
| homepage | string | Homepage URL |
| repository | string | Repository URL |
| npmUrl | string | npm package page URL |
| monthlyDownloads | number | Downloads in the last month |
| popularityScore | number | npm popularity score (0-1) |
| qualityScore | number | npm quality score (0-1) |
| maintenanceScore | number | npm maintenance score (0-1) |
| finalScore | number | npm composite score |
| scrapedAt | string | ISO timestamp of extraction |
Output example
```json
{
  "name": "express",
  "version": "5.1.0",
  "description": "Fast, unopinionated, minimalist web framework",
  "keywords": ["express", "framework", "web", "rest", "restful", "router", "app", "api"],
  "license": "MIT",
  "publisher": "wesleytodd",
  "maintainers": ["wesleytodd", "ljharb"],
  "homepage": "https://expressjs.com/",
  "repository": "git+https://github.com/expressjs/express.git",
  "npmUrl": "https://www.npmjs.com/package/express",
  "monthlyDownloads": 295069103,
  "popularityScore": 1,
  "qualityScore": 1,
  "maintenanceScore": 1,
  "finalScore": 465.07,
  "scrapedAt": "2026-03-03T03:50:00.123Z"
}
```
How much does it cost to scrape npm?
npm Scraper uses pay-per-event pricing:
| Event | Price |
|---|---|
| Run started | $0.001 |
| Package extracted | $0.001 per package |
Cost examples
| Scenario | Packages | Cost |
|---|---|---|
| Quick search | 20 | $0.021 |
| Ecosystem survey | 100 | $0.101 |
| Large analysis | 250 | $0.251 |
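The rows above follow directly from the event prices: one run-start event plus one event per package. A sketch of the arithmetic, with prices kept in thousandths of a dollar to avoid float rounding:

```javascript
// Pricing events in thousandths of a dollar (from the pricing table).
const RUN_START = 1;    // $0.001 per run started
const PER_PACKAGE = 1;  // $0.001 per package extracted

const runCostUsd = (packages) => (RUN_START + packages * PER_PACKAGE) / 1000;

console.log(runCostUsd(20));   // 0.021 — "Quick search" row
console.log(runCostUsd(250));  // 0.251 — "Large analysis" row
```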
Platform costs are negligible — typically under $0.001 per run.
Using npm Scraper with the Apify API
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/npm-scraper').call({
    searchQueries: ['web framework'],
    includeDownloads: true,
    maxResultsPerSearch: 20,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} packages`);
items.forEach((pkg) => {
    console.log(`${pkg.name} v${pkg.version} (${pkg.monthlyDownloads.toLocaleString()} downloads/month)`);
});
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/npm-scraper').call(run_input={
    'searchQueries': ['web framework'],
    'includeDownloads': True,
    'maxResultsPerSearch': 20,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(f'Found {len(items)} packages')
for pkg in items:
    print(f"{pkg['name']} v{pkg['version']} ({pkg['monthlyDownloads']:,} downloads/month)")
```
Integrations
npm Scraper works with all Apify integrations:
- Scheduled runs — Track package popularity trends over time
- Webhooks — Get notified when a scrape completes
- API — Trigger runs and fetch results programmatically
- Google Sheets — Export package data to a spreadsheet
- Slack — Share trending packages with your team
Connect to Zapier, Make, or Google Sheets for automated workflows.
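As one concrete integration pattern, a webhook target only needs to read the run's dataset ID from the notification. A minimal sketch, assuming the standard Apify webhook payload shape (an eventType string plus a resource object describing the run); the function name is illustrative:

```javascript
// Extract what a downstream job needs from an Apify webhook notification.
function parseWebhook(payload) {
  const { eventType, resource } = payload;
  return {
    succeeded: eventType === 'ACTOR.RUN.SUCCEEDED',
    datasetId: resource?.defaultDatasetId ?? null,
  };
}

// Example payload (trimmed to the fields used above).
const example = {
  eventType: 'ACTOR.RUN.SUCCEEDED',
  resource: { defaultDatasetId: 'abc123' },
};
console.log(parseWebhook(example)); // { succeeded: true, datasetId: 'abc123' }
```

With the dataset ID in hand, the run's items can be fetched via the Apify API and forwarded to Sheets, Slack, or any other destination.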
Tips
- Compare download counts to identify the most adopted solution for a given problem
- Check quality and maintenance scores to evaluate package health before adopting
- Use keywords in the output to discover related packages
- Monitor monthly downloads over time to spot growing or declining packages
- Multiple search queries let you compare ecosystem segments in one run
- Set includeDownloads: false for faster runs when you only need metadata
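The first two tips can be combined into one pass over the dataset. An illustrative helper (not part of the Actor) that drops poorly maintained packages and ranks the rest by adoption:

```javascript
// Shortlist packages: require a minimum maintenance score, then sort by
// monthly downloads, highest first.
const shortlist = (items, minMaintenance = 0.5) =>
  items
    .filter((p) => p.maintenanceScore >= minMaintenance)
    .sort((a, b) => b.monthlyDownloads - a.monthlyDownloads)
    .map((p) => p.name);

// Example with dataset-shaped records:
const picks = shortlist([
  { name: 'a', maintenanceScore: 0.9, monthlyDownloads: 100 },
  { name: 'b', maintenanceScore: 0.2, monthlyDownloads: 900 },
  { name: 'c', maintenanceScore: 0.8, monthlyDownloads: 500 },
]);
console.log(picks); // ['c', 'a'] — 'b' is dropped despite its downloads
```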
FAQ
How many packages can I search? Each search returns up to 250 packages. Use multiple search queries to cover different topics.
Does it include dependency information? The search API returns metadata and scores. For full dependency trees, you'd need to query individual package endpoints.
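To follow up on that answer: the per-package registry route does expose dependency ranges. A minimal sketch, assuming the public registry.npmjs.org package endpoint; the helper names are illustrative:

```javascript
// Per-package manifest URL; 'latest' resolves the current dist-tag.
const packageUrl = (name, tag = 'latest') =>
  `https://registry.npmjs.org/${name}/${tag}`;

// Usage (requires network): returns { depName: semverRange, ... }.
async function dependenciesOf(name) {
  const res = await fetch(packageUrl(name));
  const manifest = await res.json();
  return manifest.dependencies ?? {};
}
```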
Are scoped packages supported?
Yes — both scoped (e.g. @nestjs/core) and unscoped packages are fully supported, including download counts.
How often are download counts updated? npm download counts are updated daily. Monthly counts cover the last 30 days.
What do the scores mean? npm calculates three scores: popularity (download counts and dependents), quality (tests, docs, stability), and maintenance (freshness, issue responsiveness). The final score combines all three.
Use npm Scraper with Claude AI (MCP)
You can integrate npm Scraper as a tool in Claude AI or any MCP-compatible client. This lets you ask Claude to fetch npm data in natural language.
Setup
CLI:
claude mcp add npm-scraper -- npx -y @apify/actors-mcp-server@latest --actors=automation-lab/npm-scraper
JSON config (Claude Desktop, Cline, etc.):
```json
{
  "mcpServers": {
    "npm-scraper": {
      "command": "npx",
      "args": ["-y", "@apify/actors-mcp-server@latest", "--actors=automation-lab/npm-scraper"]
    }
  }
}
```
Set your APIFY_TOKEN as an environment variable or pass it via --token.
Example prompts
- "Search npm for React state management libraries"
- "Get download stats and metadata for these npm packages"
- "Compare the most popular Node.js web frameworks on npm"
cURL
```shell
curl "https://api.apify.com/v2/acts/automation-lab~npm-scraper/run-sync-get-dataset-items?token=YOUR_API_TOKEN" \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"searchQueries": ["web framework"], "includeDownloads": true, "maxResultsPerSearch": 20}'
```
Troubleshooting
Download counts show 0 for some packages. Some very new or private-scope packages may not have download stats available in the npm downloads API; the scraper returns 0 in those cases.
I'm not finding a package I know exists. The npm search API ranks by relevance and may not surface niche packages with generic keywords. Try searching for the exact package name.
Other developer tools on Apify
- PyPI Scraper — scrape Python package data from PyPI
- Crates Scraper — extract Rust crate data from crates.io
- Homebrew Scraper — scrape Homebrew formula data
- Docker Hub Scraper — extract Docker image metadata from Docker Hub
- Pub.dev Scraper — scrape Dart and Flutter package data
- NS Record Checker — check nameserver DNS records for domains

