npm Scraper
Scrape npm packages — names, versions, downloads, dependencies, and metadata. Search the npm registry for package data.
Pricing: Pay per event
Developer: Stas Persiianenko
Scrape npm packages from the npm registry, the world's largest JavaScript package registry. Search by keyword and get package names, versions, descriptions, monthly download counts, quality scores, and repository links.
What does npm Scraper do?
npm Scraper uses the npm registry public API to search for Node.js packages and extract full metadata. It fetches package details including versions, descriptions, keywords, publishers, maintainers, and license information. Optionally enriches results with monthly download counts from the npm downloads API.
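To illustrate what "full metadata" looks like, here is a minimal sketch of how one search result maps onto the scraper's output fields. The sample object below mirrors the shape returned by the npm registry's public `/-/v1/search` endpoint (in a real run the data would be fetched over HTTP); the `to_record` helper is a hypothetical illustration, not the actor's actual code.

```python
# Sample result in the shape of the npm registry's /-/v1/search response.
sample_result = {
    "package": {
        "name": "express",
        "version": "5.1.0",
        "description": "Fast, unopinionated, minimalist web framework",
        "keywords": ["express", "framework", "web"],
        "publisher": {"username": "wesleytodd"},
        "maintainers": [{"username": "wesleytodd"}, {"username": "ljharb"}],
        "links": {"npm": "https://www.npmjs.com/package/express"},
    },
    "score": {
        "final": 465.07,
        "detail": {"quality": 1, "popularity": 1, "maintenance": 1},
    },
}

def to_record(result: dict) -> dict:
    """Flatten one search result into a scraper-style output record."""
    pkg = result["package"]
    detail = result["score"]["detail"]
    return {
        "name": pkg["name"],
        "version": pkg["version"],
        "description": pkg.get("description", ""),
        "keywords": pkg.get("keywords", []),
        "publisher": pkg["publisher"]["username"],
        "maintainers": [m["username"] for m in pkg.get("maintainers", [])],
        "npmUrl": pkg["links"]["npm"],
        "popularityScore": detail["popularity"],
        "qualityScore": detail["quality"],
        "maintenanceScore": detail["maintenance"],
        "finalScore": result["score"]["final"],
    }

record = to_record(sample_result)
print(record["name"], record["version"])
```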
Who is it for?
- 💻 JavaScript developers — discovering and comparing packages for their projects
- 📊 Technology analysts — tracking npm ecosystem trends and package adoption rates
- 🏢 Engineering managers — evaluating library health, maintenance status, and popularity
- 🔒 Security teams — auditing dependency quality scores and version histories
- 📝 Technical writers — researching popular packages for tutorials and comparison guides
Why scrape npm?
npm is the world's largest software registry with over 2 million packages. It's the definitive source for understanding the JavaScript and Node.js ecosystem.
Key reasons to scrape it:
- Ecosystem analysis — Map the JavaScript package landscape for any domain
- Technology research — Find the most popular libraries for specific use cases
- Competitive intelligence — Track download trends for competing packages
- Developer tools — Build dashboards or recommendation engines for developers
- Security research — Monitor packages for quality scores and maintenance status
Use cases
- JavaScript developers finding the best packages for their projects
- Engineering managers evaluating library adoption and maintenance health
- Technical writers researching popular packages for tutorials
- Open-source maintainers tracking competitor package adoption
- Security teams auditing dependency health and quality scores
- Researchers studying open-source package ecosystems
How to scrape npm
- Go to npm Scraper on Apify Store
- Enter one or more search keywords
- Enable or disable download count enrichment
- Set result limits
- Click Start and wait for results
- Download data as JSON, CSV, or Excel
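If you fetch the dataset as JSON, you can also flatten it to CSV yourself. A minimal sketch using only Python's standard library; the `items` list here is hypothetical sample data shaped like this actor's dataset records.

```python
import csv
import io

# Hypothetical dataset items, shaped like this actor's output records.
items = [
    {"name": "express", "version": "5.1.0", "monthlyDownloads": 295069103},
    {"name": "fastify", "version": "5.2.0", "monthlyDownloads": 24000000},
]

def items_to_csv(items: list[dict]) -> str:
    """Write dataset items to a CSV string, one row per package."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(items[0].keys()))
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

print(items_to_csv(items))
```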
Input parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| searchQueries | string[] | (required) | Keywords to search for |
| includeDownloads | boolean | true | Fetch monthly download counts |
| maxResultsPerSearch | integer | 50 | Max packages per keyword (max 250) |
Input example
```json
{
  "searchQueries": ["web framework", "testing library"],
  "includeDownloads": true,
  "maxResultsPerSearch": 20
}
```
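The defaults and limits above can be enforced before calling the actor. The helper below is a hypothetical sketch: the clamping rule mirrors the documented 250-package cap and the default of 50, but the function name and signature are illustrative, not part of the actor.

```python
def normalize_input(search_queries, include_downloads=True, max_results=50):
    """Validate and clamp actor input to the documented limits."""
    if not search_queries:
        raise ValueError("searchQueries is required and must be non-empty")
    # The actor caps results at 250 per keyword, so clamp to [1, 250].
    max_results = max(1, min(int(max_results), 250))
    return {
        "searchQueries": list(search_queries),
        "includeDownloads": bool(include_downloads),
        "maxResultsPerSearch": max_results,
    }

print(normalize_input(["web framework"], max_results=999))
```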
Output
Each package in the dataset contains:
| Field | Type | Description |
|---|---|---|
| name | string | Package name |
| version | string | Latest version |
| description | string | Package description |
| keywords | string[] | Package keywords |
| license | string | License type |
| publisher | string | Publisher username |
| maintainers | string[] | Maintainer usernames |
| homepage | string | Homepage URL |
| repository | string | Repository URL |
| npmUrl | string | npm package page URL |
| monthlyDownloads | number | Downloads in the last month |
| popularityScore | number | npm popularity score (0-1) |
| qualityScore | number | npm quality score (0-1) |
| maintenanceScore | number | npm maintenance score (0-1) |
| finalScore | number | npm composite score |
| scrapedAt | string | ISO timestamp of extraction |
Output example
```json
{
  "name": "express",
  "version": "5.1.0",
  "description": "Fast, unopinionated, minimalist web framework",
  "keywords": ["express", "framework", "web", "rest", "restful", "router", "app", "api"],
  "license": "MIT",
  "publisher": "wesleytodd",
  "maintainers": ["wesleytodd", "ljharb"],
  "homepage": "https://expressjs.com/",
  "repository": "git+https://github.com/expressjs/express.git",
  "npmUrl": "https://www.npmjs.com/package/express",
  "monthlyDownloads": 295069103,
  "popularityScore": 1,
  "qualityScore": 1,
  "maintenanceScore": 1,
  "finalScore": 465.07,
  "scrapedAt": "2026-03-03T03:50:00.123Z"
}
```
How much does it cost to scrape npm?
npm Scraper uses pay-per-event pricing:
| Event | Price |
|---|---|
| Run started | $0.001 |
| Package extracted | $0.001 per package |
Cost examples
| Scenario | Packages | Cost |
|---|---|---|
| Quick search | 20 | $0.021 |
| Ecosystem survey | 100 | $0.101 |
| Large analysis | 250 | $0.251 |
Platform costs are negligible — typically under $0.001 per run.
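With this pricing, the total is one run-start event plus one event per extracted package. A quick sketch of the arithmetic behind the cost table above:

```python
RUN_STARTED = 0.001   # flat fee charged once per run
PER_PACKAGE = 0.001   # fee charged per extracted package

def run_cost(packages: int) -> float:
    """Estimated cost in USD for one run extracting `packages` packages."""
    return round(RUN_STARTED + packages * PER_PACKAGE, 3)

for n in (20, 100, 250):
    print(f"{n} packages -> ${run_cost(n)}")
```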
Using npm Scraper with the Apify API
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/npm-scraper').call({
    searchQueries: ['web framework'],
    includeDownloads: true,
    maxResultsPerSearch: 20,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} packages`);
items.forEach((pkg) => {
    console.log(`${pkg.name} v${pkg.version} (${pkg.monthlyDownloads.toLocaleString()} downloads/month)`);
});
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/npm-scraper').call(run_input={
    'searchQueries': ['web framework'],
    'includeDownloads': True,
    'maxResultsPerSearch': 20,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(f'Found {len(items)} packages')
for pkg in items:
    print(f"{pkg['name']} v{pkg['version']} ({pkg['monthlyDownloads']:,} downloads/month)")
```
Use with Claude AI (MCP)
This actor is available as a tool in Claude AI through the Model Context Protocol (MCP). Add it to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.
Setup for Claude Code
```shell
claude mcp add --transport http apify "https://mcp.apify.com"
```
Setup for Claude Desktop, Cursor, or VS Code
Add this to your MCP config file:
```json
{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com"
    }
  }
}
```
Example prompts
- "Search npm for React state management libraries and compare their download counts."
- "What are the most popular Node.js HTTP client packages on npm right now?"
- "Get metadata and quality scores for express, fastify, and koa so I can decide which to use."
Learn more in the Apify MCP documentation.
Integrations
npm Scraper works with all Apify integrations:
- Scheduled runs — Track package popularity trends over time
- Webhooks — Get notified when a scrape completes
- API — Trigger runs and fetch results programmatically
- Google Sheets — Export package data to a spreadsheet
- Slack — Share trending packages with your team
Connect to Zapier, Make, or Google Sheets for automated workflows.
Tips
- Compare download counts to identify the most adopted solution for a given problem
- Check quality and maintenance scores to evaluate package health before adopting
- Use keywords in the output to discover related packages
- Monitor monthly downloads over time to spot growing or declining packages
- Multiple search queries let you compare ecosystem segments in one run
- Set `includeDownloads: false` for faster runs when you only need metadata
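To act on the first tip, you can rank results by monthly downloads once the dataset is in hand. A short sketch with hypothetical records (the download figures here are illustrative, not live data):

```python
# Hypothetical dataset records for three competing web frameworks.
packages = [
    {"name": "koa", "monthlyDownloads": 8_000_000},
    {"name": "express", "monthlyDownloads": 295_069_103},
    {"name": "fastify", "monthlyDownloads": 24_000_000},
]

# Sort descending by monthly downloads to surface the most adopted option.
ranked = sorted(packages, key=lambda p: p["monthlyDownloads"], reverse=True)
for i, pkg in enumerate(ranked, start=1):
    print(f"{i}. {pkg['name']}: {pkg['monthlyDownloads']:,}/month")
```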
Legality
Scraping publicly available data is generally legal, as affirmed by the US Ninth Circuit Court of Appeals in hiQ Labs v. LinkedIn. This actor only accesses publicly available information and does not require authentication. Always review and comply with the target website's Terms of Service before scraping. For personal data, ensure compliance with GDPR, CCPA, and other applicable privacy regulations.
FAQ
How many packages can I search? Each search returns up to 250 packages. Use multiple search queries to cover different topics.
Does it include dependency information? The search API returns metadata and scores. For full dependency trees, you'd need to query individual package endpoints.
Are scoped packages supported? Yes — both scoped (e.g. @nestjs/core) and unscoped packages are fully supported, including download counts.
How often are download counts updated? npm download counts are updated daily. Monthly counts cover the last 30 days.
What do the scores mean? npm calculates three scores: popularity (download counts and dependents), quality (tests, docs, stability), and maintenance (freshness, issue responsiveness). The final score combines all three.
Download counts show 0 for some packages. Some very new or private-scope packages may not have download stats available on the npm downloads API. The scraper will return 0 in those cases.
I'm not finding a package I know exists. The npm search API ranks by relevance and may not surface niche packages with generic keywords. Try searching with the exact package name for best results.
Other developer tools on Apify
- PyPI Scraper — scrape Python package data from PyPI
- Crates Scraper — extract Rust crate data from crates.io
- Homebrew Scraper — scrape Homebrew formula data
- Docker Hub Scraper — extract Docker image metadata from Docker Hub
- Pub.dev Scraper — scrape Dart and Flutter package data
- NS Record Checker — check nameserver DNS records for domains