GitHub Trending Scraper
Scrape trending repositories from GitHub — stars, forks, language, description, and contributors.

Pricing: Pay per event
Developer: Stas Persiianenko
Last modified: 2 days ago
Scrape trending repositories from GitHub Trending. Get stars, forks, language, description, and contributor info for the hottest open-source projects.
What does GitHub Trending Scraper do?
GitHub Trending Scraper extracts data from GitHub's trending repositories page. It collects repository names, descriptions, star counts, fork counts, programming language, daily/weekly/monthly star gains, and top contributors.
You can filter by programming language (Python, JavaScript, Rust, etc.) and time range (today, this week, this month).
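These filters correspond to the parameters of GitHub's public trending page. A minimal sketch of how the inputs map to a trending URL (the exact request the Actor makes internally is an assumption; the URL scheme below is GitHub's documented public one):

```python
def trending_url(language: str = "", since: str = "daily", spoken_language_code: str = "") -> str:
    """Build a GitHub Trending URL from the Actor's input parameters.

    `language` becomes a path segment; `since` and `spoken_language_code`
    become query parameters. Empty values are omitted.
    """
    base = "https://github.com/trending"
    if language:
        base += f"/{language}"
    params = []
    if since:
        params.append(f"since={since}")
    if spoken_language_code:
        params.append(f"spoken_language_code={spoken_language_code}")
    return base + ("?" + "&".join(params) if params else "")

print(trending_url(language="python", since="weekly"))
# https://github.com/trending/python?since=weekly
```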
Why scrape GitHub Trending?
GitHub Trending is the go-to source for discovering popular and rising open-source projects. It's updated continuously and reflects what the developer community is actively using and starring.
Key reasons to scrape it:
- Developer tools discovery — Find new libraries and frameworks gaining traction
- Competitive intelligence — Track trending projects in your tech stack
- Investment research — Spot emerging technologies and developer tools
- Newsletter content — Curate weekly trending repos for developer newsletters
- Market research — Understand technology adoption trends
Use cases
- Developers discovering new tools and libraries in their stack
- CTOs and engineering managers tracking technology trends
- Venture capitalists identifying hot open-source projects
- Developer advocates curating content for newsletters and blogs
- Researchers studying open-source software adoption patterns
- Recruiters identifying active open-source contributors
How to scrape GitHub Trending
- Go to GitHub Trending Scraper on Apify Store
- Optionally select a programming language filter
- Choose a time range (today, this week, or this month)
- Click Start and wait for results
- Download data as JSON, CSV, or Excel
Input parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| language | string | "" | Programming language filter (e.g., "python", "javascript") |
| since | string | "daily" | Time range: daily, weekly, or monthly |
| spokenLanguageCode | string | "" | Spoken language code (e.g., "en", "zh") |
Input example
```json
{
  "language": "python",
  "since": "weekly"
}
```
Output
Each repository in the dataset contains:
| Field | Type | Description |
|---|---|---|
| rank | number | Position on the trending page |
| owner | string | Repository owner/organization |
| name | string | Repository name |
| fullName | string | Full name (owner/name) |
| url | string | Repository URL |
| description | string | Repository description |
| language | string | Primary programming language |
| stars | number | Total star count |
| forks | number | Total fork count |
| starsToday | number | Stars gained in the selected period |
| builtBy | array | Top contributors (username + avatar URL) |
| scrapedAt | string | ISO timestamp of extraction |
Output example
```json
{
  "rank": 1,
  "owner": "ruvnet",
  "name": "wifi-densepose",
  "fullName": "ruvnet/wifi-densepose",
  "url": "https://github.com/ruvnet/wifi-densepose",
  "description": "WiFi DensePose turns commodity WiFi signals into real-time human pose estimation...",
  "language": "Rust",
  "stars": 22361,
  "forks": 2659,
  "starsToday": 5096,
  "builtBy": [
    { "username": "ruvnet", "avatar": "https://avatars.githubusercontent.com/u/2934394" }
  ],
  "scrapedAt": "2026-03-03T02:47:49.561Z"
}
```
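Since each dataset item is a plain JSON object, post-processing the results is straightforward. A small sketch that filters a list of such records by minimum star gain (the sample data below is illustrative, not live output):

```python
# Illustrative records shaped like the Actor's output items.
repos = [
    {"fullName": "ruvnet/wifi-densepose", "language": "Rust", "stars": 22361, "starsToday": 5096},
    {"fullName": "example/small-lib", "language": "Python", "stars": 310, "starsToday": 12},
]

# Keep only repos that gained at least 100 stars in the selected period.
hot = [r for r in repos if r["starsToday"] >= 100]
print([r["fullName"] for r in hot])
# ['ruvnet/wifi-densepose']
```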
How much does it cost to scrape GitHub Trending?
GitHub Trending Scraper uses pay-per-event pricing:
| Event | Price |
|---|---|
| Run started | $0.001 |
| Repo extracted | $0.001 per repo |
Cost examples
| Scenario | Repos | Cost |
|---|---|---|
| Daily trending (all) | ~25 | $0.026 |
| Weekly Python | ~25 | $0.026 |
| Monthly TypeScript | ~25 | $0.026 |
Platform costs are negligible — typically under $0.001 per run.
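The totals in the table above follow directly from the two event prices: one run-started charge plus one charge per extracted repo. A quick sanity check:

```python
RUN_STARTED = 0.001  # charged once per run
PER_REPO = 0.001     # charged per extracted repository

def run_cost(repos: int) -> float:
    """Estimated pay-per-event cost for a single run."""
    return RUN_STARTED + repos * PER_REPO

print(f"${run_cost(25):.3f}")  # 25 trending repos -> $0.026
```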
Using GitHub Trending Scraper with the Apify API
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/github-trending-scraper').call({
    language: 'python',
    since: 'weekly',
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} trending repos`);
items.forEach(repo => {
    console.log(`#${repo.rank} ${repo.fullName} ⭐ ${repo.stars} (+${repo.starsToday})`);
});
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/github-trending-scraper').call(run_input={
    'language': 'python',
    'since': 'weekly',
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(f'Found {len(items)} trending repos')
for repo in items:
    print(f"#{repo['rank']} {repo['fullName']} ⭐ {repo['stars']} (+{repo['starsToday']})")
```
Integrations
GitHub Trending Scraper works with all Apify integrations:
- Scheduled runs — Track trending repos daily, weekly, or monthly
- Webhooks — Get notified when a scrape finishes
- API — Trigger runs and fetch results programmatically
- Google Sheets — Export trending repos directly to a spreadsheet
- Slack — Send daily trending digest to your team channel
Connect to Zapier, Make, or Google Sheets for automated workflows.
Tips
- Run daily on a schedule to build a historical dataset of trending repos
- Filter by language to focus on your tech stack
- Compare starsToday across runs to identify repos with sustained momentum
- Use builtBy data to discover active open-source contributors in a space
- Combine with GitHub API for deeper analysis of trending repos (README content, issue count, etc.)
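The "sustained momentum" tip above can be sketched as a comparison between two scheduled runs: a repo that appears in both, with a healthy star gain each time, is trending consistently rather than spiking once. (The sample data is illustrative.)

```python
def sustained(run_a, run_b, min_gain=50):
    """Repos present in both runs with at least `min_gain` starsToday in each."""
    gains_a = {r["fullName"]: r["starsToday"] for r in run_a}
    return [
        r["fullName"]
        for r in run_b
        if r["fullName"] in gains_a
        and r["starsToday"] >= min_gain
        and gains_a[r["fullName"]] >= min_gain
    ]

monday = [{"fullName": "a/b", "starsToday": 120}, {"fullName": "c/d", "starsToday": 30}]
tuesday = [{"fullName": "a/b", "starsToday": 95}, {"fullName": "e/f", "starsToday": 400}]
print(sustained(monday, tuesday))  # ['a/b']
```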
FAQ
How many repos does it return? GitHub Trending typically shows 25 repositories per page. The exact number varies by time of day and filters applied.
Can I get trending developers instead of repos? Currently this scraper focuses on repositories. GitHub also has a trending developers page that could be supported in a future version.
How often does GitHub update trending? GitHub Trending is updated continuously throughout the day. Running the scraper at different times may yield different results.
Is this affected by rate limiting? The scraper makes a single request per run, so rate limiting is not a concern.
Use GitHub Trending Scraper with Claude AI (MCP)
You can integrate GitHub Trending Scraper as a tool in Claude AI or any MCP-compatible client. This lets you ask Claude to fetch GitHub Trending data in natural language.
Setup
CLI:
$ claude mcp add github-trending-scraper -- npx -y @anthropic-ai/apify-mcp-server@latest --actors=automation-lab/github-trending-scraper
JSON config (Claude Desktop, Cline, etc.):
```json
{
  "mcpServers": {
    "github-trending-scraper": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/apify-mcp-server@latest", "--actors=automation-lab/github-trending-scraper"]
    }
  }
}
```
Set your APIFY_TOKEN as an environment variable or pass it via the --token flag.
Example prompts
- "What repos are trending on GitHub today?"
- "Get this week's trending Python repositories"
- "Show me the monthly trending Rust repositories on GitHub"
cURL
```shell
curl "https://api.apify.com/v2/acts/automation-lab~github-trending-scraper/run-sync-get-dataset-items?token=YOUR_API_TOKEN" \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"language": "python", "since": "weekly"}'
```
Why does the scraper sometimes return fewer than 25 repos? GitHub Trending's page size varies, and some language/period combinations simply have fewer trending repos. This is normal GitHub behavior, not a scraper issue.
starsToday seems too high — is it accurate? The starsToday field reflects stars gained in the selected period (daily, weekly, or monthly), not just today. For weekly or monthly trending, these numbers represent the full period's star gains.
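To compare repos scraped with different `since` values on an equal footing, you can normalize the period gain to an approximate per-day rate. (Using 7 and 30 days is a rough assumption about the period lengths, not an exact GitHub window.)

```python
# Approximate days per trending period (assumption for normalization).
PERIOD_DAYS = {"daily": 1, "weekly": 7, "monthly": 30}

def stars_per_day(stars_today: int, since: str) -> float:
    """Normalize a period star gain to an approximate per-day rate."""
    return stars_today / PERIOD_DAYS[since]

print(stars_per_day(5096, "weekly"))  # ~728 stars/day
```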
Other developer tools
- GitHub Scraper — Repositories, profiles, trending, and search results from GitHub
- Hacker News Scraper — Stories from Hacker News front page, newest, Ask HN, and more
- Homebrew Scraper — Homebrew formulas and casks with install counts
- Stack Overflow Scraper — Questions, answers, and tags from Stack Overflow
- npm Scraper — Package metadata from the npm registry
- PyPI Scraper — Python package data from PyPI
- Crates Scraper — Rust crate metadata from crates.io
- Hash Generator — Generate MD5, SHA-1, SHA-256, and SHA-512 hashes