Awesome-List Parser
Pricing: Pay per usage
Parse any GitHub awesome-* list (awesome-mcp-servers, awesome-rust, awesome-llm, etc.) into structured JSON entries with category, name, URL, and description. Optionally enrich with star count + freshness from GitHub API. One source of truth for any curated directory.
Developer: Roberto Espino
Actor stats: 0 bookmarked · 2 total users · 1 monthly active user · last modified 12 days ago
One source of truth for any GitHub awesome-* list.
Hundreds of awesome-* lists exist on GitHub — awesome-mcp-servers, awesome-rust, awesome-llm, awesome-react, etc. They're great for humans but a pain for tools: every list is a markdown README that needs custom parsing.
This actor turns any awesome list into structured JSON in one call. Optionally enriches each GitHub-hosted entry with live metadata (stars, last push, language, license, archived state) so you can rank by signal instead of trusting the curator's order.
What it does
- Fetches the README of any awesome-* GitHub repo via the public API
- Parses standard awesome-list markdown (headers as categories, bulleted `[name](url) - description` entries)
- Optionally enriches each `github.com/...` entry with live repo metadata
- Returns structured JSON you can immediately consume in your agent, dashboard, or analysis pipeline
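The actor's own parser isn't published here, but the shape it describes (headers as categories, `[name](url) - description` bullets, code fences skipped) can be sketched in a few lines. This is an illustrative re-implementation, not the actor's actual code:

```python
import re

HEADER = re.compile(r"^#{1,4}\s+(.*)")
ENTRY = re.compile(r"^[-*]\s+\[([^\]]+)\]\(([^)]+)\)\s*[-:]?\s*(.*)")

def parse_awesome(markdown: str):
    """Parse awesome-list markdown into {category, name, url, description} dicts."""
    entries, category, in_fence = [], None, False
    for line in markdown.splitlines():
        if line.lstrip().startswith("```"):
            in_fence = not in_fence  # toggle on fence delimiters
            continue
        if in_fence:
            continue  # skip everything inside code fences
        if m := HEADER.match(line):
            category = m.group(1).strip()
            continue
        if (m := ENTRY.match(line.strip())) and category:
            entries.append({
                "category": category,
                "name": m.group(1),
                "url": m.group(2),
                "description": m.group(3).strip(),
            })
    return entries
```

Bullets that appear before the first header have no category and are dropped, which matches how most awesome lists place their table of contents.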
When to use it
- You're building an AI agent and need to ingest a curated directory (MCP servers, Rust crates, LLM tools, etc.)
- You're auditing an awesome list for stale or archived entries (run with `enrich: true` and check the `pushed_at` and `archived` fields)
- You're building a meta-directory that aggregates multiple awesome lists
Input
| Field | Type | Default | Notes |
|---|---|---|---|
| `repo` | string | punkpeye/awesome-mcp-servers | Awesome-list repo in owner/name form. |
| `enrich` | boolean | false | Fetch GitHub metadata for each `github.com/...` entry. Slower but useful for ranking. |
| `limit` | int | 200 | Cap parsed entries (1-1000). Pair with `enrich=true` to control rate-limit usage. |
| `githubToken` | secret | "" | Optional GitHub PAT — required if enriching a list with 60+ GitHub entries (lifts the rate limit from 60 to 5,000 req/hr). |
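What `enrich` and `githubToken` imply can be sketched with a hypothetical helper (not the actor's actual code). The HTTP fetcher is injected so the sketch stays self-contained; the field names come from the real GitHub repos API:

```python
import re

def enrich_entry(entry: dict, fetch_json, token: str = "") -> dict:
    """Attach github_* metadata to an entry whose URL points at a GitHub repo.

    fetch_json(url, headers) -> dict is injected; in a real run it would wrap
    an HTTP GET against api.github.com (authenticated if a token is given).
    """
    m = re.match(r"https?://github\.com/([^/]+)/([^/#?]+)", entry["url"])
    if not m:
        return entry  # non-GitHub URLs pass through unenriched
    repo = f"{m.group(1)}/{m.group(2)}"
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    meta = fetch_json(f"https://api.github.com/repos/{repo}", headers)
    entry.update({
        "github_repo": repo,
        "stars": meta.get("stargazers_count"),
        "forks": meta.get("forks_count"),
        "pushed_at": meta.get("pushed_at"),
        "language": meta.get("language"),
        "license": (meta.get("license") or {}).get("spdx_id"),
        "archived": meta.get("archived", False),
    })
    return entry
```

One repos-API call per GitHub entry is why the unauthenticated 60 req/hr ceiling is hit quickly on larger lists.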
Output
Each dataset record:
```json
{
  "category": "Databases / SQL",
  "name": "postgres-mcp",
  "url": "https://github.com/example/postgres-mcp",
  "description": "Read-only Postgres MCP server with schema introspection.",
  "github_repo": "example/postgres-mcp",
  "stars": 234,
  "forks": 12,
  "pushed_at": "2026-04-15T...",
  "language": "TypeScript",
  "license": "MIT",
  "archived": false
}
```
The `github_*` fields appear only when `enrich=true` and the entry's URL points at a GitHub repo.
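The audit use case above boils down to filtering these records on `archived` and `pushed_at`. A minimal downstream filter (hypothetical helper, one of many ways to consume the dataset):

```python
from datetime import datetime, timezone

def stale_or_archived(records, max_age_days: int = 365):
    """Return enriched entries that are archived or pushed more than
    max_age_days ago. Unenriched entries (no pushed_at) are skipped."""
    now = datetime.now(timezone.utc)
    flagged = []
    for r in records:
        if r.get("archived"):
            flagged.append(r)
            continue
        pushed = r.get("pushed_at")
        if pushed:
            dt = datetime.fromisoformat(pushed.replace("Z", "+00:00"))
            if (now - dt).days > max_age_days:
                flagged.append(r)
    return flagged
```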
Example runs
Just parse a list, fast:
```json
{ "repo": "punkpeye/awesome-mcp-servers" }
```
Audit a list — find dead/archived entries:
```json
{ "repo": "rust-unofficial/awesome-rust", "enrich": true, "limit": 500, "githubToken": "ghp_..." }
```
Aggregate two lists in one workflow: run this actor twice, merge the datasets in your downstream step.
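The downstream merge step can be as simple as deduplicating by normalized URL; a sketch (hypothetical helper, your merge policy may differ):

```python
def merge_lists(*datasets):
    """Merge entries from multiple parsed awesome lists.

    Deduplicates by URL (case-insensitive, trailing slash ignored);
    the first occurrence wins, so pass datasets in priority order.
    """
    seen, merged = set(), []
    for items in datasets:
        for item in items:
            key = item["url"].rstrip("/").lower()
            if key not in seen:
                seen.add(key)
                merged.append(item)
    return merged
```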
Limitations
- Only standard awesome-list markdown shape (headers + bullets with link + description). Lists with custom rendering, tables, or HTML-only entries may parse incompletely.
- Code fences are skipped (good — most lists put example commands in fences, not entries).
- Non-GitHub URLs are returned as-is without enrichment (npm, gitlab, custom domains).
- Lexical only. No semantic merge across categories that mean the same thing.
Companion actors
- `mcp-server-discovery` — search GitHub for MCP-tagged repos (no awesome-list dependency).
License
Apache-2.0.