# 🤖 GitHub Documentation Intelligence

*AI-powered documentation extraction and analysis for GitHub repositories*

Extract, structure, and analyze documentation from any GitHub repository in seconds. Perfect for building RAG systems, onboarding developers, and auditing documentation quality.
## 🎯 What It Does

Automatically extracts and structures:
- ✅ README files - Main project documentation
- ✅ Documentation folders - All markdown files from docs/, documentation/, etc.
- ✅ Code documentation - Docstrings from Python, JavaScript, TypeScript files
- ✅ Metadata - Repository info, stars, language, topics
- ✅ Statistics - Word counts, file counts, documentation coverage
## 🚀 Quick Start

### Input Example

```json
{
  "url": "https://github.com/pallets/flask",
  "maxFiles": 20,
  "extractCodeDocs": true
}
```
### Output Example

```json
{
  "status": "success",
  "metadata": {
    "name": "flask",
    "description": "The Python micro framework",
    "language": "Python",
    "stars": 65000,
    "url": "https://github.com/pallets/flask"
  },
  "readme": {
    "filename": "README.md",
    "content": "...",
    "sections": [...],
    "word_count": 450
  },
  "documentation_files": [...],
  "code_documentation": [...],
  "combined_markdown": "...",
  "statistics": {
    "has_readme": true,
    "documentation_files_count": 23,
    "code_files_with_docs": 15,
    "total_words": 12500,
    "total_docstrings": 87
  }
}
```
## ⭐ Key Features

### 📖 Comprehensive Extraction
- Extracts README, docs folders, and code docstrings
- Supports Python, JavaScript, TypeScript
- Handles nested documentation structures
- Preserves markdown formatting and sections
### 🎯 Structured Output
- Clean JSON format ready for processing
- Pydantic models for type safety
- Combined markdown for easy reading
- Detailed statistics and metadata
### 🛡️ Robust & Reliable
- Proper error handling
- Rate limit management
- Partial success handling
- Detailed logging
### ⚡ Fast & Efficient
- Async operations
- Smart file filtering
- Configurable limits
- Optimized API usage
## 💡 Use Cases
### 🤖 RAG Systems

Extract clean documentation for training AI models:

```python
# Use extracted docs for embeddings
docs = result['combined_markdown']
chunks = create_embeddings(docs)
```
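`create_embeddings` above stands in for your embedding pipeline. A minimal chunking sketch, assuming you split the combined markdown on headings before embedding (the chunking strategy is an assumption, not something the Actor does for you):

```python
def chunk_by_heading(markdown: str) -> list[str]:
    """Split combined markdown into one chunk per heading block."""
    chunks, current = [], []
    for line in markdown.splitlines():
        if line.startswith("#") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

# `result` is one dataset item from the Actor run.
chunks = chunk_by_heading(result["combined_markdown"])
```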
### 👨‍💻 Developer Onboarding

Generate comprehensive repo overviews (see the sketch after this list):
- Understand project structure
- Find key documentation
- Identify important files
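As a rough illustration, the `metadata` and `statistics` blocks from the output example above are enough for a one-screen overview. Here `result` is one dataset item, and the formatting is just a suggestion:

```python
# Print a compact repo summary from the Actor's output fields.
meta, stats = result["metadata"], result["statistics"]
print(f"{meta['name']} ({meta['language']}, {meta['stars']} stars)")
print(f"README: {stats['has_readme']}, "
      f"doc files: {stats['documentation_files_count']}, "
      f"total words: {stats['total_words']}")
```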
### 📊 Documentation Audits

Analyze documentation quality (a sketch follows the list):
- Check completeness
- Identify gaps
- Track improvements
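Built-in A-F quality scoring is still on the roadmap (see Coming Soon below), but a naive completeness check over the `statistics` block is easy to sketch. The thresholds here are arbitrary assumptions, not the Actor's scoring:

```python
def find_gaps(stats: dict) -> list[str]:
    """Flag obvious documentation gaps from the Actor's statistics block."""
    gaps = []
    if not stats["has_readme"]:
        gaps.append("missing README")
    if stats["documentation_files_count"] == 0:
        gaps.append("no files under docs/")
    if stats["total_words"] < 500:  # arbitrary threshold
        gaps.append("very little prose (<500 words)")
    return gaps or ["no obvious gaps"]

print(find_gaps(result["statistics"]))
```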
### 🔍 Code Search

Enable semantic search over codebases (see the sketch below):
- Search through docstrings
- Find relevant code examples
- Understand APIs
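True semantic search would pair the extracted text with an embedding index; as a minimal stand-in, here is a plain keyword filter over the combined markdown (`result` is one dataset item, and the query is just an example):

```python
query = "request context"  # example search term
hits = [
    line for line in result["combined_markdown"].splitlines()
    if query.lower() in line.lower()
]
print(hits[:10])  # first ten matching lines
```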
## 🔧 Configuration

### GitHub Token (Recommended)

For private repos and higher rate limits (5,000 vs. 60 requests/hour):

1. Go to https://github.com/settings/tokens
2. Generate a new token (classic)
3. Select scopes: `repo` or `public_repo`
4. Add it to the input:

```json
"githubToken": "ghp_your_token"
```
### Options

| Option | Type | Default | Description |
|---|---|---|---|
| `maxFiles` | integer | 100 | Maximum files to process |
| `extractCodeDocs` | boolean | true | Extract code docstrings |
## 📊 Statistics Provided

- `has_readme`: Whether README exists
- `documentation_files_count`: Number of doc files found
- `code_files_with_docs`: Number of code files with docstrings
- `total_words`: Total documentation words
- `total_lines`: Total documentation lines
- `total_docstrings`: Total docstrings extracted
## 🛠️ Development

### Local Testing

```bash
# Install dependencies
pip install -r requirements.txt

# Run locally
apify run
```

### Project Structure

```
.
├── src/
│   ├── main.py            # Actor entry point
│   ├── extractor.py       # Extraction logic
│   ├── models.py          # Data models
│   └── utils.py           # Helper functions
├── .actor/
│   ├── actor.json         # Actor configuration
│   └── input_schema.json  # Input schema
├── requirements.txt       # Dependencies
└── Dockerfile             # Container config
```
## 🤝 Contributing

Issues and pull requests are welcome! This is an active project participating in the Apify $1M Challenge.
## 📄 License

Apache 2.0
## 💬 Support

- Questions? Join the Apify Discord
- Issues? Open a GitHub issue
- Need help? Check the Apify documentation
## 🎯 Coming Soon

- Documentation quality scoring (A-F grades)
- MCP server for AI agents
- Change detection and tracking
- Multi-repo comparison
- PDF documentation support
- Website documentation scraping
## FAQs

**Q: Why did extraction fail?**
A: Common reasons:
1. The repository doesn't exist (check the URL)
2. The repository is private (add a GitHub token)
3. The rate limit was exceeded (add a token for 5,000 requests/hour)
4. The repository is too large (reduce `maxFiles`)
**Q: What if I hit rate limits?**
A:
- Without a token: 60 requests/hour
- With a token: 5,000 requests/hour
- Get a token at https://github.com/settings/tokens
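If you want to see your remaining quota, GitHub exposes it at the standard `/rate_limit` endpoint. A quick sketch using the `requests` library (this is a plain GitHub API call, not part of this Actor):

```python
import requests

# Omit the header to see the anonymous (60/hour) quota instead.
headers = {"Authorization": "token ghp_your_token"}
core = requests.get("https://api.github.com/rate_limit",
                    headers=headers).json()["resources"]["core"]
print(f"{core['remaining']}/{core['limit']} requests remaining")
```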
**Q: Can I extract from private repos?**
A: Yes! Add your GitHub token in the input:

```json
{
  "source": {
    "url": "...",
    "githubToken": "ghp_your_token"
  }
}
```

**Q: What's the maximum repository size?**
A:
1. Max 500 files per run
2. Max 5 MB per file
3. Max 50 MB of total data
4. Adjust `maxFiles` if needed
**Q: Why are some files skipped?**
A: Files are skipped if they:
1. Are too large (>5 MB)
2. Can't be decoded (binary files)
3. Cause encoding errors
**Q: How long does extraction take?**
A:
1. Small repos (<100 files): 2-5 seconds
2. Medium repos (100-500 files): 10-30 seconds
3. Large repos (500+ files): 30-60 seconds
4. Max timeout: 4 minutes
Built with ❤️ for the Apify $1M Challenge

⭐ If you find this useful, please star the Actor!