πŸ€– GitHub Documentation Intelligence

AI-powered documentation extraction and analysis for GitHub repositories

Extract, structure, and analyze documentation from any GitHub repository in seconds. Perfect for building RAG systems, onboarding developers, and auditing documentation quality.


🎯 What It Does

Automatically extracts and structures:

  • βœ… README files - Main project documentation
  • βœ… Documentation folders - All markdown files from docs/, documentation/, etc.
  • βœ… Code documentation - Docstrings from Python, JavaScript, TypeScript files
  • βœ… Metadata - Repository info, stars, language, topics
  • βœ… Statistics - Word counts, file counts, documentation coverage

πŸš€ Quick Start

Input Example

```json
{
  "url": "https://github.com/pallets/flask",
  "maxFiles": 20,
  "extractCodeDocs": true
}
```

Output Example

```json
{
  "status": "success",
  "metadata": {
    "name": "flask",
    "description": "The Python micro framework",
    "language": "Python",
    "stars": 65000,
    "url": "https://github.com/pallets/flask"
  },
  "readme": {
    "filename": "README.md",
    "content": "...",
    "sections": [...],
    "word_count": 450
  },
  "documentation_files": [...],
  "code_documentation": [...],
  "combined_markdown": "...",
  "statistics": {
    "has_readme": true,
    "documentation_files_count": 23,
    "code_files_with_docs": 15,
    "total_words": 12500,
    "total_docstrings": 87
  }
}
```
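
To run the Actor programmatically, here is a minimal sketch using the Apify Python client (the API token and Actor ID are placeholders; use the ID shown on this Actor's page):

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Start the Actor and wait for it to finish
run = client.actor("<ACTOR_ID>").call(run_input={
    "url": "https://github.com/pallets/flask",
    "maxFiles": 20,
    "extractCodeDocs": True,
})

# Read the extraction results from the run's default dataset
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["statistics"])
```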

⭐ Key Features

πŸ“Š Comprehensive Extraction

  • Extracts README, docs folders, and code docstrings
  • Supports Python, JavaScript, TypeScript
  • Handles nested documentation structures
  • Preserves markdown formatting and sections

🎯 Structured Output

  • Clean JSON format ready for processing
  • Pydantic models for type safety
  • Combined markdown for easy reading
  • Detailed statistics and metadata

πŸ›‘οΈ Robust & Reliable

  • Proper error handling
  • Rate limit management
  • Partial success handling
  • Detailed logging

⚑ Fast & Efficient

  • Async operations
  • Smart file filtering
  • Configurable limits
  • Optimized API usage

πŸ’‘ Use Cases

πŸ€– RAG Systems

Extract clean documentation to feed retrieval and embedding pipelines:

```python
# Use the extracted docs as the corpus for your embedding pipeline
docs = result["combined_markdown"]
chunks = create_embeddings(docs)  # your own chunk-and-embed function
```
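
The `create_embeddings` call above stands in for your own pipeline. A minimal chunking sketch you could build on (`chunk_markdown` is a hypothetical helper using naive fixed-size word windows; no actual embedding model is invoked):

```python
def chunk_markdown(docs: str, chunk_words: int = 200) -> list[str]:
    # Split the combined markdown into fixed-size word chunks.
    # A real RAG pipeline would respect section boundaries and
    # embed each chunk with a model of your choice.
    words = docs.split()
    return [" ".join(words[i:i + chunk_words])
            for i in range(0, len(words), chunk_words)]

chunks = chunk_markdown(result["combined_markdown"])
```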

πŸ‘¨β€πŸ’» Developer Onboarding

Generate comprehensive repo overviews:

  • Understand project structure
  • Find key documentation
  • Identify important files

πŸ“ˆ Documentation Audits

Analyze documentation quality:

  • Check completeness
  • Identify gaps
  • Track improvements
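
For example, a quick pass over the `statistics` block can flag obvious gaps (a sketch with illustrative thresholds; tune them to your own standards):

```python
def audit(stats: dict) -> list[str]:
    # Flag common documentation gaps from the Actor's statistics output
    issues = []
    if not stats["has_readme"]:
        issues.append("missing README")
    if stats["documentation_files_count"] == 0:
        issues.append("no documentation files found")
    if stats["total_words"] < 500:
        issues.append("very thin documentation (<500 words)")
    if stats["total_docstrings"] == 0:
        issues.append("no code docstrings")
    return issues

print(audit(result["statistics"]))
```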

πŸ” Code Search

Enable semantic search over codebases:

  • Search through docstrings
  • Find relevant code examples
  • Understand APIs

πŸ”§ Configuration

For private repos and higher rate limits (5,000 vs 60 requests/hour):

  1. Go to https://github.com/settings/tokens
  2. Generate new token (classic)
  3. Select scopes: repo or public_repo
  4. Add to input: "githubToken": "ghp_your_token"
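
Putting it together, a token-authenticated input might look like this (following the flat input shape from the Quick Start example; the repository URL is a placeholder):

```json
{
  "url": "https://github.com/your-org/your-private-repo",
  "githubToken": "ghp_your_token",
  "maxFiles": 50
}
```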

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `maxFiles` | integer | 100 | Maximum files to process |
| `extractCodeDocs` | boolean | true | Extract code docstrings |

πŸ“Š Statistics Provided

  • has_readme: Whether README exists
  • documentation_files_count: Number of doc files found
  • code_files_with_docs: Number of code files with docstrings
  • total_words: Total documentation words
  • total_lines: Total documentation lines
  • total_docstrings: Total docstrings extracted
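
Since the Actor uses Pydantic models for type safety, the statistics block maps naturally onto a typed model. A sketch of what such a model might look like (field names are from the list above; the class name is an assumption, not the Actor's actual code):

```python
from pydantic import BaseModel

class DocumentationStatistics(BaseModel):
    # Fields mirror the statistics listed above
    has_readme: bool
    documentation_files_count: int
    code_files_with_docs: int
    total_words: int
    total_lines: int
    total_docstrings: int

stats = DocumentationStatistics(**result["statistics"])
```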

πŸ› οΈ Development

Local Testing

```bash
# Install dependencies
pip install -r requirements.txt

# Run locally
apify run
```

Project Structure

```text
.
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ main.py              # Actor entry point
β”‚   β”œβ”€β”€ extractor.py         # Extraction logic
β”‚   β”œβ”€β”€ models.py            # Data models
β”‚   └── utils.py             # Helper functions
β”œβ”€β”€ .actor/
β”‚   β”œβ”€β”€ actor.json           # Actor configuration
β”‚   └── input_schema.json    # Input schema
β”œβ”€β”€ requirements.txt         # Dependencies
└── Dockerfile               # Container config
```

🀝 Contributing

Issues and pull requests welcome! This is an active project participating in the Apify $1M Challenge.


πŸ“ License

Apache 2.0


πŸ’¬ Support

  • Questions? Join Apify Discord
  • Issues? Open a GitHub issue
  • Need help? Check Apify documentation

🎯 Coming Soon

  • πŸ”œ Documentation quality scoring (A-F grades)
  • πŸ”œ MCP server for AI agents
  • πŸ”œ Change detection and tracking
  • πŸ”œ Multi-repo comparison
  • πŸ”œ PDF documentation support
  • πŸ”œ Website documentation scraping

❓ FAQs

Q: Why did extraction fail?
A: Common reasons:

  • Repository doesn't exist (check the URL)
  • Repository is private (add a GitHub token)
  • Rate limit exceeded (add a token for 5,000 requests/hour)
  • Repository is too large (reduce maxFiles)

Q: What if I hit rate limits?
A: Without a token: 60 requests/hour. With a token: 5,000 requests/hour. Get a token at https://github.com/settings/tokens.
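
To check how many requests you have left, you can query GitHub's rate-limit endpoint directly (a sketch using the `requests` library; omit the header for anonymous limits):

```python
import requests

headers = {"Authorization": "Bearer ghp_your_token"}  # optional
resp = requests.get("https://api.github.com/rate_limit", headers=headers)
core = resp.json()["resources"]["core"]
print(f"{core['remaining']}/{core['limit']} requests remaining")
```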

Q: Can I extract from private repos?
A: Yes! Add your GitHub token in the input:

```json
{
  "source": {
    "url": "...",
    "githubToken": "ghp_your_token"
  }
}
```

Q: What's the maximum repository size?
A:

  • Max 500 files per run
  • Max 5 MB per file
  • Max 50 MB of total data
  • Adjust maxFiles if needed

Q: Why are some files skipped?
A: Files are skipped if they:

  • Are too large (>5 MB)
  • Can't be decoded (binary files)
  • Cause encoding errors

Q: How long does extraction take?
A:

  • Small repos (<100 files): 2-5 seconds
  • Medium repos (100-500 files): 10-30 seconds
  • Large repos (500+ files): 30-60 seconds
  • Max timeout: 4 minutes


Built with ❀️ for the Apify $1M Challenge

⭐ If you find this useful, please star the Actor!