Universal RSS Feed Parser | $1/1K | Any RSS/Atom/RDF
Pricing
from $1.00 / 1,000 feed items extracted
Parse any RSS, Atom, or RDF feed. Extract articles with title, link, date, author, content, categories. Perfect for AI agents, Make/n8n/Zapier workflows.
Developer
Apivault Labs
📰 Universal RSS Feed Parser | $1/1K | Any RSS/Atom/RDF Feed
Parse any RSS 2.0, Atom, or RDF feed and get structured article data: title, link, author, date, content, categories, media. Works on millions of blogs, news sites, podcasts, Substacks, Medium publications, YouTube channels.
✨ Key Features
- 📋 3 formats supported: RSS 2.0, Atom, RDF (RSS 1.0)
- 🔁 Batch feeds — fetch 100s of feeds in parallel
- 🧹 HTML stripping — clean plain text for AI / search indexing
- 🏷️ Full metadata — author, date, categories, enclosures (podcast MP3s)
- 🖼️ Media extraction — thumbnails, enclosures, media:content
- 📡 Podcast-aware — handles iTunes namespace
- 🔒 Zero API keys
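The `stripHtml` behavior can be approximated entirely with Python's stdlib `html.parser`; this is a minimal sketch of the idea, not the actor's actual code:

```python
from html.parser import HTMLParser
from io import StringIO

class _TextExtractor(HTMLParser):
    """Collects only text nodes, discarding all tags and attributes."""
    def __init__(self):
        super().__init__()
        self._buf = StringIO()

    def handle_data(self, data):
        self._buf.write(data)

    def text(self):
        return self._buf.getvalue()

def strip_html(fragment: str) -> str:
    """Return the plain text of an HTML fragment with whitespace collapsed."""
    parser = _TextExtractor()
    parser.feed(fragment)
    return " ".join(parser.text().split())

print(strip_html("<p>Hello <b>world</b>!</p>"))  # → Hello world!
```

Using a real parser rather than a regex keeps entities and nested tags from leaking into the indexed text.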
Input

Single feed

```json
{ "feedUrls": ["https://hnrss.org/frontpage"] }
```

Bulk news aggregator

```json
{
  "feedUrls": [
    "https://techcrunch.com/feed/",
    "https://www.theverge.com/rss/index.xml",
    "https://feeds.arstechnica.com/arstechnica/index"
  ],
  "maxItems": 50,
  "stripHtml": true
}
```

Medium / Substack publications

```json
{
  "feedUrls": [
    "https://medium.com/feed/@user",
    "https://example.substack.com/feed"
  ]
}
```

YouTube channel as RSS

```json
{ "feedUrls": ["https://www.youtube.com/feeds/videos.xml?channel_id=UCxyz..."] }
```
Input Parameters
| Field | Type | Required | Description |
|---|---|---|---|
| feedUrls | string[] | ✅ | RSS/Atom/RDF feed URLs |
| maxItems | int | ❌ | Limit per feed (0 = all). Default: 100 |
| stripHtml | bool | ❌ | Plain text (true) or preserve HTML (false). Default: true |
| extractFeedMetadata | bool | ❌ | Include feed title/description on each item |
| maxConcurrency | int | ❌ | Parallel feed fetches (default: 5) |
| timeout | int | ❌ | HTTP timeout per feed (default: 20) |
Output
One dataset row per item:
```json
{
  "success": true,
  "feedUrl": "https://hnrss.org/frontpage",
  "feedTitle": "Hacker News: Front Page",
  "feedLink": "https://news.ycombinator.com/",
  "feedDescription": "Hacker News RSS",
  "feedLanguage": "",
  "title": "Show HN: I built a thing",
  "link": "https://news.ycombinator.com/item?id=...",
  "guid": "https://news.ycombinator.com/item?id=...",
  "pubDate": "Sun, 11 May 2026 10:00:00 +0000",
  "author": "username",
  "description": "Short summary here",
  "content": "Full article content...",
  "categories": "Show HN, Software",
  "comments": "https://news.ycombinator.com/item?id=..."
}
```
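The `pubDate` field is an RFC 822 date string, so downstream code can parse it with Python's stdlib and sort or filter items by recency. A sketch using made-up rows shaped like the sample above:

```python
from email.utils import parsedate_to_datetime

# Hypothetical dataset rows, shaped like the output sample above.
items = [
    {"title": "Show HN: I built a thing", "pubDate": "Sun, 11 May 2026 10:00:00 +0000"},
    {"title": "Older post", "pubDate": "Sat, 10 May 2026 08:30:00 +0000"},
]

# parsedate_to_datetime turns the RFC 822 string into a timezone-aware datetime,
# which makes the rows directly sortable (newest first here).
items.sort(key=lambda it: parsedate_to_datetime(it["pubDate"]), reverse=True)
print(items[0]["title"])  # → Show HN: I built a thing
```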
Use Cases
🤖 AI Agents / LLM Pipelines
- Feed fresh content into GPT/Claude for summarization
- RAG systems monitoring industry news
- Automated briefing generation
📊 News Aggregation
- Build a niche news site pulling from 50 sources
- Slack/Discord digest bots
- Daily email digests (Make/Zapier + this actor)
🎙️ Podcast Monitoring
- Extract podcast episodes with audio URLs (enclosures)
- Feed transcription pipelines
- Build podcast discovery sites
📰 Medium / Substack Tracking
- Follow competitor newsletters automatically
- Aggregate industry thought leaders
🎬 YouTube Channel Tracking
- YouTube offers RSS feeds per channel (`/feeds/videos.xml?channel_id=`)
- Get new video notifications without YouTube API quotas
🔍 SEO / Content Research
- Analyze competitor publishing frequency
- Content gap analysis
Pricing
- $0.001 per item ($1 per 1,000 items)
- Pay only for items actually parsed
- Broken feeds skip cleanly — no charge
How it works
- HTTP GET the feed URL
- Detect format (RSS 2.0 / Atom / RDF) from root element
- Parse items/entries with namespace-aware stdlib XML
- Extract title, link, date, author, content, categories, media
- Optionally strip HTML from content for clean text
Uses Python's built-in xml.etree.ElementTree — no third-party dependencies, fast cold starts.
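The detect-then-parse flow above can be sketched with `xml.etree.ElementTree`. This is a simplified illustration of the technique (root-element sniffing plus namespace-aware lookups), not the actor's exact implementation:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"
RSS1_NS = "{http://purl.org/rss/1.0/}"

def parse_feed(xml_text: str):
    """Detect RSS 2.0 / Atom / RDF from the root element and extract title+link."""
    root = ET.fromstring(xml_text)
    tag = root.tag.split("}")[-1]  # drop any namespace prefix for detection
    if tag == "rss":   # RSS 2.0: <rss><channel><item>...
        return [{"title": i.findtext("title"), "link": i.findtext("link")}
                for i in root.findall("./channel/item")]
    if tag == "feed":  # Atom: <feed><entry>... (namespaced, link is an attribute)
        out = []
        for e in root.findall(f"{ATOM_NS}entry"):
            link = e.find(f"{ATOM_NS}link")
            out.append({"title": e.findtext(f"{ATOM_NS}title"),
                        "link": link.get("href") if link is not None else None})
        return out
    if tag == "RDF":   # RDF (RSS 1.0): items are siblings of <channel> at the root
        return [{"title": i.findtext(f"{RSS1_NS}title"), "link": i.findtext(f"{RSS1_NS}link")}
                for i in root.findall(f"{RSS1_NS}item")]
    raise ValueError(f"Unrecognized feed root: {tag}")

sample = """<rss version="2.0"><channel><title>Demo</title>
<item><title>Hello</title><link>https://example.com/1</link></item>
</channel></rss>"""
print(parse_feed(sample))  # → [{'title': 'Hello', 'link': 'https://example.com/1'}]
```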
Notes
- Works on any public feed — no auth
- Some feeds rate-limit aggressive re-polling (respectful defaults)
- Malformed XML feeds return an error per feed, other feeds continue
- Some feeds only return the last 10-20 items regardless of `maxItems`
Pro tips
- Automation: Apify Scheduler → every hour → Zapier → Slack = real-time news bot
- YouTube RSS: use `/feeds/videos.xml?channel_id=<id>` for any YouTube channel (free, no API)
- Substack: append `/feed` to any Substack URL
- Medium: `medium.com/feed/@user` or `medium.com/feed/publication-name`
- Podcasts: iTunes podcast URLs have RSS feeds — find them via https://podcastindex.org
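The URL patterns in these tips are easy to generate programmatically; a small helper sketch (the channel ID, publication, and username below are made-up examples):

```python
def youtube_feed(channel_id: str) -> str:
    # YouTube serves a per-channel feed at this path, no API key required.
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

def substack_feed(publication_url: str) -> str:
    # Any Substack publication exposes RSS at /feed.
    return publication_url.rstrip("/") + "/feed"

def medium_feed(user: str) -> str:
    # Medium user feeds live under medium.com/feed/@user.
    return f"https://medium.com/feed/@{user}"

print(substack_feed("https://example.substack.com/"))  # → https://example.substack.com/feed
```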