Universal RSS Feed Parser | $1/1K | Any RSS/Atom/RDF

Pricing

from $1.00 / 1,000 feed items extracted


Parse any RSS, Atom, or RDF feed. Extract articles with title, link, date, author, content, categories. Perfect for AI agents, Make/n8n/Zapier workflows.


Developer

Apivault Labs

Maintained by Community

Actor stats

  • Bookmarked: 0
  • Total users: 2
  • Monthly active users: 1
  • Last modified: 2 days ago


📰 Universal RSS Feed Parser | $1/1K | Any RSS/Atom/RDF Feed

Parse any RSS 2.0, Atom, or RDF feed and get structured article data: title, link, author, date, content, categories, media. Works on millions of blogs, news sites, podcasts, Substacks, Medium publications, YouTube channels.

✨ Key Features

  • 📋 3 formats supported: RSS 2.0, Atom, RDF (RSS 1.0)
  • 🔁 Batch feeds — fetch 100s of feeds in parallel
  • 🧹 HTML stripping — clean plain text for AI / search indexing
  • 🏷️ Full metadata — author, date, categories, enclosures (podcast MP3s)
  • 🖼️ Media extraction — thumbnails, enclosures, media:content
  • 📡 Podcast-aware — handles iTunes namespace
  • 🔒 Zero API keys
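
The HTML-stripping feature above can be approximated with nothing but the standard library. This is a minimal sketch of the general technique (the actor's actual `stripHtml` implementation may differ):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text nodes; convert_charrefs=True decodes entities like &amp; for us."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_html(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    # Join text nodes, then collapse runs of whitespace left behind by tags
    return " ".join("".join(parser.parts).split())

print(strip_html("<p>Hello <b>world</b> &amp; friends</p>"))  # → Hello world & friends
```

Tag-stripping with `html.parser` keeps cold starts fast because no third-party HTML library is needed.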

Input

Single feed

```json
{
  "feedUrls": ["https://hnrss.org/frontpage"]
}
```

Bulk news aggregator

```json
{
  "feedUrls": [
    "https://techcrunch.com/feed/",
    "https://www.theverge.com/rss/index.xml",
    "https://feeds.arstechnica.com/arstechnica/index"
  ],
  "maxItems": 50,
  "stripHtml": true
}
```

Medium / Substack publications

```json
{
  "feedUrls": [
    "https://medium.com/feed/@user",
    "https://example.substack.com/feed"
  ]
}
```

YouTube channel as RSS

```json
{
  "feedUrls": [
    "https://www.youtube.com/feeds/videos.xml?channel_id=UCxyz..."
  ]
}
```

Input Parameters

| Field | Type | Required | Description |
|---|---|---|---|
| feedUrls | string[] | Yes | RSS/Atom/RDF feed URLs |
| maxItems | int | No | Limit per feed (0 = all). Default: 100 |
| stripHtml | bool | No | Plain text (`true`) or preserve HTML (`false`). Default: `true` |
| extractFeedMetadata | bool | No | Include feed title/description on each item |
| maxConcurrency | int | No | Parallel feed fetches (default: 5) |
| timeout | int | No | HTTP timeout per feed (default: 20) |

Output

One dataset row per item:

```json
{
  "success": true,
  "feedUrl": "https://hnrss.org/frontpage",
  "feedTitle": "Hacker News: Front Page",
  "feedLink": "https://news.ycombinator.com/",
  "feedDescription": "Hacker News RSS",
  "feedLanguage": "",
  "title": "Show HN: I built a thing",
  "link": "https://news.ycombinator.com/item?id=...",
  "guid": "https://news.ycombinator.com/item?id=...",
  "pubDate": "Sun, 11 May 2026 10:00:00 +0000",
  "author": "username",
  "description": "Short summary here",
  "content": "Full article content...",
  "categories": "Show HN, Software",
  "comments": "https://news.ycombinator.com/item?id=..."
}
```
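
Downstream code can consume these rows as plain dicts. A sketch with hypothetical rows in the shape above, filtering by category and sorting by the RFC 2822 `pubDate` using only the standard library:

```python
from email.utils import parsedate_to_datetime

# Hypothetical dataset rows matching the output shape shown above
items = [
    {"title": "Show HN: I built a thing",
     "pubDate": "Sun, 11 May 2026 10:00:00 +0000",
     "categories": "Show HN, Software"},
    {"title": "Ask HN: feedback wanted",
     "pubDate": "Sat, 10 May 2026 09:00:00 +0000",
     "categories": "Ask HN"},
]

# categories is a comma-separated string, so split it before matching
show_hn = [i for i in items
           if "Show HN" in [c.strip() for c in i["categories"].split(",")]]

# Newest first: parsedate_to_datetime handles the RFC 2822 pubDate format
show_hn.sort(key=lambda i: parsedate_to_datetime(i["pubDate"]), reverse=True)
print([i["title"] for i in show_hn])  # → ['Show HN: I built a thing']
```

Note that `categories` arrives as one string, not a list, so splitting on commas is needed before filtering.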

Use Cases

🤖 AI Agents / LLM Pipelines

  • Feed fresh content into GPT/Claude for summarization
  • RAG systems monitoring industry news
  • Automated briefing generation

📊 News Aggregation

  • Build a niche news site pulling from 50 sources
  • Slack/Discord digest bots
  • Daily email digests (Make/Zapier + this actor)

🎙️ Podcast Monitoring

  • Extract podcast episodes with audio URLs (enclosures)
  • Feed transcription pipelines
  • Build podcast discovery sites

📰 Medium / Substack Tracking

  • Follow competitor newsletters automatically
  • Aggregate industry thought leaders

🎬 YouTube Channel Tracking

  • YouTube offers RSS feeds per channel (/feeds/videos.xml?channel_id=)
  • Get new video notifications without YouTube API quotas

🔍 SEO / Content Research

  • Analyze competitor publishing frequency
  • Content gap analysis

Pricing

  • $0.001 per item ($1 per 1,000 items)
  • Pay only for items actually parsed
  • Broken feeds skip cleanly — no charge

How it works

  1. HTTP GET the feed URL
  2. Detect format (RSS 2.0 / Atom / RDF) from root element
  3. Parse items/entries with namespace-aware stdlib XML
  4. Extract title, link, date, author, content, categories, media
  5. Optionally strip HTML from content for clean text

Uses Python's built-in xml.etree.ElementTree — no third-party dependencies, fast cold starts.
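
Steps 2–3 above can be sketched with that same stdlib parser. The sample feed and function below are illustrative, not the actor's actual code:

```python
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example</title>
  <item><title>First post</title><link>https://example.com/1</link></item>
</channel></rss>"""

def detect_format(root):
    # Root tag decides the dialect; strip any XML namespace first
    tag = root.tag.split("}")[-1].lower()
    if tag == "rss":
        return "rss2"   # <rss> root
    if tag == "feed":
        return "atom"   # Atom's <feed> root
    if tag == "rdf":
        return "rdf"    # RSS 1.0's <rdf:RDF> root
    return "unknown"

root = ET.fromstring(SAMPLE)
fmt = detect_format(root)
# RSS 2.0 nests items under channel/item; Atom would use <entry> instead
items = root.findall("./channel/item") if fmt == "rss2" else []
print(fmt, [i.findtext("title") for i in items])  # → rss2 ['First post']
```

Dispatching on the root element is what lets one parser handle all three dialects without per-feed configuration.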

Notes

  • Works on any public feed — no auth
  • Some feeds rate-limit aggressive re-polling — the actor's defaults are respectful
  • Malformed XML feeds return an error per feed, other feeds continue
  • Some feeds only return last 10-20 items regardless of maxItems

Pro tips

  • Automation: Apify Scheduler → every hour → Zapier → Slack = real-time news bot
  • YouTube RSS: use /feeds/videos.xml?channel_id=<id> for any YouTube channel (free, no API)
  • Substack: append /feed to any Substack URL
  • Medium: medium.com/feed/@user or medium.com/feed/publication-name
  • Podcasts: iTunes podcast URLs have RSS feeds — find them via https://podcastindex.org
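
The URL patterns in these tips can be wrapped in tiny helpers. These function names are hypothetical, purely for illustration:

```python
def youtube_feed(channel_id: str) -> str:
    """YouTube publishes a free RSS feed per channel — no API key needed."""
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

def substack_feed(base_url: str) -> str:
    """Any Substack exposes its feed at /feed."""
    return base_url.rstrip("/") + "/feed"

def medium_feed(user: str) -> str:
    """Medium user feeds live under /feed/@user."""
    return f"https://medium.com/feed/@{user}"

print(substack_feed("https://example.substack.com"))
# → https://example.substack.com/feed
```

The resulting URLs go straight into the `feedUrls` input array.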