RSS Feed Reader
Pricing: Pay per event
Developer: Stas Persiianenko
Parse RSS and Atom feeds into structured data. Extract titles, links, dates, authors, categories, and content from any feed URL.
What does RSS Feed Reader do?
This actor fetches and parses RSS 2.0, Atom, and RSS 1.0 (RDF) feeds into clean, structured JSON data. It extracts article titles, links, publication dates, authors, categories, descriptions, and full content from any number of feed URLs. Use it to monitor news sources, aggregate blog posts, build content pipelines, or track competitor updates across the web.
Who is it for?
- Content marketers -- monitoring competitor blogs and industry news feeds
- Media analysts -- aggregating news from multiple RSS sources for trend analysis
- Developers -- building automated content pipelines from RSS feed data
- Journalists -- tracking breaking news across multiple publication feeds
- Business intelligence teams -- monitoring industry publications for competitive insights
Use cases
- Content marketer aggregating blog posts and news articles from industry sources for a daily digest or newsletter
- Competitive analyst monitoring competitor blogs, press releases, and product announcements via their RSS feeds
- Data engineer building a content pipeline that ingests articles from multiple news sources into a data warehouse
- Community manager tracking subreddit feeds, Hacker News, or forum RSS feeds for brand mentions and trending topics
- Researcher collecting academic publication feeds or government announcement feeds for analysis
Why use RSS Feed Reader?
- Multiple feed formats -- supports RSS 2.0, Atom, and RSS 1.0 (RDF) feeds out of the box
- Batch processing -- parse dozens of feed URLs in a single run instead of fetching them one at a time
- Rich metadata extraction -- gets title, link, description, full content, author, categories, publication date, GUID, and image URL for each item
- Configurable item limits -- set `maxItemsPerFeed` from 1 to 1,000 to control how many items you extract per feed
- Structured JSON output -- every feed item is a clean JSON object ready for downstream processing, storage, or integration
- Pay-per-event pricing -- only $0.001 per feed item extracted, plus a one-time start fee
Input parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `feeds` | string[] | Yes | -- | List of RSS or Atom feed URLs to parse |
| `maxItemsPerFeed` | integer | No | 50 | Maximum items to extract per feed (1--1000) |
```json
{
  "feeds": [
    "https://feeds.bbci.co.uk/news/rss.xml",
    "https://hnrss.org/frontpage",
    "https://www.reddit.com/r/technology/.rss"
  ],
  "maxItemsPerFeed": 50
}
```
Output example
Each feed item produces a result object with the following fields:
| Field | Type | Description |
|---|---|---|
| `feedUrl` | string | The source feed URL |
| `feedTitle` | string | Title of the feed channel |
| `feedType` | string | Feed format: rss, atom, or rdf |
| `title` | string | Article or item title |
| `link` | string | URL to the full article |
| `description` | string | Short summary or excerpt |
| `content` | string | Full article content (if available) |
| `author` | string | Author name |
| `publishedAt` | string | Publication date in ISO 8601 format |
| `updatedAt` | string | Last updated date (if available) |
| `categories` | array | List of category or tag strings |
| `guid` | string | Unique identifier for the item |
| `imageUrl` | string | URL of the associated image (if available) |
| `error` | string | Error message if the feed could not be fetched or parsed |
| `fetchedAt` | string | Timestamp when the feed was fetched |
```json
{
  "feedUrl": "https://hnrss.org/frontpage",
  "feedTitle": "Hacker News: Front Page",
  "feedType": "atom",
  "title": "Show HN: An open-source project",
  "link": "https://example.com/article",
  "description": "A brief description of the article",
  "content": null,
  "author": "username",
  "publishedAt": "2026-03-01T10:30:00.000Z",
  "updatedAt": null,
  "categories": ["technology"],
  "guid": "https://news.ycombinator.com/item?id=12345",
  "imageUrl": null,
  "error": null,
  "fetchedAt": "2026-03-01T12:00:00.000Z"
}
```
How much does it cost to parse RSS feeds?
| Event | Price | Description |
|---|---|---|
| Start | $0.035 | One-time per run |
| Item parsed | $0.001 | Per feed item extracted |
Example costs:
- 3 feeds x 20 items each = $0.035 + (60 x $0.001) = $0.095
- 10 feeds x 50 items each = $0.035 + (500 x $0.001) = $0.535
- 1 feed x 100 items = $0.035 + (100 x $0.001) = $0.135
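The pricing follows a simple linear formula, so you can estimate costs before a run. A minimal sketch in plain Python (`estimate_cost` is an illustrative helper, not part of the actor; prices are taken from the table above):

```python
def estimate_cost(items_per_feed: list[int],
                  start_fee: float = 0.035,
                  per_item: float = 0.001) -> float:
    """Estimate the cost of one run: a one-time start fee plus a fee per extracted item."""
    return start_fee + sum(items_per_feed) * per_item

# Matches the worked examples above:
print(round(estimate_cost([20, 20, 20]), 3))  # 3 feeds x 20 items -> 0.095
print(round(estimate_cost([50] * 10), 3))     # 10 feeds x 50 items -> 0.535
```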
How to read RSS feeds with Apify
1. Go to the RSS Feed Reader on Apify Store.
2. Enter one or more RSS or Atom feed URLs in the Feeds field.
3. Set the Max items per feed limit (default is 50).
4. Click Start and wait for the run to finish.
5. Download your structured feed data in JSON, CSV, or Excel format.
Using the Apify API
You can start RSS Feed Reader programmatically from your own applications using the Apify API. Below are examples in Node.js and Python.
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });

const run = await client.actor('automation-lab/rss-feed-reader').call({
    feeds: ['https://feeds.bbci.co.uk/news/rss.xml'],
    maxItemsPerFeed: 20,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_TOKEN')

run = client.actor('automation-lab/rss-feed-reader').call(run_input={
    'feeds': ['https://feeds.bbci.co.uk/news/rss.xml'],
    'maxItemsPerFeed': 20,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(items)
```
cURL
```bash
curl -X POST "https://api.apify.com/v2/acts/automation-lab~rss-feed-reader/runs?token=YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"feeds": ["https://feeds.bbci.co.uk/news/rss.xml"], "maxItemsPerFeed": 20}'
```
Use with Claude AI (MCP)
This actor is available as a tool in Claude AI through the Model Context Protocol (MCP). You can add it to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client to let Claude parse RSS feeds directly in your conversation.
Setup
Add the Apify MCP server to your MCP client configuration:
Claude Desktop (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/mcp-apify"],
      "env": {
        "APIFY_TOKEN": "your-apify-api-token"
      }
    }
  }
}
```
Claude Code CLI:
```shell
claude mcp add apify -- npx -y @anthropic-ai/mcp-apify
```
Set your Apify token as an environment variable or pass it directly.
Example prompts
Once configured, you can ask Claude things like:
- "Parse this RSS feed and get the latest 20 articles: https://hnrss.org/frontpage"
- "Monitor these news feeds for new posts: BBC News, TechCrunch, and Hacker News"
- "Get the latest articles from https://feeds.bbci.co.uk/news/rss.xml and summarize the top 5"
Integrations
RSS Feed Reader works with the full Apify integration ecosystem. Connect it to Make (formerly Integromat), Zapier, n8n, or Slack to receive notifications when new articles are published. Export results directly to Google Sheets, Amazon S3, or any webhook endpoint. You can also schedule recurring runs on the Apify platform to poll feeds at regular intervals (e.g., every hour) and build a continuously updated content database.
Tips and best practices
- Schedule regular runs -- set up a scheduled run (e.g., hourly or daily) to continuously monitor feeds. Use the `fetchedAt` timestamp to identify new items since your last run.
- Use `maxItemsPerFeed` to control costs -- if you only need the latest headlines, set this to 10 or 20 instead of the default 50 to reduce the number of billed items.
- Combine multiple source types -- mix news feeds, blog feeds, Reddit RSS, and Hacker News feeds in a single run to build a comprehensive content aggregation pipeline.
- Check the `error` field -- if a feed URL is unreachable or returns invalid XML, the `error` field will contain a descriptive message. Use this to monitor feed health.
- Deduplicate by `guid` -- when polling feeds on a schedule, use the `guid` field to identify and skip items you have already processed in previous runs.
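The deduplication tip can be sketched as a small helper. This is a minimal illustration; the `seen_guids` set is a hypothetical store that in practice you would persist between runs (for example in an Apify key-value store or a database):

```python
def filter_new_items(items: list[dict], seen_guids: set[str]) -> list[dict]:
    """Return only items whose guid has not been seen in a previous run,
    recording the new guids so the next run skips them."""
    new_items = []
    for item in items:
        # Fall back to the link when a feed omits guid
        guid = item.get("guid") or item.get("link")
        if guid and guid not in seen_guids:
            seen_guids.add(guid)
            new_items.append(item)
    return new_items
```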
Legality
This actor fetches publicly accessible web content. RSS and Atom feeds are published specifically for automated consumption, but you should still respect robots.txt directives and rate limits when fetching feeds from third-party servers. If feed items contain personal data, ensure compliance with applicable privacy regulations.
FAQ
What feed formats are supported?
The actor supports RSS 2.0, Atom (1.0), and RSS 1.0 (RDF) feeds. These cover the vast majority of feeds on the web. JSON Feed format is not currently supported.
Can I parse password-protected or authenticated feeds?
The actor fetches feeds over HTTP/HTTPS without authentication. If your feed requires login credentials or API keys, it will not be accessible. Consider using a proxy or pre-fetching the feed content.
How often should I schedule runs for monitoring?
It depends on how frequently your target feeds update. For major news sites, hourly runs capture most updates. For blogs or low-volume feeds, daily runs are usually sufficient. Use the publishedAt field to filter for new items.
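Filtering for new items by `publishedAt` needs only the standard library, since dates are in ISO 8601 format as shown in the output example. A minimal sketch (the cutoff would typically be the timestamp of your previous scheduled run):

```python
from datetime import datetime, timezone

def items_since(items: list[dict], cutoff: datetime) -> list[dict]:
    """Keep only items published after the cutoff; items without a date are skipped."""
    result = []
    for item in items:
        published = item.get("publishedAt")
        if not published:
            continue
        # Normalize the trailing "Z" so fromisoformat accepts it on older Pythons
        when = datetime.fromisoformat(published.replace("Z", "+00:00"))
        if when > cutoff:
            result.append(item)
    return result
```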
What happens if a feed URL is temporarily down?
The actor will attempt to fetch the feed and, if it fails, will produce a result with the error field describing the issue (e.g., timeout, HTTP 404, or invalid XML). Other feeds in the same batch will continue processing normally.
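Because failed feeds still produce a result row with the `error` field set, you can separate successes from failures in post-processing. A minimal sketch of such a split:

```python
def split_results(items: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split dataset items into (successful, failed) based on the error field."""
    ok = [i for i in items if not i.get("error")]
    failed = [i for i in items if i.get("error")]
    return ok, failed
```

The `failed` list is a convenient input for feed-health monitoring, e.g. alerting when a source starts returning HTTP 404 or invalid XML.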
The actor returns an error for a feed URL. How do I troubleshoot?
Check that the URL points directly to an RSS or Atom XML file, not an HTML page. Many websites have moved their feed URLs over time. Try appending /feed, /rss, or /rss.xml to the site URL if the original feed link no longer works. Also check that the feed is publicly accessible without authentication.
Why are some fields like content or author null?
Not all feeds include every field. Some publishers only provide a short description and omit full content. Similarly, author is optional in RSS 2.0 and may not be set by the feed publisher. The actor extracts whatever the feed provides.
Can I get the full article content from feeds?
It depends on the feed. Some feeds include full article content in the content or description fields, while others only include a short excerpt. The actor extracts whatever the feed provides. For full content, combine this actor with a web scraper to fetch the linked page.
Other SEO tools
- Sitemap URL Extractor -- Extract all URLs from XML sitemaps
- Security.txt Checker -- Check websites for security.txt compliance
- SEO Title & Description Checker -- Validate page titles and meta descriptions
- Broken Link Checker -- Find broken links on any website
- Heading Structure Checker -- Validate H1-H6 heading hierarchy
- Keyword Density Analyzer -- Analyze keyword usage on web pages