X Feed Monitor

Monitor X/Twitter in real time using Grok's x_search API. Collect posts by keyword, hashtag, or @mention with engagement metrics and optional sentiment analysis. Returns post IDs, timestamps, authors, and URLs. Ideal for brand monitoring, trend tracking, and social listening.

Pricing: Pay per usage
Rating: 0.0 (0 reviews)
Developer: Quadruped (Maintained by Community)
Actor stats: 0 bookmarks · 2 total users · 1 monthly active user · last modified 11 hours ago

Production-grade X/Twitter monitoring feed. Stable schema. Incremental sync. Not a scraper - infrastructure.

Features

  • Canonical versioned schema for all posts (v1.0.0)
  • Feed mode with cursor persistence (incremental pulls)
  • Collect vs analyze separation - collection works even if analysis fails
  • Hard cost limits - abort before exceeding budget
  • Webhook notifications on completion
  • Multiple export formats (dataset, JSON, CSV)
  • Partial success allowed - one query failing doesn't kill the run

Quick Start

One-time fetch

{
  "queries": ["@anthropic", "#AI"],
  "mode": "collect",
  "maxResults": 50,
  "includeMetrics": true
}

Scheduled feed (incremental sync)

Run 1 (initial):

{
  "queries": ["brandname"],
  "persistState": true
}

Result: Fetches latest 50 posts, saves cursor to state.

Run 2 (scheduled, 1 hour later):

{
  "queries": ["brandname"],
  "persistState": true
}

Result: Only fetches NEW posts since Run 1 (uses stored cursor).

Run 3+: Continues incrementally.
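The cursor behavior across runs can be sketched as follows. This is illustrative only, not the actor's actual implementation: the `incremental_fetch` helper and the numeric-string `post_id` comparison are assumptions.

```python
# Sketch of incremental sync: a per-query cursor (newest post_id seen) is
# persisted and used to filter out already-collected posts on later runs.

def incremental_fetch(posts, state, query):
    """Return only posts newer than the stored cursor, then advance the cursor.

    posts: list of dicts with a numeric-string "post_id", newest first.
    state: dict mapping query -> last-seen post_id (the persisted cursor).
    """
    cursor = state.get(query)
    new_posts = [p for p in posts
                 if cursor is None or int(p["post_id"]) > int(cursor)]
    if new_posts:
        state[query] = max(new_posts, key=lambda p: int(p["post_id"]))["post_id"]
    return new_posts

# Run 1: no cursor yet, so everything comes back and the cursor is saved.
state = {}
run1 = incremental_fetch([{"post_id": "103"}, {"post_id": "101"}], state, "brandname")

# Run 2: only posts newer than the stored cursor come back.
run2 = incremental_fetch([{"post_id": "105"}, {"post_id": "103"}], state, "brandname")
```

In the actor, `state` lives in the KV store under `stateKey`, so this survives between scheduled runs.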

With sentiment analysis

{
  "queries": ["@yourbrand"],
  "mode": "both",
  "analysisLevel": "light",
  "persistState": true
}

Input Reference

| Field | Type | Default | Description |
|---|---|---|---|
| queries | array | required | Keywords, hashtags, or accounts to monitor |
| mode | string | "collect" | "collect", "analyze", or "both" |
| maxResults | number | 50 | Hard cap per query (max 100) |
| maxPages | number | 1 | Pagination limit per query |
| maxCostUsd | number | - | Hard abort if projected cost exceeds this |
| maxApiCalls | number | - | Alternative hard cap on API calls |
| sinceId | string | - | Only fetch posts newer than this ID |
| sinceMinutes | number | - | Only fetch posts from the last N minutes |
| includeReplies | boolean | true | Include reply posts |
| includeReposts | boolean | true | Include repost/retweet posts |
| includeQuotes | boolean | true | Include quote posts |
| analysisLevel | string | "none" | "none", "light", or "full" |
| includeMetrics | boolean | true | Include engagement metrics |
| persistState | boolean | true | Save cursor for incremental runs |
| stateKey | string | "STATE" | KV store key for state |
| dedupe | boolean | true | Deduplicate by post_id within run |
| estimateOnly | boolean | false | Output cost estimate without fetching |
| webhook | string | - | URL to POST on completion |
| outputFormat | string | "dataset" | Additional export format |
| grokApiKey | string | - | xAI API key (or set GROK_API_KEY env var) |

Query format

Queries can be strings or objects:

{
  "queries": [
    "@anthropic",
    "#AI",
    {
      "q": "machine learning",
      "type": "keyword",
      "label": "ML mentions",
      "limit": 25
    }
  ]
}
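Because queries mix plain strings and objects, consumers generating inputs programmatically usually normalize them to one shape first. A minimal sketch, assuming the defaults from the input reference; the type inference from the leading character (`@` → account, `#` → hashtag) is an assumption, not documented actor behavior:

```python
# Normalize mixed string/object queries into canonical query objects.

def normalize_query(q, default_limit=50):
    if isinstance(q, str):
        qtype = ("account" if q.startswith("@")
                 else "hashtag" if q.startswith("#")
                 else "keyword")
        return {"q": q, "type": qtype, "label": None, "limit": default_limit}
    # Object form: fill in any omitted fields with defaults.
    return {"q": q["q"],
            "type": q.get("type", "keyword"),
            "label": q.get("label"),
            "limit": q.get("limit", default_limit)}

queries = ["@anthropic", "#AI",
           {"q": "machine learning", "type": "keyword",
            "label": "ML mentions", "limit": 25}]
normalized = [normalize_query(q) for q in queries]
```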

Output Schema (v1.0.0)

Every post in the dataset has this structure:

{
  schema_version: "1.0.0",
  platform: "x",
  post_id: string,
  url: string,
  created_at: string,        // ISO 8601
  collected_at: string,      // ISO 8601
  author_handle: string,
  author_id: string | null,
  post_type: "post" | "reply" | "repost" | "quote",
  text: string,
  lang: string | null,
  metrics: {                 // null if includeMetrics: false
    likes: number,
    reposts: number,
    replies: number,
    views: number | null
  } | null,
  analysis: {                // null if mode: "collect"
    level: "light" | "full",
    model: string,
    prompt_version: string,
    sentiment: "positive" | "negative" | "neutral" | "mixed",
    sentiment_score?: number,
    topics: string[],
    summary: string | null,
    risk_flags: string[]
  } | null,
  query_context: {
    query: string,
    query_hash: string,
    label: string | null,
    tool: string,
    run_id: string,
    cursor: string | null
  },
  source: {
    provider: "grok",
    endpoint: string,
    api_version: string
  }
}

Important: metrics and analysis fields are always present. They are null when not populated, never omitted.
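Because both fields are always present, downstream code can branch on `null` instead of checking for key existence. A small consumer sketch (the `summarize_post` helper and its fallback values are illustrative, not part of the actor):

```python
# Consume a dataset item, tolerating null metrics/analysis as the
# schema guarantees: the keys always exist, but may hold None.

def summarize_post(post):
    m = post["metrics"]    # always present; None if includeMetrics: false
    a = post["analysis"]   # always present; None if mode: "collect"
    return {
        "id": post["post_id"],
        "likes": m["likes"] if m else 0,
        "sentiment": a["sentiment"] if a else "unscored",
    }

item = {"post_id": "42", "metrics": None, "analysis": None}
summary = summarize_post(item)
```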

Cost Expectations

| Scenario | Est. API calls | Est. cost |
|---|---|---|
| 5 queries × 50 posts, collect only | 5 | ~$0.06 |
| 5 queries × 50 posts, with light analysis | 10 | ~$0.15 |
| 10 queries × 100 posts, full analysis | 30 | ~$0.75 |

Use estimateOnly: true to get a cost estimate without making API calls.
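The `maxCostUsd` / `maxApiCalls` guard can be sketched like this. The flat per-call rate is a made-up constant back-of-envelope from the collect-only row above; actual pricing depends on xAI's API, and the actor's real accounting may differ:

```python
# Illustrative budget guard: abort before fetching if projected cost or
# call count would exceed a hard limit (mirrors maxCostUsd / maxApiCalls).
COST_PER_CALL_USD = 0.012  # assumed rate, for illustration only

def check_budget(projected_calls, max_cost_usd=None, max_api_calls=None):
    """Return (ok, projected_cost); ok is False when a hard limit would be hit."""
    cost = projected_calls * COST_PER_CALL_USD
    if max_api_calls is not None and projected_calls > max_api_calls:
        return False, cost
    if max_cost_usd is not None and cost > max_cost_usd:
        return False, cost
    return True, cost

ok, cost = check_budget(projected_calls=5, max_cost_usd=0.10)      # within budget
blocked, _ = check_budget(projected_calls=30, max_cost_usd=0.25)   # would abort
```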

Saved Task Templates

Brand monitoring

{
  "queries": [
    { "q": "@yourbrand", "label": "mentions" },
    { "q": "\"your brand\"", "label": "keywords" }
  ],
  "mode": "both",
  "analysisLevel": "light",
  "persistState": true
}

Competitor tracking

{
  "queries": [
    { "q": "@competitor1", "label": "comp1" },
    { "q": "@competitor2", "label": "comp2" }
  ],
  "mode": "collect",
  "persistState": true
}

Trend snapshot

{
  "queries": ["#trending_topic"],
  "mode": "analyze",
  "analysisLevel": "full",
  "maxResults": 100
}

AI agent pipeline

{
  "queries": ["AI agents", "autonomous AI"],
  "mode": "both",
  "analysisLevel": "light",
  "sinceMinutes": 60,
  "webhook": "https://your-api.com/webhook",
  "outputFormat": "json"
}

MCP Integration

This actor can be called from MCP-enabled AI agents:

{
  "name": "x_feed_monitor",
  "description": "Monitor X/Twitter for keywords, hashtags, or accounts",
  "parameters": {
    "queries": ["array of search queries"],
    "mode": "collect | analyze | both",
    "maxResults": 50
  }
}

Error Handling

Partial success behavior

  • One query failing does NOT abort the run
  • Errors are logged and included in the run summary
  • Posts from successful queries are still collected
  • Final status: "partial" if any errors, "success" if none, "failed" if ALL queries failed
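The final-status rule above can be sketched as:

```python
# Derive the run status from per-query outcomes, per the rule above:
# all succeeded -> "success", some -> "partial", none -> "failed".

def run_status(query_results):
    """query_results: dict mapping query -> True if it succeeded."""
    outcomes = list(query_results.values())
    if all(outcomes):
        return "success"
    if any(outcomes):
        return "partial"
    return "failed"

s1 = run_status({"@a": True, "#b": True})
s2 = run_status({"@a": True, "#b": False})
s3 = run_status({"@a": False, "#b": False})
```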

Error array format

{
  query?: string,
  post_id?: string,
  stage: "collect" | "analyze" | "output",
  message: string,
  timestamp: string
}

Analysis failure isolation

  • Analysis runs per-post or in small batches
  • If analysis fails for a post, it gets analysis: null
  • Collection results are preserved regardless of analysis failures

Run Summary

Available in KV store key OUTPUT:

{
  run_id: string,
  started_at: string,
  completed_at: string,
  status: "success" | "partial" | "failed",
  total_posts: number,
  total_analyzed: number,
  total_errors: number,
  queries: { [hash]: QueryResult },
  errors: ErrorEntry[],
  api_calls: number,
  estimated_cost_usd: number,
  dataset_id: string,
  dataset_url: string,
  exports?: { json?: string, csv?: string }
}

Webhook Payload

POST to webhook URL on completion:

{
  "run_id": "abc123",
  "status": "success",
  "total_posts": 150,
  "dataset_url": "https://api.apify.com/v2/datasets/xyz/items",
  "errors_count": 0
}
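On the receiving side, it is worth validating the payload before acting on it. A minimal sketch using only the standard library; the required-field set is taken from the example payload above, and the surrounding HTTP framework is left to you:

```python
import json

# Validate an incoming webhook body before trusting it. The field names
# match the example payload; any missing field raises early.
REQUIRED = {"run_id", "status", "total_posts", "dataset_url", "errors_count"}

def parse_webhook(body: bytes) -> dict:
    payload = json.loads(body)
    missing = REQUIRED - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return payload

payload = parse_webhook(
    b'{"run_id": "abc123", "status": "success", "total_posts": 150, '
    b'"dataset_url": "https://api.apify.com/v2/datasets/xyz/items", '
    b'"errors_count": 0}'
)
```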

API Key Setup

  1. Go to console.x.ai
  2. Sign in with your X/Twitter account
  3. Create an API key
  4. Enter it in the grokApiKey input field or set GROK_API_KEY environment variable

The xAI API is free until June 2025.

Changelog

v1.0.0

  • Initial release
  • Canonical schema v1.0.0
  • Incremental sync with cursor persistence
  • Collect/analyze mode separation
  • Cost estimation and limits
  • Webhook notifications
  • JSON/CSV exports