X Feed Monitor
Pricing: Pay per usage
Monitor X/Twitter in real-time using Grok's x_search API. Collect posts by keyword, hashtag, or @mention with engagement metrics. Optional sentiment analysis. Returns post IDs, timestamps, authors, and URLs. Perfect for brand monitoring, trend tracking, and social listening.
Developer: Quadruped
Production-grade X/Twitter monitoring feed. Stable schema. Incremental sync. Not a scraper - infrastructure.
Features
- Canonical versioned schema for all posts (v1.0.0)
- Feed mode with cursor persistence (incremental pulls)
- Collect vs analyze separation - collection works even if analysis fails
- Hard cost limits - abort before exceeding budget
- Webhook notifications on completion
- Multiple export formats (dataset, JSON, CSV)
- Partial success allowed - one query failing doesn't kill the run
Quick Start
One-time fetch
```json
{
  "queries": ["@anthropic", "#AI"],
  "mode": "collect",
  "maxResults": 50,
  "includeMetrics": true
}
```
Scheduled feed (incremental sync)
Run 1 (initial):
```json
{
  "queries": ["brandname"],
  "persistState": true
}
```
Result: Fetches latest 50 posts, saves cursor to state.
Run 2 (scheduled, 1 hour later):
```json
{
  "queries": ["brandname"],
  "persistState": true
}
```
Result: Only fetches NEW posts since Run 1 (uses stored cursor).
Run 3+: Continues incrementally.
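The incremental behavior above can be sketched in a few lines. This is an illustrative model only, not the actor's implementation: a plain dict stands in for the KV store, and `fetch_posts` is a placeholder for the real x_search call that returns posts newest-first.

```python
def fetch_posts(query, since_id=None):
    # Placeholder for the real x_search call; returns posts newest-first.
    all_posts = [{"post_id": "103"}, {"post_id": "102"}, {"post_id": "101"}]
    if since_id is None:
        return all_posts
    return [p for p in all_posts if int(p["post_id"]) > int(since_id)]

def run_query(query, state):
    cursor = state.get(query)                 # None on the first run
    posts = fetch_posts(query, since_id=cursor)
    if posts:
        # Persist the newest post_id so the next run only fetches newer posts.
        state[query] = max(posts, key=lambda p: int(p["post_id"]))["post_id"]
    return posts

state = {}                                    # stand-in for the KV store
first = run_query("brandname", state)         # initial pull: all posts
second = run_query("brandname", state)        # incremental: nothing new
```

With `persistState: true`, the saved cursor plays the role of `state[query]` here: Run 2 starts from the stored ID instead of refetching everything.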
With sentiment analysis
```json
{
  "queries": ["@yourbrand"],
  "mode": "both",
  "analysisLevel": "light",
  "persistState": true
}
```
Input Reference
| Field | Type | Default | Description |
|---|---|---|---|
| queries | array | required | Keywords, hashtags, or accounts to monitor |
| mode | string | "collect" | "collect", "analyze", or "both" |
| maxResults | number | 50 | Hard cap per query (max 100) |
| maxPages | number | 1 | Pagination limit per query |
| maxCostUsd | number | - | Hard abort if projected cost exceeds this |
| maxApiCalls | number | - | Alternative hard cap on API calls |
| sinceId | string | - | Only fetch posts newer than this ID |
| sinceMinutes | number | - | Only fetch posts from last N minutes |
| includeReplies | boolean | true | Include reply posts |
| includeReposts | boolean | true | Include repost/retweet posts |
| includeQuotes | boolean | true | Include quote posts |
| analysisLevel | string | "none" | "none", "light", or "full" |
| includeMetrics | boolean | true | Include engagement metrics |
| persistState | boolean | true | Save cursor for incremental runs |
| stateKey | string | "STATE" | KV store key for state |
| dedupe | boolean | true | Deduplicate by post_id within run |
| estimateOnly | boolean | false | Output cost estimate without fetching |
| webhook | string | - | URL to POST on completion |
| outputFormat | string | "dataset" | Additional export format |
| grokApiKey | string | - | xAI API key (or set GROK_API_KEY env var) |
Query format
Queries can be strings or objects:
```json
{
  "queries": [
    "@anthropic",
    "#AI",
    {
      "q": "machine learning",
      "type": "keyword",
      "label": "ML mentions",
      "limit": 25
    }
  ]
}
```
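One plausible way to handle the mixed string/object form is to normalize every query into a full object before fetching. The sketch below is an assumption about how this could work, including the type inference from the leading character and the hashing scheme (the actor's actual `query_hash` algorithm is not documented here).

```python
import hashlib

def infer_type(q):
    # Hypothetical inference: "@" -> account, "#" -> hashtag, else keyword.
    if q.startswith("@"):
        return "account"
    if q.startswith("#"):
        return "hashtag"
    return "keyword"

def normalize(query, default_limit=50):
    # String queries become objects with defaults filled in.
    if isinstance(query, str):
        query = {"q": query}
    q = query["q"]
    return {
        "q": q,
        "type": query.get("type", infer_type(q)),
        "label": query.get("label"),
        "limit": query.get("limit", default_limit),
        # Short stable hash, like the query_hash seen in query_context.
        "query_hash": hashlib.sha1(q.encode()).hexdigest()[:8],
    }

queries = [
    "@anthropic",
    {"q": "machine learning", "label": "ML mentions", "limit": 25},
]
normalized = [normalize(q) for q in queries]
```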
Output Schema (v1.0.0)
Every post in the dataset has this structure:
```
{
  schema_version: "1.0.0",
  platform: "x",
  post_id: string,
  url: string,
  created_at: string,        // ISO 8601
  collected_at: string,      // ISO 8601
  author_handle: string,
  author_id: string | null,
  post_type: "post" | "reply" | "repost" | "quote",
  text: string,
  lang: string | null,
  metrics: {                 // null if includeMetrics: false
    likes: number,
    reposts: number,
    replies: number,
    views: number | null
  } | null,
  analysis: {                // null if mode: "collect"
    level: "light" | "full",
    model: string,
    prompt_version: string,
    sentiment: "positive" | "negative" | "neutral" | "mixed",
    sentiment_score?: number,
    topics: string[],
    summary: string | null,
    risk_flags: string[]
  } | null,
  query_context: {
    query: string,
    query_hash: string,
    label: string | null,
    tool: string,
    run_id: string,
    cursor: string | null
  },
  source: {
    provider: "grok",
    endpoint: string,
    api_version: string
  }
}
```
Important: the `metrics` and `analysis` fields are always present. They are `null` when not populated, never omitted.
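That "null, never omitted" guarantee is what makes the schema safe to consume without defensive `.get()` calls everywhere. A minimal consumer-side check (the key list here is a subset chosen for illustration, not the full schema):

```python
# Keys that must exist on every post, even when their value is null/None.
REQUIRED_KEYS = {"schema_version", "platform", "post_id", "metrics", "analysis"}

def missing_keys(post):
    # Returns the keys a post is missing; an empty list means the shape is OK.
    return sorted(REQUIRED_KEYS - post.keys())

ok = {"schema_version": "1.0.0", "platform": "x", "post_id": "1",
      "metrics": None, "analysis": None}          # valid: nulls are present
bad = {"schema_version": "1.0.0", "platform": "x", "post_id": "1"}
```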
Cost Expectations
| Scenario | Est. API calls | Est. cost |
|---|---|---|
| 5 queries × 50 posts, collect only | 5 | ~$0.06 |
| 5 queries × 50 posts, with light analysis | 10 | ~$0.15 |
| 10 queries × 100 posts, full analysis | 30 | ~$0.75 |
Use `estimateOnly: true` to get a cost estimate without making API calls.
Saved Task Templates
Brand monitoring
```json
{
  "queries": [
    { "q": "@yourbrand", "label": "mentions" },
    { "q": "\"your brand\"", "label": "keywords" }
  ],
  "mode": "both",
  "analysisLevel": "light",
  "persistState": true
}
```
Competitor tracking
```json
{
  "queries": [
    { "q": "@competitor1", "label": "comp1" },
    { "q": "@competitor2", "label": "comp2" }
  ],
  "mode": "collect",
  "persistState": true
}
```
Trend snapshot
```json
{
  "queries": ["#trending_topic"],
  "mode": "analyze",
  "analysisLevel": "full",
  "maxResults": 100
}
```
AI agent pipeline
```json
{
  "queries": ["AI agents", "autonomous AI"],
  "mode": "both",
  "analysisLevel": "light",
  "sinceMinutes": 60,
  "webhook": "https://your-api.com/webhook",
  "outputFormat": "json"
}
```
MCP Integration
This actor can be called from MCP-enabled AI agents:
```json
{
  "name": "x_feed_monitor",
  "description": "Monitor X/Twitter for keywords, hashtags, or accounts",
  "parameters": {
    "queries": ["array of search queries"],
    "mode": "collect | analyze | both",
    "maxResults": 50
  }
}
```
Error Handling
Partial success behavior
- One query failing does NOT abort the run
- Errors are logged and included in the run summary
- Posts from successful queries are still collected
- Final status: "success" if no errors, "partial" if any errors, "failed" if ALL queries failed
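The status rule reduces to a three-way decision over per-query outcomes. A minimal sketch (per-query booleans are a simplification; the real summary also counts analyze- and output-stage errors):

```python
def final_status(per_query_ok):
    # per_query_ok: one boolean per query, True if that query succeeded.
    if all(per_query_ok):
        return "success"
    if any(per_query_ok):
        return "partial"      # some queries failed, some succeeded
    return "failed"           # every query failed
```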
Error array format
```
{
  query?: string,
  post_id?: string,
  stage: "collect" | "analyze" | "output",
  message: string,
  timestamp: string
}
```
Analysis failure isolation
- Analysis runs per-post or in small batches
- If analysis fails for a post, it gets `analysis: null`
- Collection results are preserved regardless of analysis failures
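The isolation pattern is a per-post try/except that downgrades the failure to `analysis: null` and records an error entry. A sketch under that assumption (`flaky` is a stand-in analyzer, not the actor's model call):

```python
def safe_analyze(post, analyzer, errors):
    try:
        post["analysis"] = analyzer(post)
    except Exception as exc:
        post["analysis"] = None            # collection result is preserved
        errors.append({"post_id": post["post_id"],
                       "stage": "analyze",
                       "message": str(exc)})
    return post

def flaky(post):
    # Stand-in analyzer that fails for one post to simulate a model error.
    if post["post_id"] == "2":
        raise RuntimeError("model timeout")
    return {"level": "light", "sentiment": "neutral"}

errors = []
posts = [safe_analyze({"post_id": pid, "text": "hi"}, flaky, errors)
         for pid in ("1", "2")]
```

Both posts survive into the dataset; only the failed one carries `analysis: null` plus an entry in the error array.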
Run Summary
Available in the KV store key `OUTPUT`:
```
{
  run_id: string,
  started_at: string,
  completed_at: string,
  status: "success" | "partial" | "failed",
  total_posts: number,
  total_analyzed: number,
  total_errors: number,
  queries: { [hash]: QueryResult },
  errors: ErrorEntry[],
  api_calls: number,
  estimated_cost_usd: number,
  dataset_id: string,
  dataset_url: string,
  exports?: { json?: string, csv?: string }
}
```
Webhook Payload
POST to webhook URL on completion:
```json
{
  "run_id": "abc123",
  "status": "success",
  "total_posts": 150,
  "dataset_url": "https://api.apify.com/v2/datasets/xyz/items",
  "errors_count": 0
}
```
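The payload is a projection of the run summary. A sketch of that mapping (the exact field selection is an assumption based on the example above; note `errors_count` in the payload corresponds to `total_errors` in the summary):

```python
def webhook_payload(summary):
    # Select the webhook fields from a full run summary dict.
    return {
        "run_id": summary["run_id"],
        "status": summary["status"],
        "total_posts": summary["total_posts"],
        "dataset_url": summary["dataset_url"],
        "errors_count": summary["total_errors"],
    }

payload = webhook_payload({
    "run_id": "abc123",
    "status": "success",
    "total_posts": 150,
    "total_errors": 0,
    "dataset_url": "https://api.apify.com/v2/datasets/xyz/items",
})
```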
API Key Setup
- Go to console.x.ai
- Sign in with your X/Twitter account
- Create an API key
- Enter it in the `grokApiKey` input field or set the `GROK_API_KEY` environment variable
The API is free until June 2025.
Changelog
v1.0.0
- Initial release
- Canonical schema v1.0.0
- Incremental sync with cursor persistence
- Collect/analyze mode separation
- Cost estimation and limits
- Webhook notifications
- JSON/CSV exports