All notable changes to X Tweet Scraper are documented here. Dates in UTC.
- High-parallel runs now wait longer on temporary 429 responses before moving on
to the next page or search term.
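A minimal sketch of the kind of retry backoff this describes. The schedule, cap, and function name here are illustrative assumptions, not the actor's actual implementation:

```python
import random

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter for temporary HTTP 429s.

    `attempt` is 0-based; the raw delay grows as base * 2**attempt,
    capped at `cap` seconds, then jittered so parallel workers
    desynchronize instead of retrying in lockstep.
    """
    raw = min(cap, base * (2 ** attempt))
    return random.uniform(0, raw)

# On a 429, a worker sleeps backoff_delay(n) before retrying, and only
# moves on to the next page or search term once retries are exhausted.
```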
- Search terms and handle inputs now normalize accidental `_HH:MM:SS_UTC`
  suffixes on `from:`/`to:` handles and convert `since:`/`until:` UTC
  date-time values to `since_time:`/`until_time:`.
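The normalization above can be sketched roughly as follows. The function name and exact regexes are illustrative assumptions; only the operator names come from the changelog entry:

```python
import re
from datetime import datetime, timezone

def normalize_term(term: str) -> str:
    """Sketch of the input normalization: strip accidental
    _HH:MM:SS_UTC suffixes from from:/to: handles, and convert
    since:/until: UTC date-times to epoch-based since_time:/until_time:."""
    # from:user_12:30:00_UTC -> from:user
    term = re.sub(r'\b(from|to):(\w+)_\d{2}:\d{2}:\d{2}_UTC\b',
                  r'\1:\2', term)

    # since:2024-01-01_00:00:00_UTC -> since_time:1704067200
    def to_epoch(m: re.Match) -> str:
        dt = datetime.strptime(m.group(2), '%Y-%m-%d_%H:%M:%S')
        epoch = int(dt.replace(tzinfo=timezone.utc).timestamp())
        return f'{m.group(1)}_time:{epoch}'

    return re.sub(
        r'\b(since|until):(\d{4}-\d{2}-\d{2}_\d{2}:\d{2}:\d{2})_UTC\b',
        to_epoch, term)
```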
- Custom validation error messages on `maxItems`, `lang`, `within`, and
  `geocode` for human-readable feedback.
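A sketch of what such human-readable validation might look like. The wording, limits, and function name are illustrative assumptions; only the four field names come from the entry above:

```python
import re

def validate_input(actor_input: dict) -> list[str]:
    """Return human-readable validation errors for a few input fields."""
    errors = []
    max_items = actor_input.get('maxItems')
    if max_items is not None and (not isinstance(max_items, int) or max_items < 1):
        errors.append('maxItems must be a positive integer, e.g. 100.')
    lang = actor_input.get('lang')
    if lang is not None and not re.fullmatch(r'[a-z]{2}', lang):
        errors.append("lang must be a two-letter ISO 639-1 code, e.g. 'en'.")
    within = actor_input.get('within')
    if within is not None and not re.fullmatch(r'\d+(km|mi)', within):
        errors.append("within must be a distance like '15km' or '10mi'.")
    geocode = actor_input.get('geocode')
    if geocode is not None and not re.fullmatch(
            r'-?\d+(\.\d+)?,-?\d+(\.\d+)?,\d+(km|mi)', geocode):
        errors.append("geocode must be 'lat,long,radius', e.g. '37.78,-122.40,10km'.")
    return errors
```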
- `type` field enum in dataset schema for MCP agent discovery.
- Nested `properties` on `quoted_tweet` and `retweeted_tweet` so AI agents
  can chain on sub-fields.
- `/with_replies`, `/media`, `/likes`, `/highlights`, `/articles`, and
  `/superfollows` now accepted as profile URLs.
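One way to recognize such profile URLs, sketched as a regex. The pattern and function name are illustrative assumptions; the actor's actual routing logic may differ:

```python
import re

# Matches profile URLs, optionally ending in one of the accepted tab
# suffixes; handles on X are at most 15 word characters.
PROFILE_URL = re.compile(
    r'^https?://(?:www\.)?(?:x|twitter)\.com/(\w{1,15})'
    r'(?:/(?:with_replies|media|likes|highlights|articles|superfollows))?/?$'
)

def extract_handle(url: str):
    """Return the handle if `url` is a profile URL, else None."""
    m = PROFILE_URL.match(url)
    return m.group(1) if m else None
```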
- Soft-cap truncation now emits a status message so users see when inputs were
capped.
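The soft-cap behavior amounts to something like the following. Names and message wording are illustrative; in the actor the message would surface via the run status rather than a return value:

```python
def apply_soft_cap(items: list, cap: int):
    """Truncate `items` to `cap` entries.

    Returns (items, status): `status` is a user-visible message when
    truncation occurred, or None when the input fit under the cap.
    """
    if len(items) <= cap:
        return items, None
    msg = f'Input capped: {len(items)} items provided, processing first {cap}.'
    return items[:cap], msg
```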
- CHANGELOG linked from Apify Information tab.
- Dataset schema `required` relaxed from `["id", "text"]` to `["id"]`.
  Protects against X returning withheld or deleted tweets without text,
  which previously caused entire batch pushes to fail with HTTP 400.
- Error and info exit paths no longer write billable rows to the default
  dataset; they use `Actor.setStatusMessage()` and logs only.
- Tweet-ID description corrected from "up to 100" to "up to 10,000 per run"
(chunked transparently).
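The transparent chunking can be sketched as a simple batching helper. The function name and the per-request batch size of 100 are assumptions for illustration:

```python
def chunk_ids(tweet_ids: list[str], size: int = 100) -> list[list[str]]:
    """Split up to 10,000 tweet IDs into request-sized batches."""
    return [tweet_ids[i:i + size] for i in range(0, len(tweet_ids), size)]
```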
- `maxItems` default and prefill aligned at 100 to prevent a 20x billing
  surprise.
- Default input prefills only `startUrls` (previously also prefilled
  `twitterHandles`, `listIds`, and `twitterContent`, which were silently
  ignored by routing).
- `createdAt.example` in dataset schema corrected to X's native format
  (`Tue Jun 02 20:12:29 +0000 2009`).
- README drops the "cheapest" claim; title updated to "50+ filters" to
  match the actual filter count.
- Redundant `required: ["maxItems"]` removed from input schema (the field
  has `default: 200`).
- Initial release on Apify Store.