GETTR Scraper: Profiles, Posts, Timelines & Search
Scrape public GETTR profiles, user timelines, posts, search results, media URLs, author data, and engagement metrics with a fast Python API-first Apify Actor.
Pricing: from $1.00 / 1,000 results
Developer: Inus Grobler
GETTR Scraper for Apify
Scrape public GETTR profiles, user timelines, posts, and search results with a fast Python Apify Actor. This GETTR scraper is built for OSINT research, social media monitoring, journalism, brand safety, disinformation analysis, and data engineering workflows that need structured GETTR data without running a browser.
The Actor uses GETTR's public web API endpoints with async HTTP requests. It is written entirely in Python with the official Apify Python SDK and httpx, and supports Apify Proxy, cursor pagination, automatic retries, and standardized JSON output.
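Cursor pagination of the kind described above can be sketched as a small generic helper. This is an illustrative sketch, not the Actor's actual code: `fetch_page` stands in for any API call that returns a page of items plus an opaque cursor for the next page.

```python
from typing import Callable, Iterator, List, Optional, Tuple

# A page fetcher takes the previous cursor (None for the first page)
# and returns (items, next_cursor); next_cursor is None on the last page.
PageFetcher = Callable[[Optional[str]], Tuple[List[dict], Optional[str]]]

def paginate(fetch_page: PageFetcher, max_items: int) -> Iterator[dict]:
    """Follow an opaque pagination cursor until the API is exhausted
    or max_items records have been yielded."""
    cursor: Optional[str] = None
    yielded = 0
    while yielded < max_items:
        items, cursor = fetch_page(cursor)
        if not items:
            break
        for item in items:
            yield item
            yielded += 1
            if yielded >= max_items:
                break
        if cursor is None:  # last page reached
            break
```

The same loop works for timelines and search results alike, since both expose the cursor as an opaque token.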
What You Can Scrape
- GETTR user profiles by username or profile URL
- Public GETTR user timelines and posts
- Individual GETTR posts from post URLs
- Keyword and hashtag search results
- Profile search results when post search is unavailable for a term
- Images, videos, post stats, author details, and raw API fragments for auditability
Why Use This GETTR Scraper
This Actor is designed for high-throughput API-first scraping. It avoids Playwright, Selenium, Node.js, TypeScript, and Cheerio, so runs are lighter, faster, and cheaper than browser automation for public GETTR data collection.
Use it when you need a GETTR API scraper, GETTR profile scraper, GETTR post scraper, GETTR timeline scraper, GETTR search scraper, or social media OSINT dataset that can run on Apify infrastructure with proxy rotation and structured dataset exports.
Input
You can configure the Actor from Apify Console, API, CLI, scheduler, or integrations.
| Field | Type | Description |
|---|---|---|
| startUrls | array of strings | GETTR profile, post, or search URLs. |
| usernames | array of strings | GETTR usernames, with or without the leading @. |
| searchTerms | array of strings | Keywords or hashtags to search for. |
| maxItems | integer | Maximum number of post/profile records to save. |
| maxConcurrency | integer | Advanced: maximum number of concurrent HTTP requests. |
| includeRawData | boolean | Advanced: include the original GETTR API fragments. Disabled by default for clean exports. |
| proxyConfiguration | object | Advanced: Apify Proxy or custom proxy settings. |
Example input:
```json
{
  "startUrls": [
    "https://gettr.com/user/support",
    "https://gettr.com/post/p2bipx0ac19"
  ],
  "usernames": ["support"],
  "searchTerms": ["trump", "#news"],
  "maxItems": 100,
  "maxConcurrency": 12,
  "includeRawData": false,
  "proxyConfiguration": { "useApifyProxy": true }
}
```
At least one of startUrls, usernames, or searchTerms must be provided.
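That constraint can be checked up front before a run starts. The helper below is an illustrative sketch (the function name is not part of the Actor's documented API):

```python
def validate_input(actor_input: dict) -> None:
    """Raise ValueError unless at least one seed source is provided,
    mirroring the Actor's requirement on startUrls/usernames/searchTerms."""
    seed_keys = ("startUrls", "usernames", "searchTerms")
    if not any(actor_input.get(key) for key in seed_keys):
        raise ValueError(
            "Provide at least one of startUrls, usernames, or searchTerms."
        )
```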
Output
Results are saved to the default Apify Dataset. Each item has type: "post" or type: "user" and includes normalized fields for clean exports.
By default, output is clean and export-friendly. Enable includeRawData only when you need original GETTR API fragments for audit or advanced OSINT analysis.
Example post item:
```json
{
  "type": "post",
  "source": "user_timeline",
  "query": "support",
  "id": "p2bipx0ac19",
  "url": "https://gettr.com/post/p2bipx0ac19",
  "text": "Post text...",
  "createdAt": "2023-03-15T12:40:43.158000+00:00",
  "likeCount": 6298,
  "repostCount": 8455,
  "replyCount": 4830,
  "mediaUrls": ["https://media.gettr.com/group/example/image.jpg"],
  "authorUsername": "milesguo",
  "authorDisplayName": "MILES GUO"
}
```
Example user item:
```json
{
  "type": "user",
  "source": "profile",
  "query": "support",
  "id": "support",
  "username": "support",
  "displayName": "Support & Help",
  "bio": "GETTR Support (Official Account)",
  "followersCount": 4105269,
  "followingCount": 1,
  "profileImageUrl": "https://media.gettr.com/group/example/avatar.png",
  "isVerified": true
}
```
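Because every item carries a `type` field, exported datasets are easy to post-process. A minimal sketch, assuming items shaped like the examples above:

```python
def split_items(items: list[dict]) -> tuple[list[dict], list[dict]]:
    """Separate dataset items into post and user records by their type field."""
    posts = [i for i in items if i.get("type") == "post"]
    users = [i for i in items if i.get("type") == "user"]
    return posts, users

def engagement(post: dict) -> int:
    """Total likes + reposts + replies for a post record;
    missing counters are treated as zero."""
    return sum(post.get(k, 0) for k in ("likeCount", "repostCount", "replyCount"))
```

For example, the sample post above has an engagement total of 6298 + 8455 + 4830 = 19583.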
Data Fields
Post records can include:
- Post ID and canonical GETTR URL
- Text content and detected language
- Created and updated timestamps
- Like, repost, reply, and view counts
- Media URLs for images and videos
- Link preview metadata
- Flat author fields for exports, including username, display name, profile image, follower count, following count, and verification status
- Optional raw event, post, stats, and user API fragments
User records can include:
- Username and user ID
- Display name and bio
- Location, website, and language
- Follower and following counts
- Verification status and badges
- Profile image and banner URLs
- Created and updated timestamps
- Optional raw user API fragment
Proxy Configuration
The Actor supports Apify Proxy through the standard proxyConfiguration input. Automatic Apify Proxy selection is the safest default for most users. Residential proxies can help for larger GETTR scraping jobs, but the Apify account running the Actor must have access to the selected proxy group.
For high-volume scraping, keep maxConcurrency conservative and use Apify Proxy sessions to reduce rate-limit and blocking risk.
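For example, to route requests through Apify's residential pool (assuming the account running the Actor has access to the RESIDENTIAL proxy group), the proxyConfiguration input would look like:

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```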
How It Works
The scraper targets GETTR web API endpoints used by the public frontend, including profile lookup, user timeline pagination, post lookup, post search, and profile search. It handles cursor pagination, retries transient 429 and 5xx errors with exponential backoff, rotates user agents, and creates Apify proxy sessions per target.
No browser automation is used by default. This keeps the Actor efficient and production-friendly for public GETTR data extraction.
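The retry behavior described above can be sketched as a generic helper. The delay values and the exception type here are illustrative defaults, not the Actor's exact implementation; a real caller would raise `RetryableError` on HTTP 429 or 5xx responses.

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")

class RetryableError(Exception):
    """Raised by the caller for transient failures (e.g. HTTP 429 or 5xx)."""

def retry_with_backoff(
    fn: Callable[[], T],
    max_attempts: int = 5,
    base_delay: float = 1.0,
    sleep: Callable[[float], None] = time.sleep,
) -> T:
    """Retry fn on RetryableError with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RetryableError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # waits roughly 1s, 2s, 4s, ... plus up to 1s of random jitter
            sleep(base_delay * 2 ** attempt + random.random())
    raise RuntimeError("unreachable")
```

Injecting `sleep` as a parameter keeps the helper testable without real delays.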
Use Cases
- OSINT investigations involving public GETTR accounts
- Social media monitoring and trend tracking
- Political, media, and influence research
- Dataset creation for NLP, entity extraction, and network analysis
- Brand safety and reputation monitoring
- Archiving public GETTR posts and profile metadata
Limitations
GETTR's web API is undocumented and subject to change. Some search endpoints may behave differently depending on the search term, region, proxy, or time of the request. The Actor includes fallback logic, but endpoint changes can still affect results. Private content, logged-in-only data, and deleted posts are not scraped.
Always use this Actor responsibly and comply with applicable laws, platform terms, and privacy requirements.
Local Development
Install dependencies and run with the Apify CLI:
```bash
pip install -r requirements.txt
apify run
```
Deploy to Apify:
```bash
apify push
```