Telegram Group & Channel Scraper — Messages, Media & Stats
Pricing
Pay per usage
Scrape public Telegram channels and groups without an API key. Extract messages, views, forwards, media URLs, member counts, and channel metadata. Supports pagination for historical data. Uses multiple fallback strategies (t.me/s/ public preview, web embeds, directory sites) for maximum reliability.
Developer

Ricardo Akiyoshi
Telegram Group & Channel Scraper
Scrape public Telegram channels and groups without an API key or bot token. Extract messages, views, forwards, media URLs, reactions, member counts, and full channel metadata using web-accessible endpoints only.
Features
- No API key required — scrapes via Telegram's public web preview, no authentication needed
- 3 fallback strategies — t.me/s/ preview, web embed widgets, and directory site metadata
- Full message data — text, date, view count, forward count, author, replies, reactions
- Media extraction — photos, videos, documents, stickers, voice messages, polls, locations
- Channel metadata — name, description, member count, avatar, verified status
- Pagination — automatically loads older messages up to your configured limit
- Date filtering — scrape only messages within a specific date range
- Text search — filter messages by keyword
- Multi-channel — scrape multiple channels in a single run
- Reaction data — emoji reactions with counts per message
- Comment counts — reply counts for channels with discussions enabled
- Deduplication — never scrapes or charges for the same message twice
- Rate limit handling — detects Telegram rate limits and retries gracefully
- Proxy support — optional Apify proxy for large-scale scraping
- Pay-per-event — $0.003 per message scraped
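The deduplication guarantee above can be sketched as a set of (channel, message ID) keys. This is a minimal illustration using this actor's output field names, not the actor's actual implementation:

```python
def dedupe_messages(messages, seen=None):
    """Yield only messages whose (channel, id) key has not been seen.

    `messages` is an iterable of dicts with `channelUrl` and `messageId`
    fields, matching this actor's output schema. Pass a persistent `seen`
    set to carry keys across runs (e.g. stored in a key-value store).
    """
    seen = set() if seen is None else seen
    for msg in messages:
        key = (msg["channelUrl"], msg["messageId"])
        if key in seen:
            continue  # already scraped (and charged for), so skip it
        seen.add(key)
        yield msg
```

Keying on `(channelUrl, messageId)` rather than `messageId` alone matters because message IDs are only unique within a single channel.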
Scraping Strategies
The scraper tries three strategies in order for maximum reliability:
Strategy 1: Public Preview (t.me/s/)
The fastest and most data-rich method. Telegram serves a public preview page at t.me/s/channelname that contains server-rendered HTML with messages, media, view counts, and reactions. Supports pagination via the ?before= parameter.
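A rough sketch of this strategy: fetch the preview page, pull out message IDs and text, then request older messages with `?before=`. The CSS class names and `data-post` attribute below reflect Telegram's current preview markup, which is undocumented and may change without notice:

```python
import re


def parse_preview(html):
    """Extract (message_id, text) pairs from a t.me/s/ preview page.

    The preview wraps each post in a block whose data-post attribute
    holds "channelname/<id>". Regex-on-HTML is crude but adequate for
    a sketch; a real scraper would use a proper HTML parser.
    """
    posts = re.findall(r'data-post="[^/"]+/(\d+)"', html)
    texts = re.findall(
        r'class="tgme_widget_message_text[^"]*"[^>]*>(.*?)</div>', html, re.S
    )
    return list(zip((int(p) for p in posts), texts))


def next_page_url(channel, oldest_id):
    # Pagination: request messages older than oldest_id via ?before=.
    return f"https://t.me/s/{channel}?before={oldest_id}"
```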
Strategy 2: Web Embed Widget
If the preview page is unavailable, the scraper falls back to fetching individual message embeds at t.me/channelname/123?embed=1. Slower (one request per message) but works for some channels where the preview is blocked.
Strategy 3: Directory Sites (Metadata Fallback)
If neither web method can retrieve messages (e.g., the channel is restricted), the scraper pulls channel metadata (name, description, member count, category) from third-party directories like TGStat and TelegramChannels.me.
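The fallback order above can be expressed as a chain that tries each strategy and returns the first non-empty result. A sketch with stand-in strategy functions (the actor's internals are not published):

```python
def scrape_with_fallback(channel, strategies):
    """Try each (name, fn) strategy in order; return the first result.

    Each fn takes a channel name and returns a list of messages; it may
    raise or return an empty list on failure. The winning strategy name
    is returned so output rows can carry a `scrapingStrategy` field.
    """
    for name, fn in strategies:
        try:
            messages = fn(channel)
        except Exception:
            continue  # e.g. HTTP error or rate limit: try the next strategy
        if messages:
            return name, messages
    return "metadata-only", []
```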
Use Cases
Competitive Intelligence
Monitor competitor Telegram channels to track announcements, product launches, pricing changes, and community engagement.
Market Research
Analyze public discussion channels to understand sentiment, trending topics, and user pain points in your market.
Brand Monitoring
Track mentions and discussions about your brand across multiple Telegram channels and groups.
Crypto & Finance
Monitor crypto project announcement channels for token launches, partnership news, and community sentiment.
Content Aggregation
Collect and analyze content from industry channels to identify trends, popular formats, and engagement patterns.
Academic Research
Gather structured data from public Telegram channels for social media research, communication studies, or information flow analysis.
OSINT & Investigations
Extract publicly available information from Telegram channels for open-source intelligence gathering.
Input
| Field | Type | Default | Description |
|---|---|---|---|
| `channelUrls` | Array of strings | (required) | Telegram channel/group URLs or usernames |
| `maxMessages` | Integer | 500 | Max messages per channel (0 = unlimited) |
| `includeMedia` | Boolean | true | Extract media URLs (photos, videos, docs) |
| `includeComments` | Boolean | false | Extract reply/comment counts |
| `includeReactions` | Boolean | true | Extract emoji reaction counts |
| `dateFrom` | String | (empty) | Start date filter (YYYY-MM-DD) |
| `dateTo` | String | (empty) | End date filter (YYYY-MM-DD) |
| `searchQuery` | String | (empty) | Text search filter (case-insensitive) |
| `maxConcurrency` | Integer | 2 | Parallel requests (lower = safer) |
| `proxyConfiguration` | Object | (none) | Apify proxy settings |
Accepted URL Formats
All of these are valid for `channelUrls`:

- `https://t.me/channelname`
- `https://t.me/s/channelname`
- `t.me/channelname`
- `@channelname`
- `channelname`
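All of these formats reduce to a bare username. A small normalizer (illustrative only, not the actor's code) might look like:

```python
import re


def normalize_channel(value):
    """Reduce any accepted channelUrls format to a bare username."""
    value = value.strip()
    value = re.sub(r"^https?://", "", value)    # drop the scheme, if any
    value = re.sub(r"^t\.me/(s/)?", "", value)  # drop host and /s/ prefix
    # Drop a leading @, trailing path segments, and query parameters.
    return value.lstrip("@").split("/")[0].split("?")[0]
```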
Input Examples
Scrape a single channel (latest 100 messages)
{"channelUrls": ["https://t.me/durov"],"maxMessages": 100,"includeMedia": true,"includeReactions": true}
Scrape multiple channels with date filter
{"channelUrls": ["https://t.me/telegram","t.me/durov","@TelegramTips"],"maxMessages": 500,"dateFrom": "2025-01-01","dateTo": "2026-03-01","includeComments": true}
Search for specific keywords
{"channelUrls": ["https://t.me/CoinDesk"],"maxMessages": 1000,"searchQuery": "bitcoin","includeMedia": false}
Large-scale scrape with proxies
{"channelUrls": ["https://t.me/channel1","https://t.me/channel2","https://t.me/channel3"],"maxMessages": 5000,"maxConcurrency": 3,"proxyConfiguration": {"useApifyProxy": true,"apifyProxyGroups": ["RESIDENTIAL"]}}
Output
Each message is saved to the default dataset with the following fields:
{"channelName": "Durov's Channel","channelDescription": "My personal channel about technology and privacy.","memberCount": 1250000,"channelUrl": "https://t.me/durov","isVerified": true,"messageId": 312,"text": "We just launched a new feature that allows users to...","date": "2026-02-15T14:30:00.000Z","views": 842000,"forwards": 12500,"author": null,"forwardedFrom": null,"replyToMessageId": null,"mediaUrl": "https://cdn4.telegram-cdn.org/file/abc123.jpg","mediaType": "photo","mediaCaption": null,"reactions": [{ "emoji": "\ud83d\udc4d", "count": 15200 },{ "emoji": "\u2764\ufe0f", "count": 8400 },{ "emoji": "\ud83d\udd25", "count": 3100 }],"commentCount": null,"messageUrl": "https://t.me/durov/312","scrapedAt": "2026-03-01T10:00:00.000Z","scrapingStrategy": "preview"}
Output Fields
| Field | Type | Description |
|---|---|---|
| `channelName` | String | Display name of the channel |
| `channelDescription` | String | Channel bio/description |
| `memberCount` | Integer | Number of subscribers/members |
| `channelUrl` | String | Direct link to the channel |
| `isVerified` | Boolean | Whether the channel has a verified badge |
| `messageId` | Integer | Telegram message ID (unique within a channel) |
| `text` | String | Message text content |
| `date` | String | Message date in ISO 8601 format |
| `views` | Integer | Number of views |
| `forwards` | Integer | Number of forwards/shares |
| `author` | String | Message author (for group messages) |
| `forwardedFrom` | String | Original source if the message was forwarded |
| `replyToMessageId` | Integer | ID of the message being replied to |
| `mediaUrl` | String | URL of attached media (photo, video, etc.) |
| `mediaType` | String | Type of media: photo, video, document, sticker, audio, poll, etc. |
| `mediaCaption` | String | Caption or title for documents/polls |
| `reactions` | Array | Emoji reactions with counts |
| `commentCount` | Integer | Number of replies/comments |
| `messageUrl` | String | Direct link to the message |
| `scrapedAt` | String | Timestamp when the message was scraped |
| `scrapingStrategy` | String | Which strategy was used (preview, embed, metadata-only) |
Performance Tips
- Start small — test with `maxMessages: 10` before scaling to thousands
- Use proxies for multiple channels — Telegram rate-limits by IP; residential proxies work best
- Keep concurrency low — `maxConcurrency: 2` is recommended for Telegram
- Disable media for speed — set `includeMedia: false` if you only need text data
- Use date filters — narrow the time range to avoid scraping months of history
- Use search queries — filter messages during scraping to reduce output volume
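The date and search filters above can be reproduced client-side too, for example to post-filter an already-scraped dataset. A sketch using this actor's output fields (`date` in ISO 8601, `text`):

```python
def passes_filters(msg, date_from=None, date_to=None, search_query=None):
    """Check one message against dateFrom/dateTo/searchQuery-style filters.

    Date bounds are YYYY-MM-DD strings, compared lexically against the
    date portion of the ISO 8601 timestamp; the search is case-insensitive.
    """
    day = msg["date"][:10]  # "2026-02-15T14:30:00.000Z" -> "2026-02-15"
    if date_from and day < date_from:
        return False
    if date_to and day > date_to:
        return False
    if search_query and search_query.lower() not in (msg.get("text") or "").lower():
        return False
    return True
```

Lexical comparison of YYYY-MM-DD strings is safe because the format sorts chronologically.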
Rate Limiting
Telegram rate-limits web scraping based on IP address. This actor handles it by:
- Limiting to 20 requests per minute by default
- Adding delays between pagination requests (1.5 seconds)
- Detecting rate limit responses and retrying with backoff
- Supporting proxy rotation via Apify's proxy infrastructure
- Gracefully falling back to alternative strategies when blocked
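The detect-and-retry behavior can be sketched as exponential backoff around a fetch call. `fetch` here is a hypothetical function you supply, and the delays are illustrative rather than the actor's exact timings:

```python
import time


class RateLimited(Exception):
    """Raised when Telegram returns a rate-limit response (e.g. HTTP 429)."""


def fetch_with_backoff(fetch, url, max_retries=4, base_delay=1.5):
    """Call fetch(url); on a rate-limit signal, wait and retry.

    The delay doubles on each attempt: base_delay, 2x, 4x, ...
    The last failure is re-raised so callers can fall back to
    another strategy.
    """
    for attempt in range(max_retries):
        try:
            return fetch(url)
        except RateLimited:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```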
If you see rate limit warnings in the logs, consider:
- Reducing `maxConcurrency` to 1
- Using Apify residential proxies
- Scraping fewer channels per run
- Adding longer delays between runs
Limitations
- Public channels only — this scraper cannot access private channels or groups that require joining
- No user data — individual user profiles and phone numbers are not scraped
- Media URLs may expire — Telegram CDN URLs have limited validity; download media promptly
- View counts are approximate — Telegram shows rounded numbers (e.g., "1.2K" instead of "1,234")
- Embed strategy is slower — fetching one message at a time is 10-20x slower than the preview method
- No real-time streaming — this is a batch scraper, not a live monitor
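Because the preview page shows rounded counts, downstream code often normalizes them to plain integers before analysis. A small helper (illustrative, with the usual caveat that "1.2K" can only ever recover 1200, not the true count):

```python
def parse_count(value):
    """Convert Telegram's rounded display counts ("1.2K", "3.4M") to ints.

    Values that are already numeric pass through unchanged.
    """
    if isinstance(value, (int, float)):
        return int(value)
    value = value.strip().upper()
    for suffix, multiplier in (("K", 1_000), ("M", 1_000_000)):
        if value.endswith(suffix):
            return int(float(value[:-1]) * multiplier)
    return int(value)
```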
Pricing (Pay Per Event)
This actor uses Apify's pay-per-event pricing model. You are charged $0.003 for each message successfully scraped and saved to the dataset. Channel metadata extraction is free.
Example costs:
- 100 messages = $0.30
- 1,000 messages = $3.00
- 10,000 messages = $30.00
Legal Notice
This actor scrapes only publicly accessible Telegram content through web endpoints. No authentication bypass or private data access is performed. Users are responsible for ensuring their use of this tool complies with Telegram's Terms of Service and all applicable laws and regulations. This tool is designed for legitimate research, monitoring, and analysis purposes.
Integration — Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("sovereigntaylor/telegram-scraper").call(run_input={
    "channelUrls": ["https://t.me/durov"],
    "maxMessages": 50,
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("text", ""))
```
Integration — JavaScript
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('sovereigntaylor/telegram-scraper').call({
    channelUrls: ['https://t.me/durov'],
    maxMessages: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => console.log(item.text));
```