Telegram Group & Channel Scraper — Messages, Media & Stats

Pricing: Pay per usage
Developer: Ricardo Akiyoshi (Maintained by Community)

Telegram Group & Channel Scraper

Scrape public Telegram channels and groups without an API key or bot token. Extract messages, views, forwards, media URLs, reactions, member counts, and full channel metadata using web-accessible endpoints only.

Features

  • No API key required — scrapes via Telegram's public web preview, no authentication needed
  • 3 fallback strategies — t.me/s/ preview, web embed widgets, and directory site metadata
  • Full message data — text, date, view count, forward count, author, replies, reactions
  • Media extraction — photos, videos, documents, stickers, voice messages, polls, locations
  • Channel metadata — name, description, member count, avatar, verified status
  • Pagination — automatically loads older messages up to your configured limit
  • Date filtering — scrape only messages within a specific date range
  • Text search — filter messages by keyword
  • Multi-channel — scrape multiple channels in a single run
  • Reaction data — emoji reactions with counts per message
  • Comment counts — reply counts for channels with discussions enabled
  • Deduplication — never scrapes or charges for the same message twice
  • Rate limit handling — detects Telegram rate limits and retries gracefully
  • Proxy support — optional Apify proxy for large-scale scraping
  • Pay-per-event — $0.003 per message scraped

Scraping Strategies

The scraper tries three strategies in order for maximum reliability:

Strategy 1: Public Preview (t.me/s/)

The fastest and most data-rich method. Telegram serves a public preview page at t.me/s/channelname that contains server-rendered HTML with messages, media, view counts, and reactions. Supports pagination via the ?before= parameter.

Strategy 2: Web Embed Widget

If the preview page is unavailable, the scraper falls back to fetching individual message embeds at t.me/channelname/123?embed=1. Slower (one request per message) but works for some channels where the preview is blocked.

Strategy 3: Directory Sites (Metadata Fallback)

If neither web method can retrieve messages (e.g., the channel is restricted), the scraper pulls channel metadata (name, description, member count, category) from third-party directories like TGStat and TelegramChannels.me.

Use Cases

Competitive Intelligence

Monitor competitor Telegram channels to track announcements, product launches, pricing changes, and community engagement.

Market Research

Analyze public discussion channels to understand sentiment, trending topics, and user pain points in your market.

Brand Monitoring

Track mentions and discussions about your brand across multiple Telegram channels and groups.

Crypto & Finance

Monitor crypto project announcement channels for token launches, partnership news, and community sentiment.

Content Aggregation

Collect and analyze content from industry channels to identify trends, popular formats, and engagement patterns.

Academic Research

Gather structured data from public Telegram channels for social media research, communication studies, or information flow analysis.

OSINT & Investigations

Extract publicly available information from Telegram channels for open-source intelligence gathering.

Input

| Field | Type | Default | Description |
|---|---|---|---|
| channelUrls | Array of strings | (required) | Telegram channel/group URLs or usernames |
| maxMessages | Integer | 500 | Max messages per channel (0 = unlimited) |
| includeMedia | Boolean | true | Extract media URLs (photos, videos, docs) |
| includeComments | Boolean | false | Extract reply/comment counts |
| includeReactions | Boolean | true | Extract emoji reaction counts |
| dateFrom | String | (empty) | Start date filter (YYYY-MM-DD) |
| dateTo | String | (empty) | End date filter (YYYY-MM-DD) |
| searchQuery | String | (empty) | Text search filter (case-insensitive) |
| maxConcurrency | Integer | 2 | Parallel requests (lower = safer) |
| proxyConfiguration | Object | (none) | Apify proxy settings |

Accepted URL Formats

All of these are valid for channelUrls:

  • https://t.me/channelname
  • https://t.me/s/channelname
  • t.me/channelname
  • @channelname
  • channelname
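All of these forms reduce to a bare username. A minimal normalizer (an illustration, not the actor's own parsing) might look like:

```python
def normalize_channel(value: str) -> str:
    """Reduce any accepted channelUrls form to a bare username."""
    v = value.strip()
    # Strip scheme and t.me host, including the /s/ preview path and @ prefix.
    for prefix in ("https://t.me/s/", "https://t.me/", "http://t.me/",
                   "t.me/s/", "t.me/", "@"):
        if v.startswith(prefix):
            v = v[len(prefix):]
            break
    return v.rstrip("/")
```

For example, normalize_channel("https://t.me/s/channelname") and normalize_channel("@channelname") both return "channelname".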

Input Examples

Scrape a single channel (latest 100 messages)

{
  "channelUrls": ["https://t.me/durov"],
  "maxMessages": 100,
  "includeMedia": true,
  "includeReactions": true
}

Scrape multiple channels with date filter

{
  "channelUrls": [
    "https://t.me/telegram",
    "t.me/durov",
    "@TelegramTips"
  ],
  "maxMessages": 500,
  "dateFrom": "2025-01-01",
  "dateTo": "2026-03-01",
  "includeComments": true
}

Search for specific keywords

{
  "channelUrls": ["https://t.me/CoinDesk"],
  "maxMessages": 1000,
  "searchQuery": "bitcoin",
  "includeMedia": false
}

Large-scale scrape with proxies

{
  "channelUrls": [
    "https://t.me/channel1",
    "https://t.me/channel2",
    "https://t.me/channel3"
  ],
  "maxMessages": 5000,
  "maxConcurrency": 3,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Output

Each message is saved to the default dataset with the following fields:

{
  "channelName": "Durov's Channel",
  "channelDescription": "My personal channel about technology and privacy.",
  "memberCount": 1250000,
  "channelUrl": "https://t.me/durov",
  "isVerified": true,
  "messageId": 312,
  "text": "We just launched a new feature that allows users to...",
  "date": "2026-02-15T14:30:00.000Z",
  "views": 842000,
  "forwards": 12500,
  "author": null,
  "forwardedFrom": null,
  "replyToMessageId": null,
  "mediaUrl": "https://cdn4.telegram-cdn.org/file/abc123.jpg",
  "mediaType": "photo",
  "mediaCaption": null,
  "reactions": [
    { "emoji": "👍", "count": 15200 },
    { "emoji": "❤️", "count": 8400 },
    { "emoji": "🔥", "count": 3100 }
  ],
  "commentCount": null,
  "messageUrl": "https://t.me/durov/312",
  "scrapedAt": "2026-03-01T10:00:00.000Z",
  "scrapingStrategy": "preview"
}

Output Fields

| Field | Type | Description |
|---|---|---|
| channelName | String | Display name of the channel |
| channelDescription | String | Channel bio/description |
| memberCount | Integer | Number of subscribers/members |
| channelUrl | String | Direct link to the channel |
| isVerified | Boolean | Whether the channel has a verified badge |
| messageId | Integer | Telegram message ID (unique within channel) |
| text | String | Message text content |
| date | String | Message date in ISO 8601 format |
| views | Integer | Number of views |
| forwards | Integer | Number of forwards/shares |
| author | String | Message author (for group messages) |
| forwardedFrom | String | Original source if message was forwarded |
| replyToMessageId | Integer | ID of the message being replied to |
| mediaUrl | String | URL of attached media (photo, video, etc.) |
| mediaType | String | Type of media: photo, video, document, sticker, audio, poll, etc. |
| mediaCaption | String | Caption or title for documents/polls |
| reactions | Array | Emoji reactions with counts |
| commentCount | Integer | Number of replies/comments |
| messageUrl | String | Direct link to the message |
| scrapedAt | String | Timestamp when the message was scraped |
| scrapingStrategy | String | Which strategy was used (preview, embed, metadata-only) |

Performance Tips

  1. Start small — test with maxMessages: 10 before scaling to thousands
  2. Use proxies for multiple channels — Telegram rate-limits by IP; residential proxies work best
  3. Keep concurrency low — maxConcurrency: 2 is recommended for Telegram
  4. Disable media for speed — set includeMedia: false if you only need text data
  5. Use date filters — narrow the time range to avoid scraping months of history
  6. Use search queries — filter messages by keyword during the run to reduce output volume

Rate Limiting

Telegram rate-limits web scraping based on IP address. This actor handles it by:

  • Limiting to 20 requests per minute by default
  • Adding delays between pagination requests (1.5 seconds)
  • Detecting rate limit responses and retrying with backoff
  • Supporting proxy rotation via Apify's proxy infrastructure
  • Gracefully falling back to alternative strategies when blocked
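The retry behavior described above can be sketched as a small exponential-backoff wrapper. This is illustrative only (the fetch callable and its (status, body) return shape are assumptions, not the actor's actual internals):

```python
import time

def fetch_with_backoff(fetch, max_retries=4, base_delay=1.5):
    """Call fetch() -> (status, body); on an HTTP 429 rate-limit response,
    wait and retry with exponential backoff (1.5s, 3s, 6s, ...)."""
    for attempt in range(max_retries + 1):
        status, body = fetch()
        if status != 429:  # not rate-limited: return the result
            return body
        if attempt == max_retries:
            break
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("rate-limited after all retries")
```

The doubling delay gives Telegram's per-IP limiter time to reset before the next attempt, which is why reducing concurrency helps more than retrying faster.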

If you see rate limit warnings in the logs, consider:

  • Reducing maxConcurrency to 1
  • Using Apify residential proxies
  • Scraping fewer channels per run
  • Adding longer delays between runs

Limitations

  • Public channels only — this scraper cannot access private channels or groups that require joining
  • No user data — individual user profiles and phone numbers are not scraped
  • Media URLs may expire — Telegram CDN URLs have limited validity; download media promptly
  • View counts approximate — Telegram shows rounded numbers (e.g., "1.2K" instead of "1,234")
  • Embed strategy is slower — fetching one message at a time is 10-20x slower than the preview method
  • No real-time streaming — this is a batch scraper, not a live monitor
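The rounded counts mentioned above ("1.2K" instead of "1,234") can be converted back to approximate integers. A small helper, assuming Telegram's K/M suffix convention:

```python
def parse_count(text: str) -> int:
    """Convert Telegram's abbreviated counts ('1.2K', '842K', '3M')
    into approximate integers."""
    text = text.strip().replace(",", "")
    multipliers = {"K": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    suffix = text[-1].upper() if text else ""
    if suffix in multipliers:
        return int(float(text[:-1]) * multipliers[suffix])
    return int(float(text))
```

Note the result is only as precise as the rounded display value: "1.2K" parses to 1200 even if the true count was 1,234.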

Pricing (Pay Per Event)

This actor uses Apify's pay-per-event pricing model. You are charged $0.003 for each message successfully scraped and saved to the dataset. Channel metadata extraction is free.

Example costs:

  • 100 messages = $0.30
  • 1,000 messages = $3.00
  • 10,000 messages = $30.00
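Budgeting a run is simple multiplication; a tiny estimator (illustrative only, using the per-message rate stated above):

```python
PRICE_PER_MESSAGE = 0.003  # USD per scraped message, per the pricing above

def estimate_cost(message_count: int) -> float:
    """Estimated run cost in USD for a given number of scraped messages."""
    return round(message_count * PRICE_PER_MESSAGE, 2)
```

Remember that channel metadata extraction is free, so metadata-only runs cost nothing.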

This actor scrapes only publicly accessible Telegram content through web endpoints. No authentication bypass or private data access is performed. Users are responsible for ensuring their use of this tool complies with Telegram's Terms of Service and all applicable laws and regulations. This tool is designed for legitimate research, monitoring, and analysis purposes.

Integration — Python

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

# Input fields match the schema documented above
run = client.actor("sovereigntaylor/telegram-scraper").call(run_input={
    "channelUrls": ["https://t.me/durov"],
    "maxMessages": 50
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("text", ""), item.get("views"))

Integration — JavaScript

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

// Input fields match the schema documented above
const run = await client.actor('sovereigntaylor/telegram-scraper').call({
    channelUrls: ['https://t.me/durov'],
    maxMessages: 50
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => console.log(item.text, item.views));