Free Dictionary Scraper
Definitions, phonetics, pronunciations, parts of speech, examples, synonyms.
Developer: Salman Bareesh
The actor returns clean, structured JSON records ready for analysis, automation, or integration with other tools.
What You Get
- Word definitions
- Pronunciation phonetics
- Example sentences
- Part of speech and etymology
All results are returned as structured JSON objects — no parsing or cleanup required.
Quick Start
Click Run with default settings — no configuration needed. The actor works out of the box.
```json
{
  "maxResults": 100,
  "searchQuery": ""
}
```
Or search for specific data:
```json
{
  "searchQuery": "example",
  "maxResults": 100
}
```
Input Options
| Field | Default | Description |
|---|---|---|
| searchQuery | "" | Keyword or phrase to filter results. Leave empty to browse all records. |
| maxResults | 100 | Maximum records to retrieve (1–10,000). Increase for bulk exports. |
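Inputs like these are typically normalized before a run starts. A minimal sketch in Python of how the two fields above could be validated; the function name and fallback behavior are illustrative, not the actor's actual source:

```python
def normalize_input(raw: dict) -> dict:
    """Apply the documented defaults and clamp maxResults to 1-10,000.

    Illustrative only -- mirrors the input table, not the actor's code.
    """
    search_query = str(raw.get("searchQuery", "") or "")
    try:
        max_results = int(raw.get("maxResults", 100))
    except (TypeError, ValueError):
        max_results = 100  # fall back to the documented default
    # Clamp to the documented range of 1-10,000.
    max_results = max(1, min(max_results, 10_000))
    return {"searchQuery": search_query, "maxResults": max_results}
```

With no input at all this reproduces the default payload from the Quick Start section.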
Output Format
Results are pushed to the Apify Dataset as individual JSON records. Each run also saves a summary to the Key-Value Store under OUTPUT:
```json
{
  "totalResults": 100,
  "fetchedAt": "2025-01-01T00:00:00.000Z"
}
```
Dataset records contain the raw structured data returned by the source, including fields like:
- Word definitions
- Pronunciation phonetics
- Example sentences
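The OUTPUT summary shown above can be reproduced for a local batch of records. A minimal sketch, assuming you have already downloaded the dataset items as a list of dicts; the helper name is illustrative:

```python
from datetime import datetime, timezone


def build_output_summary(records: list[dict]) -> dict:
    """Build a summary matching the documented OUTPUT shape:
    {"totalResults": ..., "fetchedAt": "...Z"}.

    Illustrative only -- not the actor's actual code.
    """
    fetched_at = (
        datetime.now(timezone.utc)
        .isoformat(timespec="milliseconds")
        .replace("+00:00", "Z")  # match the documented Z-suffixed timestamp
    )
    return {"totalResults": len(records), "fetchedAt": fetched_at}
```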
Use Cases
- Language learning apps
- Writing assistants
- Educational platforms
- Vocabulary builders
Pricing
$1.00 per 1,000 results — pay only for what you use. Pricing is based on the number of records pushed to the dataset.
| Results | Estimated Cost |
|---|---|
| 100 | $0.10 |
| 1,000 | $1.00 |
| 10,000 | $10.00 |
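The table above follows directly from the $1.00-per-1,000 rate. A small Python helper for estimating a run's cost before launching it; this is an illustration of the pricing formula, not an official billing tool:

```python
def estimate_cost_usd(result_count: int, rate_per_thousand: float = 1.00) -> float:
    """Estimate run cost: records pushed to the dataset, pro-rated
    at $1.00 per 1,000 results (per the documented pricing)."""
    if result_count < 0:
        raise ValueError("result_count must be non-negative")
    return round(result_count / 1000 * rate_per_thousand, 2)
```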
Notes
- Results are pushed to the dataset in real time as they are fetched
- The actor automatically retries failed requests up to 3 times
- Rate limiting is handled gracefully with built-in delays
- Run multiple times safely — each run creates a fresh dataset