Registered Antennas scraper from keraies.eett.gr
Pricing
$5.00/month + usage
This Actor scrapes the registered antennas across Greek territory. It deliberately runs slow and light as an ethical effort not to overload the national government cloud: a full run takes almost 10 hours for approximately 24,000 antennas (as of 2025). Results are pushed every 50 antennas, so please be patient.
Developer: Michalis Paignigiannis
Greek Antenna Scraper
This Actor scrapes antenna installation data from the website of EETT, the Hellenic Telecommunications and Post Commission, at https://keraies.eett.gr.
What does this Actor do?
The Greek Antenna Scraper collects detailed information about telecommunications antennas installed across Greek municipalities, including:
- Antenna locations (address, coordinates)
- Company/operator information
- Permit status
- Technical specifications
- Associated documents and permits
Input
The Actor accepts the following input parameters:
- municipalities (array of strings, required): List of Greek municipalities to scrape. Municipality names must be in Greek (e.g., "Αθηναίων", "Θεσσαλονίκης")
- max_antennas (integer, optional): Maximum number of antennas to scrape. Default is 50. Useful for testing or limiting costs.
Example Input
{"municipalities": ["Αθηναίων","Θεσσαλονίκης","Αλεξανδρούπολης"],"max_antennas": 100}
Output
The Actor outputs a dataset with the following fields for each antenna:
- antenna_id: Unique identifier
- serial_number: Serial number
- code: Antenna code
- category: Antenna category
- company: Operating company
- address: Installation address
- municipality: Municipality name
- region: Region name
- position_code: Position code
- code_name: Code name
- latitude: Geographic latitude (WGS84)
- longitude: Geographic longitude (WGS84)
- permit_status: Current permit status
- measurements_eaee: EAEE measurements
- documents: Array of associated documents with protocol numbers and file URLs
Example Output
{"antenna_id": "12345","serial_number": "001","code": "ATH001","category": "Mobile Telephony","company": "Example Telecom","address": "Example Street 1, Athens","municipality": "Αθηναίων","region": "Αττική","position_code": 1001,"latitude": 37.9838,"longitude": 23.7275,"permit_status": "Approved","documents": [{"document_number": "DOC123","protocol_number": "PROT456","type": "Permit","file_url": "https://example.com/doc.pdf"}]}
Performance
- Speed: Approximately 1-2 seconds per antenna (including detailed information retrieval)
- Default run: With 50 antennas (default), the Actor completes in under 5 minutes
- Large runs: Scraping all municipalities can take many hours depending on the limit set; at 1-2 seconds each, the full set of roughly 24,000 antennas (2025) works out to about 7-13 hours, consistent with the ~10-hour full-run estimate above
Use Cases
- Telecommunications Research: Analyze antenna distribution across Greece
- Real Estate Analysis: Identify properties near telecommunications infrastructure
- Regulatory Compliance: Track permits and documentation
- Geographic Analysis: Map telecommunications coverage
- Market Research: Understand operator presence by region (see the sketch below)
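For the geographic and market-research use cases, a minimal sketch that summarizes operator presence by region, assuming a CSV export of the dataset with the fields listed above (the filename is a placeholder):

```python
import pandas as pd

# Load a dataset export downloaded from the Apify run -- filename is a placeholder.
df = pd.read_csv('antennas.csv')

# Keep only rows with valid WGS84 coordinates, then count antennas per operator and region.
df = df.dropna(subset=['latitude', 'longitude'])
presence = df.groupby(['region', 'company']).size().rename('antennas').reset_index()
print(presence.sort_values('antennas', ascending=False).head(10))
```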
Notes
- The Actor respects the source website with appropriate delays between requests
- Municipality names must be provided in Greek characters
- Some antennas may have incomplete data depending on source availability
- The Actor uses a two-stage process: first collecting antenna IDs, then fetching detailed information for each ID (sketched below)
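The snippet below only illustrates that two-stage shape with polite delays between requests; the endpoints, parameters, and response format are placeholders, not the actual keraies.eett.gr API:

```python
import time

import requests

# Hypothetical endpoints -- the real keraies.eett.gr URLs and parameters differ.
LIST_URL = 'https://keraies.eett.gr/search'      # placeholder
DETAIL_URL = 'https://keraies.eett.gr/antenna'   # placeholder
DELAY_SECONDS = 1.5  # polite delay between requests


def collect_antenna_ids(municipality: str) -> list[str]:
    """Stage 1: fetch the list of antenna IDs for one municipality."""
    response = requests.get(LIST_URL, params={'municipality': municipality}, timeout=30)
    response.raise_for_status()
    return [row['antenna_id'] for row in response.json()]


def fetch_antenna_details(antenna_id: str) -> dict:
    """Stage 2: fetch the full record for a single antenna."""
    response = requests.get(DETAIL_URL, params={'id': antenna_id}, timeout=30)
    response.raise_for_status()
    return response.json()


for municipality in ['Αθηναίων', 'Θεσσαλονίκης']:
    for antenna_id in collect_antenna_ids(municipality):
        details = fetch_antenna_details(antenna_id)
        print(details.get('antenna_id'), details.get('permit_status'))
        time.sleep(DELAY_SECONDS)  # respect the source site between detail requests
```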
Support
For issues or questions, please contact the Actor developer or use the Apify platform support.
Python BeautifulSoup template
A template for scraping data from web pages enqueued from a starting URL using Python. The URL of the web page is passed in via input, which is defined by the input schema. The template uses HTTPX to fetch the HTML of each page and Beautiful Soup to parse the data from it. Enqueued URLs are kept in the request queue, and the extracted data is stored in a dataset where you can easily access it.
Included features
- Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
- Input schema - define and easily validate a schema for your Actor's input
- Request queue - queues into which you can put the URLs you want to scrape
- Dataset - store structured data where each object stored has the same attributes
- HTTPX - library for making asynchronous HTTP requests in Python
- Beautiful Soup - a Python library for pulling data out of HTML and XML files
How it works
This code is a Python script that uses HTTPX and Beautiful Soup to scrape web pages and extract data from them. Here's a brief overview of how it works:
- The script reads the input from the Actor instance, which is expected to contain a `start_urls` key with a list of URLs to scrape and a `max_depth` key with the maximum depth of nested links to follow.
- The script enqueues the starting URLs in the default request queue and sets their depth to 0.
- The script processes the requests in the queue one by one, fetching each URL using HTTPX and parsing it with Beautiful Soup.
- If the depth of the current request is less than the maximum depth, the script looks for nested links on the page and enqueues them in the request queue with an incremented depth.
- The script extracts the desired data from the page (in this case, all the links) and pushes it to the default dataset using the `push_data` method of the Actor instance.
- The script catches any exceptions that occur during scraping and logs an error message using the `Actor.log.exception` method.

This code demonstrates how to use Python and the Apify SDK to scrape web pages and extract specific data from them. A condensed sketch of this flow follows.
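The sketch below condenses the flow described above using the dict-style request queue API of the Apify SDK for Python; exact signatures vary between SDK versions, so treat it as illustrative rather than the template's verbatim source:

```python
from urllib.parse import urljoin

import httpx
from apify import Actor
from bs4 import BeautifulSoup


async def main() -> None:
    async with Actor:
        # Read the input: start URLs and the maximum link depth to follow.
        actor_input = await Actor.get_input() or {}
        start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])
        max_depth = actor_input.get('max_depth', 1)

        # Stage the start URLs in the default request queue at depth 0.
        default_queue = await Actor.open_request_queue()
        for start_url in start_urls:
            await default_queue.add_request({'url': start_url['url'], 'userData': {'depth': 0}})

        # Process requests one by one until the queue is empty.
        while request := await default_queue.fetch_next_request():
            url = request['url']
            depth = request['userData']['depth']
            Actor.log.info(f'Scraping {url} ...')
            try:
                # Fetch the page with HTTPX and parse it with Beautiful Soup.
                async with httpx.AsyncClient() as client:
                    response = await client.get(url, follow_redirects=True)
                soup = BeautifulSoup(response.content, 'html.parser')

                # Below the depth limit, enqueue nested links with depth + 1.
                if depth < max_depth:
                    for link in soup.find_all('a'):
                        link_url = urljoin(url, link.attrs.get('href') or '')
                        if link_url.startswith(('http://', 'https://')):
                            await default_queue.add_request(
                                {'url': link_url, 'userData': {'depth': depth + 1}}
                            )

                # Extract the data (here: URL and page title) and push it to the dataset.
                await Actor.push_data({'url': url, 'title': soup.title.string if soup.title else None})
            except Exception:
                Actor.log.exception(f'Cannot extract data from {url}.')
            finally:
                # Mark the request handled so it is not retried.
                await default_queue.mark_request_as_handled(request)
```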
Resources
- BeautifulSoup Scraper
- Beautifulsoup Scraper tutorial
- Python tutorials in Academy
- Web scraping with Beautiful Soup and Requests
- Beautiful Soup vs. Scrapy for web scraping
- Integration with Make, GitHub, Zapier, Google Drive, and other apps
- Video guide on getting scraped data using Apify API
- Video introduction to Python SDK
- A short guide on how to build web scrapers using code templates
Getting started
For complete information see this article. In short, you will:
- Build the Actor
- Run the Actor
Pull the Actor for local development
If you would like to develop locally, you can pull the existing Actor from Apify console using Apify CLI:
- Install apify-cli:

  Using Homebrew:

  ```
  $ brew install apify-cli
  ```

  Using NPM:

  ```
  $ npm -g install apify-cli
  ```

- Pull the Actor by its unique `<ActorId>`, which is one of the following:

  - the unique name of the Actor to pull (e.g. "apify/hello-world")
  - or the ID of the Actor to pull (e.g. "E2jjCZBezvAZnX8Rb")

  You can find both by clicking on the Actor title at the top of the page, which will open a modal containing both the Actor's unique name and ID.

  This command will copy the Actor into the current directory on your local machine:

  ```
  $ apify pull <ActorId>
  ```
Documentation reference
To learn more about Apify and Actors, take a look at the following resources:
- Apify SDK for Python documentation
- Apify platform documentation