Google Ads Competitor Term Scraper
Under maintenance
Pricing
$10.00/month + usage
Generate a list of registered US businesses for a specific niche and location. Use it to create robust negative keyword lists of competitors for Google Ads campaigns. The Actor takes the following inputs:
- location, e.g., a city, state, or ZIP code
- category, e.g., "assisted living facilities"
US Business Finder
This Apify Actor finds registered US businesses for a given location and category. It uses Puppeteer to scrape data from the Better Business Bureau (BBB) website.
Input
The Actor accepts the following input parameters:
- location (required): location to search for businesses (e.g., a city, state, or ZIP code)
- category (required): type of business to search for (e.g., "assisted living facilities", "vocational school")
- maxResults (optional): maximum number of results to return (default: 100)
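For example, a minimal INPUT.json using these parameters could look like this (the values are illustrative):

```json
{
  "location": "Austin, TX",
  "category": "assisted living facilities",
  "maxResults": 50
}
```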
Output
The Actor outputs a dataset with the following structure for each business:
```json
{
  "name": "Business Name",
  "url": "Business URL",
  "categories": "Business Categories",
  "phone": "Phone Number",
  "address": "Business Address",
  "rating": "Rating",
  "isAccredited": true,
  "serviceArea": "Service Area"
}
```

The isAccredited field is a boolean (true or false).
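Since the listing's stated purpose is building negative keyword lists for Google Ads, here is a minimal sketch of how dataset records shaped like the output above could be turned into deduplicated negative keyword candidates. The cleaning rules are illustrative assumptions, not part of the Actor itself:

```javascript
// Turn scraped business records into Google Ads negative keyword candidates.
// Record shape matches the dataset structure above; cleaning rules are assumed.
function toNegativeKeywords(items) {
  const keywords = new Set();
  for (const item of items) {
    if (!item.name) continue;
    const cleaned = item.name
      .toLowerCase()
      .replace(/[^a-z0-9\s]/g, " ") // strip punctuation
      .replace(/\s+/g, " ")         // collapse whitespace
      .trim();
    if (cleaned) keywords.add(cleaned); // Set deduplicates repeated listings
  }
  return [...keywords];
}

// Example with records shaped like the output above:
const sample = [
  { name: "Sunrise Assisted Living, Inc.", isAccredited: true },
  { name: "Sunrise Assisted Living, Inc.", isAccredited: true }, // duplicate
  { name: "Golden Years Care" },
];
console.log(toNegativeKeywords(sample));
// → ["sunrise assisted living inc", "golden years care"]
```

The deduplicated, lowercased phrases can then be pasted into a Google Ads negative keyword list.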
Usage
You can run the Actor on the Apify platform or locally using the Apify CLI.
Running on Apify
1. Push your code to GitHub:

```shell
git init
git add .
git commit -m "Initial Apify Actor commit"
git remote add origin YOUR_GITHUB_REPO_URL
git push -u origin main
```

2. Go to the Apify Console.
3. Click "Develop new" > "From existing source code".
4. Choose GitHub as the source and select your repository.
5. Fill in the Actor details and click "Create".
6. Once the Actor is created, you can run it with your desired input parameters.
Running locally
- Install the Apify CLI:
npm install -g apify-cli
- Install dependencies:
npm install
- Run the Actor:
apify run --input-file INPUT.json
Customization
You can modify the scraping logic in src/main.js to extract additional data or change the filtering criteria as needed. The Actor uses Puppeteer for web scraping, which allows for extensive customization.
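As one example of such a customization, you might add a helper that pulls extra fields out of each result card. The selector-free, regex-based parsing below is only an illustrative sketch; the real Actor would use Puppeteer's page evaluation against the BBB page's actual selectors:

```javascript
// Illustrative helper for extracting a US phone number from a result card's
// HTML or text. A hypothetical addition to src/main.js, not the Actor's code.
function extractPhone(cardText) {
  // Matches e.g. "(555) 123-4567", "555-123-4567", "555.123.4567"
  const match = cardText.match(/\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}/);
  return match ? match[0] : null;
}

console.log(extractPhone("Call (555) 123-4567 today"));
// → "(555) 123-4567"
```

Helpers like this can be called from the page-processing code to enrich each dataset record.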
Limitations
The Actor respects the website's robots.txt file and follows ethical scraping practices, including:
- reasonable delays between requests
- user-agent identification
- graceful error handling
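A politeness delay of the kind described above can be sketched as follows; the exact timing values here are assumptions for illustration, not the Actor's actual settings:

```javascript
// Pick a randomized delay so requests don't arrive at a fixed rhythm.
// The 500–1500 ms window is an assumed example, not the Actor's real values.
function randomDelayMs(minMs = 500, maxMs = 1500) {
  return minMs + Math.floor(Math.random() * (maxMs - minMs + 1));
}

// Promise-based sleep, usable with async/await between requests.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Usage between paginated requests:
// await sleep(randomDelayMs());
```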
Performance
The Actor can handle large numbers of results by paginating through search results and processing them in batches. The maxResults parameter can be used to limit the number of results returned.
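The pagination-with-a-cap behavior can be sketched like this; fetchPage is a hypothetical stand-in for the Actor's real page-scraping function:

```javascript
// Keep requesting pages until maxResults is reached or a page comes back
// empty, then trim to the cap. fetchPage(page) is a hypothetical stand-in.
async function collectResults(fetchPage, maxResults = 100) {
  const results = [];
  for (let page = 1; results.length < maxResults; page++) {
    const batch = await fetchPage(page);
    if (batch.length === 0) break; // no more pages
    results.push(...batch);
  }
  return results.slice(0, maxResults); // enforce the cap exactly
}

// Example with a fake page source returning 10 items per page for 3 pages:
const fakeFetch = async (page) =>
  page <= 3 ? Array.from({ length: 10 }, (_, i) => `item-${page}-${i}`) : [];

collectResults(fakeFetch, 25).then((r) => console.log(r.length)); // → 25
```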