Google News Scraper
Scrape Google News search results for any topic. Get headlines, sources, publication dates, and article snippets from news.google.com. No API key required.
What This Actor Does
This Apify actor fetches structured news data from news.google.com and delivers clean, normalized JSON results ready for analysis, integration, or storage. It handles pagination, rate limiting, error recovery, and data normalization automatically, so you can focus on using the data rather than collecting it.
Use Cases
- Market research - Collect and analyze competitive data at scale for business intelligence
- Data pipeline automation - Feed structured data into your ETL pipelines, data warehouses, or analytics platforms
- Lead generation - Build targeted prospect lists from publicly available data sources
- Price monitoring - Track pricing changes and trends over time with scheduled runs
- Academic research - Gather large datasets for quantitative analysis and research papers
- Business intelligence - Create dashboards and reports from fresh, structured data
Input Parameters
- maxResults (integer) - Maximum number of results to return (default: 50)
- query (string) - News search query (default: artificial intelligence)
- proxy (object) - Proxy configuration for web requests
Example Input
```json
{
  "maxResults": 5,
  "query": "artificial intelligence"
}
```
Example Output
```json
{
  "headline": "example_headline",
  "source": "example_source",
  "publishedAt": "example_publishedAt",
  "url": "example_url",
  "snippet": "example_snippet",
  "scrapedAt": "2026-02-28T12:00:00.000Z"
}
```
Pricing
Pay-per-event pricing. You only pay for what you use:
- $0.10 per actor start
- $0.0003 per dataset item
Example: Fetching 100 items costs $0.10 + (100 × $0.0003) = $0.10 + $0.03 = $0.13 total.
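Using the event prices above, the cost of a run can be estimated as a flat start fee plus a per-item fee. A minimal sketch:

```python
# Pay-per-event prices from the pricing section above.
START_FEE_USD = 0.10   # $0.10 per actor start
ITEM_FEE_USD = 0.0003  # $0.0003 per dataset item

def estimate_run_cost(num_items: int) -> float:
    """Return the estimated cost in USD for one run producing num_items items."""
    return round(START_FEE_USD + num_items * ITEM_FEE_USD, 4)

print(estimate_run_cost(100))  # 100 items -> 0.13
```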
Integrations
This actor works with all Apify platform integrations:
- API - Call programmatically from any language
- Webhooks - Get notified when runs complete
- Zapier & Make - Connect to 5,000+ apps
- Google Sheets - Export directly to spreadsheets
- Slack - Send notifications to channels
- GitHub Actions - Trigger from CI/CD pipelines
Schedule runs hourly, daily, or weekly to build historical datasets automatically.
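As one way to call the actor programmatically, here is a sketch using Apify's `run-sync-get-dataset-items` REST endpoint (which starts a run and returns its dataset items in a single request) with only the Python standard library. The actor ID (`your-username/google-news-scraper`) and the token are placeholders, not real identifiers:

```python
import json
import urllib.request

API_BASE = "https://api.apify.com/v2"

def build_run_url(actor_id: str, token: str) -> str:
    """URL for the run-sync-get-dataset-items endpoint.

    Actor IDs use "~" instead of "/" in the URL path.
    """
    return (f"{API_BASE}/acts/{actor_id.replace('/', '~')}"
            f"/run-sync-get-dataset-items?token={token}")

def run_actor(actor_id: str, token: str, run_input: dict) -> list:
    """Start a run with the given input and return the resulting items."""
    req = urllib.request.Request(
        build_run_url(actor_id, token),
        data=json.dumps(run_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (placeholder actor ID and token -- substitute your own):
# items = run_actor("your-username/google-news-scraper", "<APIFY_TOKEN>",
#                   {"query": "artificial intelligence", "maxResults": 5})
# for item in items:
#     print(item["headline"], "-", item["source"])
```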
FAQ
How fresh is the data? Data is collected in real-time with every actor run. You always get the latest available data from the source.
Can I schedule regular data collection? Yes, use Apify Schedules to run this actor on any interval (hourly, daily, weekly) and automatically store results.
What happens if the source is temporarily unavailable? The actor includes retry logic and will attempt to recover from transient errors. If the source is down, the actor will report the error clearly.
How do I export the data? Results are stored in Apify datasets which can be exported as JSON, CSV, XML, or Excel. You can also access data via the Apify API.
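Dataset exports are also available over plain HTTP via the Apify API's dataset items endpoint, selecting the format with a query parameter. A sketch (the dataset ID in the usage comment is a placeholder, and the format names assume Apify's `format` parameter values, with `xlsx` for Excel):

```python
# Export formats assumed to be accepted by the Apify dataset items endpoint.
SUPPORTED_FORMATS = {"json", "csv", "xml", "xlsx"}

def dataset_export_url(dataset_id: str, fmt: str = "json") -> str:
    """Return the dataset items URL for a supported export format."""
    if fmt not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported export format: {fmt}")
    return f"https://api.apify.com/v2/datasets/{dataset_id}/items?format={fmt}"

# Example (placeholder dataset ID):
# dataset_export_url("AbCdEfG123", "csv")
```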
Tips
- Start with a small maxResults value to test your configuration before running large jobs
- Use proxy configuration for scraper-type actors to avoid rate limiting
- Schedule regular runs to build time-series datasets for trend analysis
- Combine multiple actors using Apify orchestration for complex data pipelines
