🏛️ Federal Register Scraper

Pricing

from $9.00 / 1,000 results


Extract daily government filings, proposed rules, and agency decisions from the Federal Register to build custom regulatory watchlists.


Rating

0.0 (0 reviews)

Developer

太郎 山田 (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 2
  • Monthly active users: 1
  • Last modified: 11 days ago

Federal Register Digest | Agency Rule & Notice Monitor

Transform how your organization tracks US government activity with this dedicated Federal Register monitor. Navigating federal regulatory feeds manually is a massive drain on resources, but this scraper allows you to automate the entire extraction process. By targeting official government websites directly, the tool pulls down daily or weekly updates regarding newly proposed rules, final agency decisions, and critical public notices.

Designed for data teams, policy analysts, and operations managers, this tool lets you set up highly specific watchlist parameters. You can run the scraper on a schedule to constantly search for exact keywords, tracking specific agencies or monitoring high-priority dockets. When the data is scraped, it strips away the unstructured noise of the web pages and delivers pristine, machine-readable results. Every execution captures crucial details: the exact filing date, the responsible agency, unique docket numbers, the specific document type, and raw source URLs.

Integrating this API into your existing workflows means you can automatically push real-time alerts to Slack, populate analytics warehouses, or power custom compliance tools. If you need a reliable method to extract hundreds of daily regulatory announcements or want to compile a deep historical archive of federal rules, this monitor ensures you have continuous, programmatic access to the data that matters most. Avoid missing crucial compliance deadlines by letting the scraper handle the heavy lifting.

Store Quickstart

Run this actor with your target input. Results appear in the Apify Dataset and can be piped to webhooks for real-time delivery. Use dryRun to validate before committing to a schedule.
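The dryRun check mentioned above can be sketched as follows. This is a minimal sketch, not the actor's documented client code: the shape of a `feeds` entry (`id`, `agencySlug`) is an assumption inferred from the Input and Output tables below, so check the actor's input schema before relying on it.

```python
import os

# Minimal validation input: dryRun lets you check feed configuration
# before committing to a schedule. The feeds-entry keys here are
# assumptions inferred from this page's Input/Output tables.
validation_input = {
    "feeds": [{"id": "epa-watch", "agencySlug": "environmental-protection-agency"}],
    "lookbackDays": 7,
    "dryRun": True,
}

# Only attempt a live call when a token is available.
if os.environ.get("APIFY_TOKEN"):
    from apify_client import ApifyClient

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor("taroyamada/federal-register-digest").call(run_input=validation_input)
    print(run["status"])
```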

Key Features

  • 🏛️ Government-sourced — Pulls directly from official agency feeds — no third-party aggregators
  • ⏱️ Timely digests — Daily/weekly rollups of new filings, rulings, or actions
  • 🔍 Keyword watchlists — Flag items matching your compliance/legal watch terms
  • 📊 Structured metadata — Agency, date, docket, document type, link — all dataset-ready
  • 📡 Webhook alerts — Push to legal/compliance teams the moment new items match watchlist

Use Cases

| Who | Why |
| --- | --- |
| Developers | Automate recurring data fetches without building custom scrapers |
| Data teams | Pipe structured output into analytics warehouses |
| Ops teams | Monitor changes via webhook alerts |
| Product managers | Track competitor/market signals without engineering time |

Input

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `feeds` | array | required | One entry per agency/topic watch target. Each feed produces one summary digest row. Set `agencySlug` and `documentTypes` to scope each feed. |
| `watchTerms` | string | | Keywords, company names, or regulatory topics to flag in document titles and abstracts. Matching documents receive a watch-term hit. |
| `lookbackDays` | integer | 7 | Fetch documents published within this many days. Use 7–14 for recurring scheduled runs; widen to 30+ for initial discovery. |
| `maxDocsPerFeed` | integer | 50 | Upper bound on documents fetched per feed per run. Increase for broad discovery; keep low (50) for fast recurring digests. |
| `maxPagesPerFeed` | integer | 5 | Hard page cap per feed to prevent runaway pagination. Each page fetches up to 100 documents. |
| `delivery` | string | `"dataset"` | `dataset` stores results in the Apify dataset. `webhook` posts the digest JSON to `webhookUrl`. |
| `webhookUrl` | string | | POST target for the digest payload. Leave empty for dataset delivery. |
| `datasetMode` | string | `"all"` | `all` emits every feed digest row. `action_needed` emits only feeds with watch-term hits. `new_only` emits only feeds with new documents. |
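A fuller run input built from the fields above can be sketched in Python. The top-level keys come straight from the Input table; the shape of each `feeds` entry (`id`, `name`, `agencySlug`, `documentTypes`) is an assumption inferred from the digest output fields (`feedId`, `feedName`, `agencySlugs`, `documentTypes`), so verify it against the actor's input schema.

```python
# Sketch of a run input: one EPA feed scoped to rules and proposed rules,
# with watch terms and action_needed filtering. Feed-entry keys are
# assumptions inferred from the Output table, not a documented schema.
run_input = {
    "feeds": [
        {
            "id": "epa-rulemaking",
            "name": "EPA rules and proposed rules",
            "agencySlug": "environmental-protection-agency",
            "documentTypes": ["RULE", "PRORULE"],
        }
    ],
    "watchTerms": "climate, greenhouse",
    "lookbackDays": 7,
    "maxDocsPerFeed": 50,
    "maxPagesPerFeed": 5,
    "delivery": "dataset",
    "datasetMode": "action_needed",  # emit only feeds with watch-term hits
}
```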

Input Example

{
    "lookbackDays": 7,
    "maxDocsPerFeed": 50,
    "maxPagesPerFeed": 5,
    "delivery": "dataset",
    "datasetMode": "all",
    "snapshotKey": "federal-register-digest-state",
    "federalRegisterApiUrl": "https://www.federalregister.gov/api/v1/documents.json",
    "requestTimeoutSeconds": 30,
    "notifyOnNoNew": true,
    "dryRun": false
}

Output

| Field | Type |
| --- | --- |
| `meta` | object |
| `errors` | array |
| `digests` | array |
| `digests[].feedId` | string |
| `digests[].feedName` | string |
| `digests[].agencySlugs` | array |
| `digests[].documentTypes` | array |
| `digests[].checkedAt` | timestamp |
| `digests[].status` | string |
| `digests[].newDocCount` | number |
| `digests[].totalDocCount` | number |
| `digests[].changedSinceLastRun` | boolean |
| `digests[].actionNeeded` | boolean |
| `digests[].recommendedAction` | string |
| `digests[].signalTags` | array |
| `digests[].watchTermHits` | array |
| `digests[].topDocTypes` | object |
| `digests[].documents` | array |
| `digests[].error` | null |
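Downstream code typically filters these rows for feeds that need attention. The sketch below assumes each dataset item follows the field names above, and that entries in `watchTermHits` carry `term` and `title` keys as in the Output Example; adjust if the actual payload differs.

```python
def action_needed_digests(item):
    """Return (feedName, hit titles) pairs for digests flagged actionNeeded.

    `item` is one dataset row shaped like the Output table; the hit keys
    (term, title) are assumed from the Output Example on this page.
    """
    flagged = []
    for digest in item.get("digests", []):
        if digest.get("actionNeeded"):
            titles = [hit.get("title") for hit in digest.get("watchTermHits", [])]
            flagged.append((digest.get("feedName"), titles))
    return flagged

# Tiny sample item mimicking the documented shape.
sample = {
    "digests": [
        {"feedName": "EPA rules", "actionNeeded": True,
         "watchTermHits": [{"term": "climate", "title": "NAAQS for Particulate Matter"}]},
        {"feedName": "DOT notices", "actionNeeded": False, "watchTermHits": []},
    ]
}
print(action_needed_digests(sample))  # [('EPA rules', ['NAAQS for Particulate Matter'])]
```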

Output Example

{
    "meta": {
        "generatedAt": "2024-02-15T09:00:00.000Z",
        "now": "2024-02-15T09:00:00.000Z",
        "lookbackDays": 7,
        "feedCount": 2,
        "totalDocs": 5,
        "newDocs": 4,
        "watchTermHitCount": 2,
        "actionNeededCount": 1,
        "snapshot": {
            "key": "federal-register-digest-sample",
            "loadedFrom": "local",
            "savedTo": "local"
        },
        "warnings": [],
        "executiveSummary": {
            "overallStatus": "action_needed",
            "brief": "1 feed(s) have watch-term hits requiring review.",
            "watchTermHits": [
                {
                    "term": "climate",
                    "docNumber": "2024-02974",
                    "title": "National Ambient Air Quality Standards for Particulate Matter",
                    "docType": "RULE",
                    "primaryAgency": "Environmental Protection Agency",
                    "publicationDate": "2024-02-07T00:00:00.000Z",
                    "htmlUrl": "https://www.federalregister.gov/documents/2024/02/07/2024-02974/national-ambient-air-quality-standards"
                },
                {
                    "term": "greenhouse",
                    "docNumber": "2024-02345",
                    "title": "Greenhouse Gas Reporting: Additions and Revisions",
                    "docType": "PRORULE",
                    "primaryAgency": "Environmental Protection Agency",
                    "publicationDate": "2024-02-12T00:00:00.000Z",
                    "htmlUrl": "https://www.federalregister.gov/documents/2024/02/12/2024-02345/greenhouse-gas-reporting"
                }
            ],
            "actionItems": [

API Usage

Run this actor programmatically using the Apify API. Replace YOUR_API_TOKEN with your token from Apify Console → Settings → Integrations.

cURL

curl -X POST "https://api.apify.com/v2/acts/taroyamada~federal-register-digest/run-sync-get-dataset-items?token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{ "lookbackDays": 7, "maxDocsPerFeed": 50, "maxPagesPerFeed": 5, "delivery": "dataset", "datasetMode": "all", "snapshotKey": "federal-register-digest-state", "federalRegisterApiUrl": "https://www.federalregister.gov/api/v1/documents.json", "requestTimeoutSeconds": 30, "notifyOnNoNew": true, "dryRun": false }'

Python

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("taroyamada/federal-register-digest").call(run_input={
    "lookbackDays": 7,
    "maxDocsPerFeed": 50,
    "maxPagesPerFeed": 5,
    "delivery": "dataset",
    "datasetMode": "all",
    "snapshotKey": "federal-register-digest-state",
    "federalRegisterApiUrl": "https://www.federalregister.gov/api/v1/documents.json",
    "requestTimeoutSeconds": 30,
    "notifyOnNoNew": True,
    "dryRun": False,
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

JavaScript / Node.js

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('taroyamada/federal-register-digest').call({
    lookbackDays: 7,
    maxDocsPerFeed: 50,
    maxPagesPerFeed: 5,
    delivery: 'dataset',
    datasetMode: 'all',
    snapshotKey: 'federal-register-digest-state',
    federalRegisterApiUrl: 'https://www.federalregister.gov/api/v1/documents.json',
    requestTimeoutSeconds: 30,
    notifyOnNoNew: true,
    dryRun: false,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);

Tips & Limitations

  • Run daily for active watchlists; weekly for passive monitoring.
  • Webhook delivery works well for compliance team Slack channels — include docket URL for 1-click access.
  • Use watchTerms generously — false positives are cheap to triage; false negatives miss filings.
  • Pair with regulatory-change-monitor for cross-agency coverage.
  • Archive Dataset rows weekly for long-term compliance evidence retention.
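The Slack tip above can be sketched as a small formatter plus webhook POST. This is a sketch, not shipped code: the digest and hit keys (`feedName`, `newDocCount`, `title`, `htmlUrl`, `term`) follow the Output table and example on this page, and the Slack webhook URL is whatever your workspace issues.

```python
import json
from urllib import request


def slack_message_for_digest(digest):
    """Format one digest row (Output table shape) as a Slack incoming-webhook
    payload with one-click document links."""
    lines = [f"*{digest['feedName']}*: {digest['newDocCount']} new document(s)"]
    for hit in digest.get("watchTermHits", []):
        # Slack link syntax: <url|label>
        lines.append(f"- <{hit['htmlUrl']}|{hit['title']}> (term: {hit['term']})")
    return {"text": "\n".join(lines)}


def post_to_slack(webhook_url, payload):
    """POST the payload to a Slack incoming webhook (URL is workspace-specific)."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)
```

Wire `post_to_slack` to your actor's webhook receiver, or call it from a small script that iterates the dataset after each run.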

FAQ

How far back does history go?

This actor monitors forward-only — new items since first run. For historical data, use the agency's own search tool.

What timezones are used?

All timestamps are UTC. Use your downstream pipeline to convert to agency-local time if needed.
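A downstream conversion like that can be done with the standard library alone; the timestamp below is taken from the Output Example, and US Eastern is just an illustrative target zone.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Convert a UTC publicationDate from the output into US Eastern time.
# fromisoformat() on Python < 3.11 does not accept a trailing "Z",
# so rewrite it as an explicit UTC offset first.
ts = "2024-02-07T00:00:00.000Z"
utc = datetime.fromisoformat(ts.replace("Z", "+00:00"))
eastern = utc.astimezone(ZoneInfo("America/New_York"))
print(eastern.isoformat())  # 2024-02-06T19:00:00-05:00
```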

Does it translate non-English content?

No — original language is preserved. Use downstream translation services if needed.

Is the data official?

Yes — sourced directly from official government websites and feeds. Not a third-party aggregator.

Can I use this for legal research?

For alerting and monitoring, yes. For litigation research, cross-verify with primary sources (agency websites) — this actor is a monitoring tool, not a legal database.


Cost

Pay Per Event:

  • actor-start: $0.01 (flat fee per run)
  • dataset-item: $0.003 per output item

Example: 1,000 items = $0.01 + (1,000 × $0.003) = $3.01
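That arithmetic generalizes to any run size; a one-liner using the two rates listed above:

```python
def run_cost(items, actor_start=0.01, per_item=0.003):
    """Estimate pay-per-event cost in USD: flat start fee plus per-item charge.

    Rates default to the ones listed above; round to cents for display."""
    return round(actor_start + items * per_item, 2)


print(run_cost(1000))  # 3.01
```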

No subscription required — you only pay for what you use.

⭐ Was this helpful?

If this actor saved you time, please leave a ★ rating on Apify Store. It takes 10 seconds, helps other developers discover it, and keeps updates free.

Bug report or feature request? Open an issue on the Issues tab of this actor.