⚖️ Campaign Finance Scraper
Track federal campaign filings and lobbying activity to ensure compliance. Extract docket numbers, filing dates, and direct URLs for your watchlist.
Pricing
from $10.00 / 1,000 results
Rating: 0.0 (0 reviews)
Developer: Taro Yamada
Actor stats:
- Bookmarked: 0
- Total users: 2
- Monthly active users: 1
- Last modified: 11 days ago
Campaign Finance & Lobbying Digest | FEC + LDA Watch
Monitor official FEC OpenFEC committee reports and LDA.gov filings with summary-first output — one digest row per committee or lobbying watch feed, plus change detection, action-needed flags, and nested evidence.
Store Quickstart
Run this actor with your target input. Results appear in the Apify Dataset and can be piped to webhooks for real-time delivery. Use dryRun to validate before committing to a schedule.
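A minimal sketch of a validation-pass input in Python, using fields from the Input table below. With dryRun enabled, the run checks feed configuration without writing digest rows or firing webhooks (the exact dry-run behavior is our reading of the input description):

```python
# Minimal run input for a validation pass before scheduling.
run_input = {
    "feeds": [
        {
            "id": "google-lobbying",
            "name": "Google Lobbying Filings",
            "source": "lda_filings",  # anonymous/no-key quickstart source
            "clientName": "Google",
        }
    ],
    "lookbackDays": 45,
    "delivery": "dataset",
    "dryRun": True,  # validate before committing to a schedule
}

print(run_input["dryRun"])
```

Once a dry run completes cleanly, flip `dryRun` to `False` and attach the input to a daily or weekly schedule.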
Key Features
- 🏛️ Government-sourced — Pulls directly from official agency feeds — no third-party aggregators
- ⏱️ Timely digests — Daily/weekly rollups of new filings, rulings, or actions
- 🔍 Keyword watchlists — Flag items matching your compliance/legal watch terms
- 📊 Structured metadata — Agency, date, docket, document type, link — all dataset-ready
- 📡 Webhook alerts — Push to legal/compliance teams the moment new items match watchlist
Use Cases
| Who | Why |
|---|---|
| Developers | Automate recurring data fetches without building custom scrapers |
| Data teams | Pipe structured output into analytics warehouses |
| Ops teams | Monitor changes via webhook alerts |
| Product managers | Track competitor/market signals without engineering time |
Input
| Field | Type | Default | Description |
|---|---|---|---|
| feeds | array | prefilled | One digest row is emitted per configured feed. Use lda_filings for anonymous/no-key quickstarts. Use fec_committee when an OpenFEC API key is available. |
| watchTerms | string | — | Comma- or newline-separated terms. Matching committees, clients, issues, or agencies elevate digests to action_needed. |
| lookbackDays | integer | 45 | Default lookback window used when a feed-specific override is not provided. |
| maxEvidencePerFeed | integer | 25 | Upper bound on normalized evidence items retained for each feed. |
| maxPagesPerFeed | integer | 2 | Upper bound on paginated API requests per feed during one run. |
| delivery | string | "dataset" | dataset writes digest rows to the Apify dataset. webhook POSTs the digest payload to webhookUrl. |
| webhookUrl | string | — | POST target used when delivery=webhook. |
| datasetMode | string | "all" | Choose whether dataset/webhook delivery includes all digests, only action-needed digests, or only feeds with new evidence. |
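Since watchTerms accepts either comma- or newline-separated terms, it can help to normalize the value client-side before passing it in. A small sketch (the helper name is ours, not part of the actor):

```python
import re

def parse_watch_terms(raw: str) -> list[str]:
    """Split a comma- or newline-separated watch list into clean terms."""
    terms = re.split(r"[,\n]", raw)
    # Trim whitespace and drop empties, deduplicating case-insensitively
    seen, cleaned = set(), []
    for t in (s.strip() for s in terms):
        if t and t.lower() not in seen:
            seen.add(t.lower())
            cleaned.append(t)
    return cleaned

print(parse_watch_terms("Solar, Clean Energy PAC\nsolar"))
# → ['Solar', 'Clean Energy PAC']
```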
Input Example
```json
{
  "feeds": [
    {
      "id": "google-lobbying",
      "name": "Google Lobbying Filings",
      "source": "lda_filings",
      "clientName": "Google"
    }
  ],
  "lookbackDays": 45,
  "maxEvidencePerFeed": 25,
  "maxPagesPerFeed": 2,
  "delivery": "dataset",
  "datasetMode": "all",
  "snapshotKey": "campaign-finance-lobbying-digest-state",
  "requestTimeoutSeconds": 30,
  "notifyOnNoNew": true,
  "dryRun": false
}
```
Output
| Field | Type | Description |
|---|---|---|
| meta | object | Run-level metadata: counts, snapshot info, warnings, executive summary |
| errors | array | Feed-level errors encountered during the run |
| digests | array | One digest row per configured feed |
| digests[].feedId | string | Identifier of the configured feed |
| digests[].feedName | string | Display name of the feed |
| digests[].source | string | Feed source type (e.g. fec_committee, lda_filings) |
| digests[].checkedAt | timestamp | When the feed was checked (UTC) |
| digests[].status | string | Digest status (e.g. action_needed) |
| digests[].newEvidenceCount | number | Evidence items new since the last run |
| digests[].totalEvidenceCount | number | Evidence items retained for the feed |
| digests[].latestEvidenceAt | timestamp | Timestamp of the most recent evidence item |
| digests[].changedSinceLastRun | boolean | Whether the feed changed since the previous run |
| digests[].changeReasons | array | Reasons the feed was flagged as changed |
| digests[].actionNeeded | boolean | Whether the digest requires review |
| digests[].recommendedAction | string | Suggested next step for the feed |
| digests[].trackedEntity | object | Committee, client, or registrant being tracked |
| digests[].summaryMetrics | object | Aggregate metrics for the feed |
| digests[].signalTags | array | Tags describing detected signals |
| digests[].watchTermHits | array | Watch terms matched in this feed's evidence |
| digests[].highlights | array | Notable evidence items surfaced for review |
| digests[].evidence | array | Normalized evidence items (capped by maxEvidencePerFeed) |
| digests[].error | string \| null | Error message for this feed, or null on success |
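Downstream, digest rows can be triaged with a few lines of Python. A sketch using sample rows modeled on the Output Example (the rows here are illustrative, not real filings):

```python
# Given digest rows as returned in the dataset, keep only the feeds
# that need review and collect the watch terms that fired.
digests = [
    {"feedId": "clean-energy-pac", "actionNeeded": True,
     "watchTermHits": [{"term": "Solar"}], "newEvidenceCount": 1},
    {"feedId": "quiet-feed", "actionNeeded": False,
     "watchTermHits": [], "newEvidenceCount": 0},
]

needs_review = [d for d in digests if d["actionNeeded"]]
hit_terms = sorted({h["term"] for d in needs_review for h in d["watchTermHits"]})

print([d["feedId"] for d in needs_review])  # → ['clean-energy-pac']
print(hit_terms)                            # → ['Solar']
```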
Output Example
```json
{
  "meta": {
    "generatedAt": "2026-04-06T11:48:58.577Z",
    "now": "2026-04-25T00:00:00.000Z",
    "feedCount": 2,
    "totalEvidenceCount": 2,
    "newEvidenceCount": 2,
    "actionNeededCount": 2,
    "errorCount": 0,
    "snapshot": {
      "key": "campaign-finance-sample",
      "loadedFrom": "local",
      "savedTo": "local"
    },
    "warnings": [],
    "executiveSummary": {
      "overallStatus": "action_needed",
      "brief": "2 feed(s) require review based on new filings, amendments, or watch-term hits.",
      "actionItems": [
        "Review 2 action-needed feed(s): Clean Energy PAC, Solar Lobbying Feed",
        "Assess watch-term hits: Solar",
        "2 new filing(s) arrived — capture material committee or lobbying changes before the next cycle review"
      ],
      "watchTermHits": [
        {
          "term": "Solar",
          "evidenceId": "lda:lda-1002",
          "title": "LD-1 Amendment • Policy Advocates LLC / Solar Storage Coalition",
          "filedAt": "2026-04-18T00:00:00.000Z",
          "target": "Policy Advocates LLC / Solar Storage Coalition",
          "url": "https://lda.senate.gov/filings/public/lda-1002"
        }
      ],
      "feedStatuses": [
        {
          "feedId": "clean-energy-pac",
          "feedName": "Clean Energy PAC",
          "source": "fec_committee",
          "status": "action_needed",
          "newEvidenceCount": 1,
```
API Usage
Run this actor programmatically using the Apify API. Replace YOUR_API_TOKEN with your token from Apify Console → Settings → Integrations.
cURL
```shell
curl -X POST "https://api.apify.com/v2/acts/taroyamada~campaign-finance-lobbying-digest/run-sync-get-dataset-items?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "feeds": [
      { "id": "google-lobbying", "name": "Google Lobbying Filings", "source": "lda_filings", "clientName": "Google" }
    ],
    "lookbackDays": 45,
    "maxEvidencePerFeed": 25,
    "maxPagesPerFeed": 2,
    "delivery": "dataset",
    "datasetMode": "all",
    "snapshotKey": "campaign-finance-lobbying-digest-state",
    "requestTimeoutSeconds": 30,
    "notifyOnNoNew": true,
    "dryRun": false
  }'
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("taroyamada/campaign-finance-lobbying-digest").call(run_input={
    "feeds": [{"id": "google-lobbying", "name": "Google Lobbying Filings",
               "source": "lda_filings", "clientName": "Google"}],
    "lookbackDays": 45,
    "maxEvidencePerFeed": 25,
    "maxPagesPerFeed": 2,
    "delivery": "dataset",
    "datasetMode": "all",
    "snapshotKey": "campaign-finance-lobbying-digest-state",
    "requestTimeoutSeconds": 30,
    "notifyOnNoNew": True,  # Python booleans, not JSON true/false
    "dryRun": False,
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
```
JavaScript / Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });
const run = await client.actor('taroyamada/campaign-finance-lobbying-digest').call({
  feeds: [{ id: 'google-lobbying', name: 'Google Lobbying Filings',
            source: 'lda_filings', clientName: 'Google' }],
  lookbackDays: 45,
  maxEvidencePerFeed: 25,
  maxPagesPerFeed: 2,
  delivery: 'dataset',
  datasetMode: 'all',
  snapshotKey: 'campaign-finance-lobbying-digest-state',
  requestTimeoutSeconds: 30,
  notifyOnNoNew: true,
  dryRun: false,
});
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
Tips & Limitations
- Run daily for active watchlists; weekly for passive monitoring.
- Webhook delivery works well for compliance team Slack channels — include docket URL for 1-click access.
- Use `watchTerms` generously — false positives are cheap to triage; false negatives miss filings.
- Pair with `regulatory-change-monitor` for cross-agency coverage.
- Archive Dataset rows weekly for long-term compliance evidence retention.
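The webhook-to-Slack tip above can be sketched as a small formatter that turns a digest row into a Slack incoming-webhook payload with a 1-click filing link. Field names follow the Output table; the payload shape is Slack's standard `{"text": ...}` format:

```python
def digest_to_slack_payload(digest: dict) -> dict:
    """Format one digest row as a Slack incoming-webhook payload."""
    hits = ", ".join(h["term"] for h in digest.get("watchTermHits", [])) or "none"
    links = "\n".join(
        f"<{e['url']}|{e['title']}>"
        for e in digest.get("highlights", []) if e.get("url")
    )
    text = (
        f"*{digest['feedName']}*: {digest['newEvidenceCount']} new filing(s)\n"
        f"Watch-term hits: {hits}\n{links}"
    )
    return {"text": text}

payload = digest_to_slack_payload({
    "feedName": "Solar Lobbying Feed",
    "newEvidenceCount": 1,
    "watchTermHits": [{"term": "Solar"}],
    "highlights": [{"title": "LD-1 Amendment",
                    "url": "https://lda.senate.gov/filings/public/lda-1002"}],
})
```

POST the returned dict to your Slack webhook URL from whatever service receives the actor's webhook delivery.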
FAQ
How far back does history go?
This actor monitors forward-only — new items since first run. For historical data, use the agency's own search tool.
What timezones are used?
All timestamps are UTC. Use your downstream pipeline to convert to agency-local time if needed.
Does it translate non-English content?
No — original language is preserved. Use downstream translation services if needed.
Is the data official?
Yes — sourced directly from official government websites and feeds. Not a third-party aggregator.
Can I use this for legal research?
For alerting and monitoring, yes. For litigation research, cross-verify with primary sources (agency websites) — this actor is a monitoring tool, not a legal database.
Related Actors
Government & Regulatory cluster — explore related Apify tools:
- EPA Enforcement Digest | ECHO Compliance Risk Monitor — Monitor EPA ECHO all-media facility search, corporate compliance screener, and enforcement case feeds with one summary-first digest row per watched company, facility, or case feed.
- FDA Warning Letters Digest | Summary-First Feed — Monitor public FDA warning letters with one summary-first digest row per configured feed.
- Federal Register Digest | Agency Rule & Notice Monitor — Monitor Federal Register documents — rules, proposed rules, and notices — per configured agency feed.
- Government Contract Award Monitor | Award & Competitor Watch — Monitor public-sector contract award notices for new wins, notable awardees, incumbent recompetes, and competitor signals — one digest row per configured feed without brittle broad crawling.
- Grants.gov Funding Digest | Opportunity Watch & Signal Digest — Monitor Grants.gov funding opportunity feeds with one summary-first digest row per configured watch feed.
- NHTSA Vehicle Recall Digest | Recalls + Complaints Watch — Monitor official NHTSA vehicle recall and complaint endpoints for watched model-family, VIN, and manufacturer feeds.
- Product Safety Recall Digest | CPSC + openFDA Alerts — Monitor CPSC saferproducts.gov and openFDA recall feeds.
- Regulatory Change Monitor API — Monitor official regulator update feeds, government bulletin pages, and public compliance notices with one action-oriented digest row per monitored source.
- OFAC Sanctions Change Digest | SDN List Monitor — Monitor the OFAC SDN (Specially Designated Nationals) sanctions list for additions and removals.
- Tariff Trade Change Digest | Federal Register + HTS Monitor — Monitor U.S. tariff and trade changes via Federal Register and HTS sources.
- Treasury Fiscal Data Digest | Debt, Rates & Budget Monitor — Monitor U.S. Treasury Fiscal Data for debt, interest-rate, and budget series.
- USPTO Patent Monitor API | JSON + Webhook — Search and monitor US patent filings with multi-source fallback.
Cost
Pay Per Event:
- actor-start: $0.01 (flat fee per run)
- dataset-item: $0.003 per output item
Example: 1,000 items = $0.01 + (1,000 × $0.003) = $3.01
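The pricing formula above can be checked with a few lines. Rates are copied from the event list; this is a sketch for budgeting, not a billing API:

```python
ACTOR_START_USD = 0.01    # flat fee per run
DATASET_ITEM_USD = 0.003  # per output item

def run_cost(items: int, runs: int = 1) -> float:
    """Estimated pay-per-event cost for runs producing `items` dataset items."""
    return round(runs * ACTOR_START_USD + items * DATASET_ITEM_USD, 2)

print(run_cost(1000))  # → 3.01
```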
No subscription required — you only pay for what you use.