⚖️ Campaign Finance Scraper
Track federal campaign filings and lobbying activity to ensure compliance. Extract docket numbers, filing dates, and direct URLs for your watchlist.

Pricing

from $10.00 / 1,000 results

Rating

0.0 (0)

Developer

太郎 山田

Maintained by Community

Actor stats

Bookmarked: 0
Total users: 2
Monthly active users: 1
Last modified: 11 days ago


Campaign Finance & Lobbying Digest | FEC + LDA Watch

Monitor official FEC OpenFEC committee reports and LDA.gov filings with summary-first output — one digest row per committee or lobbying watch feed, plus change detection, action-needed flags, and nested evidence.

Store Quickstart

Run this actor with your target input. Results appear in the Apify Dataset and can be piped to webhooks for real-time delivery. Use dryRun to validate before committing to a schedule.

Key Features

  • 🏛️ Government-sourced — Pulls directly from official agency feeds — no third-party aggregators
  • ⏱️ Timely digests — Daily/weekly rollups of new filings, rulings, or actions
  • 🔍 Keyword watchlists — Flag items matching your compliance/legal watch terms
  • 📊 Structured metadata — Agency, date, docket, document type, link — all dataset-ready
  • 📡 Webhook alerts — Push to legal/compliance teams the moment new items match watchlist

Use Cases

| Who | Why |
| --- | --- |
| Developers | Automate recurring data fetches without building custom scrapers |
| Data teams | Pipe structured output into analytics warehouses |
| Ops teams | Monitor changes via webhook alerts |
| Product managers | Track competitor/market signals without engineering time |

Input

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| feeds | array | prefilled | One digest row is emitted per configured feed. Use lda_filings for anonymous/no-key quickstarts. Use fec_committee when |
| watchTerms | string | | Comma- or newline-separated terms. Matching committees, clients, issues, or agencies elevate digests to action_needed. |
| lookbackDays | integer | 45 | Default lookback window used when a feed-specific override is not provided. |
| maxEvidencePerFeed | integer | 25 | Upper bound on normalized evidence items retained for each feed. |
| maxPagesPerFeed | integer | 2 | Upper bound on paginated API requests per feed during one run. |
| delivery | string | "dataset" | dataset writes digest rows to the Apify dataset. webhook POSTs the digest payload to webhookUrl. |
| webhookUrl | string | | POST target used when delivery=webhook. |
| datasetMode | string | "all" | Choose whether dataset/webhook delivery includes all digests, only action-needed digests, or only feeds with new evidence. |
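The exact normalization the actor applies to watchTerms is not documented; as a mental model, a comma- or newline-separated string might be split and deduplicated roughly like this (the parse_watch_terms helper is hypothetical, not part of the actor):

```python
def parse_watch_terms(raw: str) -> list[str]:
    """Split a comma- or newline-separated watchTerms string into clean,
    case-insensitively deduplicated terms. The actor's internal handling
    may differ (e.g. fuzzy matching against committee names)."""
    terms: list[str] = []
    for chunk in raw.replace("\n", ",").split(","):
        term = chunk.strip()
        if term and term.lower() not in (t.lower() for t in terms):
            terms.append(term)
    return terms

print(parse_watch_terms("Solar, Wind\nBattery, solar"))
# → ['Solar', 'Wind', 'Battery']
```

Either separator style (or a mix) resolves to the same term list, so pick whichever is easier to maintain in your input JSON.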

Input Example

{
  "feeds": [
    {
      "id": "google-lobbying",
      "name": "Google Lobbying Filings",
      "source": "lda_filings",
      "clientName": "Google"
    }
  ],
  "lookbackDays": 45,
  "maxEvidencePerFeed": 25,
  "maxPagesPerFeed": 2,
  "delivery": "dataset",
  "datasetMode": "all",
  "snapshotKey": "campaign-finance-lobbying-digest-state",
  "requestTimeoutSeconds": 30,
  "notifyOnNoNew": true,
  "dryRun": false
}
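If you watch several clients, the feeds array does not have to be hand-written. A minimal sketch of generating lda_filings feed entries programmatically (the make_lda_feed helper is hypothetical; the field names id, name, source, and clientName come from the input example above):

```python
import json
import re

def make_lda_feed(client: str) -> dict:
    """Build one lda_filings feed entry using the documented field names."""
    # Derive a URL-safe feed id from the client name, e.g. "Acme Corp" -> "acme-corp".
    slug = re.sub(r"[^a-z0-9]+", "-", client.lower()).strip("-")
    return {
        "id": f"{slug}-lobbying",
        "name": f"{client} Lobbying Filings",
        "source": "lda_filings",
        "clientName": client,
    }

run_input = {
    "feeds": [make_lda_feed(c) for c in ["Google", "Acme Corp"]],
    "lookbackDays": 45,
    "delivery": "dataset",
    "dryRun": True,  # validate configuration first, as the Quickstart suggests
}
print(json.dumps(run_input, indent=2))
```

Running once with dryRun enabled lets you confirm the generated feed list before committing it to a schedule.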

Output

| Field | Type |
| --- | --- |
| meta | object |
| errors | array |
| digests | array |
| digests[].feedId | string |
| digests[].feedName | string |
| digests[].source | string |
| digests[].checkedAt | timestamp |
| digests[].status | string |
| digests[].newEvidenceCount | number |
| digests[].totalEvidenceCount | number |
| digests[].latestEvidenceAt | timestamp |
| digests[].changedSinceLastRun | boolean |
| digests[].changeReasons | array |
| digests[].actionNeeded | boolean |
| digests[].recommendedAction | string |
| digests[].trackedEntity | object |
| digests[].summaryMetrics | object |
| digests[].signalTags | array |
| digests[].watchTermHits | array |
| digests[].highlights | array |
| digests[].evidence | array |
| digests[].error | null |
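Downstream, the actionNeeded and changedSinceLastRun flags make triage straightforward: keep only the digests that need a human look. A sketch using the field names from the table above (the triage helper itself is hypothetical):

```python
def triage(digests: list[dict]) -> list[str]:
    """Return one summary line per digest that is flagged action-needed
    or changed since the last run, using the documented digest fields."""
    lines = []
    for d in digests:
        if d.get("actionNeeded") or d.get("changedSinceLastRun"):
            lines.append(
                f"{d['feedName']} [{d['source']}]: "
                f"{d.get('newEvidenceCount', 0)} new, "
                f"reasons={d.get('changeReasons', [])}"
            )
    return lines

# Shortened sample rows shaped like the output schema above.
sample = [
    {"feedName": "Clean Energy PAC", "source": "fec_committee",
     "actionNeeded": True, "newEvidenceCount": 1, "changeReasons": ["new_filing"]},
    {"feedName": "Quiet Feed", "source": "lda_filings",
     "actionNeeded": False, "changedSinceLastRun": False},
]
print(triage(sample))
```

Quiet feeds drop out entirely, so a daily schedule only pages your team when something actually moved.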

Output Example

{
  "meta": {
    "generatedAt": "2026-04-06T11:48:58.577Z",
    "now": "2026-04-25T00:00:00.000Z",
    "feedCount": 2,
    "totalEvidenceCount": 2,
    "newEvidenceCount": 2,
    "actionNeededCount": 2,
    "errorCount": 0,
    "snapshot": {
      "key": "campaign-finance-sample",
      "loadedFrom": "local",
      "savedTo": "local"
    },
    "warnings": [],
    "executiveSummary": {
      "overallStatus": "action_needed",
      "brief": "2 feed(s) require review based on new filings, amendments, or watch-term hits.",
      "actionItems": [
        "Review 2 action-needed feed(s): Clean Energy PAC, Solar Lobbying Feed",
        "Assess watch-term hits: Solar",
        "2 new filing(s) arrived — capture material committee or lobbying changes before the next cycle review"
      ],
      "watchTermHits": [
        {
          "term": "Solar",
          "evidenceId": "lda:lda-1002",
          "title": "LD-1 Amendment • Policy Advocates LLC / Solar Storage Coalition",
          "filedAt": "2026-04-18T00:00:00.000Z",
          "target": "Policy Advocates LLC / Solar Storage Coalition",
          "url": "https://lda.senate.gov/filings/public/lda-1002"
        }
      ],
      "feedStatuses": [
        {
          "feedId": "clean-energy-pac",
          "feedName": "Clean Energy PAC",
          "source": "fec_committee",
          "status": "action_needed",
          "newEvidenceCount": 1
        }
      ]
    }
  }
}

API Usage

Run this actor programmatically using the Apify API. Replace YOUR_API_TOKEN with your token from Apify Console → Settings → Integrations.

cURL

curl -X POST "https://api.apify.com/v2/acts/taroyamada~campaign-finance-lobbying-digest/run-sync-get-dataset-items?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "feeds": [
      {
        "id": "google-lobbying",
        "name": "Google Lobbying Filings",
        "source": "lda_filings",
        "clientName": "Google"
      }
    ],
    "lookbackDays": 45,
    "maxEvidencePerFeed": 25,
    "maxPagesPerFeed": 2,
    "delivery": "dataset",
    "datasetMode": "all",
    "snapshotKey": "campaign-finance-lobbying-digest-state",
    "requestTimeoutSeconds": 30,
    "notifyOnNoNew": true,
    "dryRun": false
  }'

Python

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("taroyamada/campaign-finance-lobbying-digest").call(run_input={
    "feeds": [
        {
            "id": "google-lobbying",
            "name": "Google Lobbying Filings",
            "source": "lda_filings",
            "clientName": "Google"
        }
    ],
    "lookbackDays": 45,
    "maxEvidencePerFeed": 25,
    "maxPagesPerFeed": 2,
    "delivery": "dataset",
    "datasetMode": "all",
    "snapshotKey": "campaign-finance-lobbying-digest-state",
    "requestTimeoutSeconds": 30,
    "notifyOnNoNew": True,
    "dryRun": False
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

JavaScript / Node.js

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });
const run = await client.actor('taroyamada/campaign-finance-lobbying-digest').call({
  "feeds": [
    {
      "id": "google-lobbying",
      "name": "Google Lobbying Filings",
      "source": "lda_filings",
      "clientName": "Google"
    }
  ],
  "lookbackDays": 45,
  "maxEvidencePerFeed": 25,
  "maxPagesPerFeed": 2,
  "delivery": "dataset",
  "datasetMode": "all",
  "snapshotKey": "campaign-finance-lobbying-digest-state",
  "requestTimeoutSeconds": 30,
  "notifyOnNoNew": true,
  "dryRun": false
});
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);

Tips & Limitations

  • Run daily for active watchlists; weekly for passive monitoring.
  • Webhook delivery works well for compliance team Slack channels — include docket URL for 1-click access.
  • Use watchTerms generously — false positives are cheap to triage, false negatives miss filings.
  • Pair with regulatory-change-monitor for cross-agency coverage.
  • Archive Dataset rows weekly for long-term compliance evidence retention.
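For the Slack tip above, a digest row can be turned into message lines with one link per evidence item. A sketch under the assumption that evidence items carry title and url fields, as in the output example (the slack_lines helper and the sample URL are hypothetical):

```python
def slack_lines(digest: dict) -> list[str]:
    """Format an action-needed digest into Slack-markup lines with
    1-click filing links, using Slack's <url|text> link syntax."""
    lines = [f"*{digest['feedName']}*: {digest.get('recommendedAction', 'review new filings')}"]
    for ev in digest.get("evidence", []):
        title = ev.get("title", "filing")
        url = ev.get("url")
        lines.append(f"• <{url}|{title}>" if url else f"• {title}")
    return lines

digest = {
    "feedName": "Clean Energy PAC",
    "recommendedAction": "Review new FEC filing",
    "evidence": [{"title": "Form 3X Quarterly Report", "url": "https://example.com/filing"}],
}
print("\n".join(slack_lines(digest)))
```

Wire this into whatever receives the actor's webhook POST, and each alert lands in the channel with the filing one click away.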

FAQ

How far back does history go?

This actor monitors forward-only — new items since first run. For historical data, use the agency's own search tool.

What timezones are used?

All timestamps are UTC. Use your downstream pipeline to convert to agency-local time if needed.

Does it translate non-English content?

No — original language is preserved. Use downstream translation services if needed.

Is the data official?

Yes — sourced directly from official government websites and feeds. Not a third-party aggregator.

Can I use this for legal research?

For alerting and monitoring, yes. For litigation research, cross-verify with primary sources (agency websites) — this actor is a monitoring tool, not a legal database.

This actor is part of the Government & Regulatory cluster of related Apify tools.

Cost

Pay Per Event:

  • actor-start: $0.01 (flat fee per run)
  • dataset-item: $0.003 per output item

Example: 1,000 items = $0.01 + (1,000 × $0.003) = $3.01

No subscription required — you only pay for what you use.