Gofundme Campaign Scraper

Developer: Donny Nguyen · Maintained by Community

Scrape GoFundMe campaign pages to extract fundraising data, donor counts, campaign stories, organizer information, and recent donations. Monitor crowdfunding trends, track campaign performance, or build datasets of fundraising campaigns at scale.

What does GoFundMe Campaign Scraper do?

This actor takes a list of GoFundMe campaign URLs and extracts detailed information from each campaign page. It uses a headless browser with stealth capabilities to reliably access campaign data, including amounts raised, funding goals, donor counts, the full campaign story, and the most recent donations.

Use this scraper to:

  • Track fundraising progress across multiple campaigns over time
  • Analyze crowdfunding trends by extracting campaign data at scale
  • Monitor competitor campaigns for benchmarking and research
  • Build datasets of GoFundMe campaigns for academic or market research
  • Aggregate donation data to understand giving patterns and donor behavior

Input

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| `urls` | Array of strings | **Required.** List of GoFundMe campaign URLs to scrape. Each URL should point to an individual campaign page (e.g., `https://www.gofundme.com/f/campaign-name`). | `[]` |
| `maxResults` | Integer | Maximum number of campaign pages to scrape in a single run. Controls cost and execution time. | `100` |
| `useResidentialProxy` | Boolean | Enable residential proxies for improved success rates against anti-bot protection. Recommended for production use. | `true` |

Example Input

```json
{
  "urls": [
    "https://www.gofundme.com/f/example-campaign-1",
    "https://www.gofundme.com/f/example-campaign-2"
  ],
  "maxResults": 50,
  "useResidentialProxy": true
}
```
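Because only `/f/<slug>` pages are valid input, it can save a failed run to check the URL list locally first. A minimal pre-flight sketch in plain Python (not part of the actor itself), assuming campaign URLs follow the `https://www.gofundme.com/f/campaign-name` pattern shown above:

```python
import re

# Campaign pages follow the /f/<slug> pattern; other GoFundMe paths
# (discover pages, category listings, etc.) are not valid input.
CAMPAIGN_URL = re.compile(r"^https://www\.gofundme\.com/f/[\w-]+/?$")

def validate_urls(urls):
    """Split a candidate URL list into accepted and rejected entries."""
    accepted = [u for u in urls if CAMPAIGN_URL.match(u)]
    rejected = [u for u in urls if not CAMPAIGN_URL.match(u)]
    return accepted, rejected

run_input = {
    "urls": [
        "https://www.gofundme.com/f/example-campaign-1",
        "https://www.gofundme.com/discover",  # not a campaign page
    ],
    "maxResults": 50,
    "useResidentialProxy": True,
}

ok, bad = validate_urls(run_input["urls"])
```

Rejected URLs can be corrected or dropped before the input is submitted, rather than showing up later as error records in the dataset.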

Output

Each scraped campaign produces a JSON object with the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `url` | String | The GoFundMe campaign URL that was scraped |
| `title` | String | Campaign title or headline |
| `organizer` | String | Name of the campaign organizer |
| `raisedAmount` | Number | Total amount raised so far in USD |
| `fundingGoal` | Number | Campaign funding goal in USD |
| `donorCount` | Integer | Total number of donors or supporters |
| `category` | String | Campaign category (e.g., Medical, Emergency, Education) |
| `story` | String | Campaign story text (truncated to 3,000 characters) |
| `createdDate` | String | Date the campaign was created |
| `recentDonationsCount` | Integer | Number of recent donations extracted (up to 10) |
| `recentDonations` | Array | List of recent donations, each with name, amount, and time |
| `error` | String | Error code if scraping failed (e.g., 403, 429, REQUEST_FAILED) |
| `errorMessage` | String | Human-readable error description |
| `scrapedAt` | String | ISO 8601 timestamp of when the data was scraped |

Example Output

```json
{
  "url": "https://www.gofundme.com/f/example-campaign",
  "title": "Help John with Medical Expenses",
  "organizer": "Jane Smith",
  "raisedAmount": 15420,
  "fundingGoal": 25000,
  "donorCount": 312,
  "category": "Medical",
  "story": "John was recently diagnosed with...",
  "createdDate": "2025-11-15",
  "recentDonationsCount": 10,
  "recentDonations": [
    {
      "name": "Sarah K.",
      "amount": "$50",
      "time": "2 hours ago"
    },
    {
      "name": "Anonymous",
      "amount": "$100",
      "time": "5 hours ago"
    }
  ],
  "scrapedAt": "2026-02-11T12:00:00.000Z"
}
```
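The numeric fields combine directly in post-processing, e.g. to compute funding progress or sum recent donations. A small sketch over a record shaped like the example output (the `$`-string parsing assumes donation amounts stay in the `"$50"` format shown above):

```python
record = {
    "raisedAmount": 15420,
    "fundingGoal": 25000,
    "recentDonations": [
        {"name": "Sarah K.", "amount": "$50", "time": "2 hours ago"},
        {"name": "Anonymous", "amount": "$100", "time": "5 hours ago"},
    ],
}

def progress_pct(rec):
    """Percentage of the funding goal reached, rounded to one decimal."""
    return round(100 * rec["raisedAmount"] / rec["fundingGoal"], 1)

def donation_total(rec):
    """Sum recent donations; amounts arrive as strings like '$50'."""
    return sum(
        float(d["amount"].lstrip("$").replace(",", ""))
        for d in rec["recentDonations"]
    )

print(progress_pct(record))    # 61.7
print(donation_total(record))  # 150.0
```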

Proxy Configuration

This scraper supports both residential and datacenter proxies:

  • Residential proxies (default): Higher success rates against GoFundMe's anti-bot systems. Recommended for production scraping.
  • Datacenter proxies: Lower cost but may experience higher block rates. Suitable for small-scale testing.

Limitations

  • Requires valid GoFundMe campaign URLs as input (does not discover or search for campaigns)
  • Campaign story text is truncated to 3,000 characters to manage dataset size
  • Recent donations are limited to the 10 most recent visible on the page
  • Some campaigns may have restricted access or may have been removed
  • Rate limiting may occur with large numbers of requests in a short period

Tips for Best Results

  1. Use residential proxies for the most reliable scraping results
  2. Start with a small batch of URLs to verify output before scaling up
  3. Monitor error counts in the status messages to detect blocking patterns
  4. Space out large runs to avoid triggering rate limits
  5. Check the error field in results to identify campaigns that need re-scraping
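Tip 5 can be automated by partitioning results on the `error` field and turning the failed URLs into the input of a follow-up run. A sketch over plain result dicts, with field names matching the output schema above:

```python
def build_retry_input(results, max_results=100):
    """Collect URLs of failed items into a fresh run input."""
    failed = [r["url"] for r in results if r.get("error")]
    return {
        "urls": failed,
        "maxResults": min(max_results, len(failed)),
        "useResidentialProxy": True,  # recommended when retrying blocks
    }

# Example run results: one success, two failures worth re-scraping.
results = [
    {"url": "https://www.gofundme.com/f/ok-campaign"},
    {"url": "https://www.gofundme.com/f/blocked-campaign", "error": "403"},
    {"url": "https://www.gofundme.com/f/rate-limited", "error": "429"},
]

retry = build_retry_input(results)
```

Spacing the retry run out in time (per tip 4) also reduces the chance that the same URLs hit rate limits again.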