EPA Toxic Release Inventory Scraper

Scrape EPA Toxic Release Inventory (TRI) facility data. Get facility names, addresses, contacts, parent companies, and coordinates for 48,000+ regulated facilities. Filter by state with 49 data fields per record.

Pricing: from $7.00 / 1,000 results

Rating: 0.0 (0 reviews)

Developer: ParseForge (Maintained by Community)

Actor stats: 0 bookmarked · 2 total users · 1 monthly active user · last modified 12 hours ago


🌿 EPA Toxic Release Inventory Scraper

🚀 Collect facility records, toxic release quantities, and EPA program registrations for thousands of industrial facilities across all 50 U.S. states, at roughly 100 records per second.

🕒 Last updated: 2026-04-23

EPA TRI Scraper pulls environmental compliance data from the U.S. Environmental Protection Agency's Toxic Release Inventory. Each facility record includes 48+ fields: facility name, street address, city, state, ZIP, county, GPS coordinates, SIC code, parent company name, contact phone number, and EPA program details. You can choose from three datasets (TRI Facilities, Release Quantities, EPA Program Registrations) and filter by state.

Environmental researchers use this to map pollution sources and analyze chemical release patterns. Compliance officers monitor regulated facilities and their program enrollments. Journalists investigate which companies report toxic releases in specific communities. Real estate professionals assess environmental risk near properties. If you need structured EPA data without navigating clunky government portals, this actor delivers it in seconds.

Target: U.S. EPA Toxic Release Inventory
Use cases: pollution source mapping, environmental compliance monitoring, community health research, real estate risk assessment

📋 What it does

  • 🏭 Facility records. 48+ fields per facility including names, addresses, contacts, coordinates, and SIC codes.
  • ☣️ Release quantities. Toxic chemical release data reported by facilities across the country.
  • 📋 Program registrations. 50+ fields showing which EPA regulatory programs each facility participates in.
  • 📍 State filtering. Narrow results to any state using standard 2-letter codes.
  • 🏢 Corporate ownership. Parent company names behind individual facility locations.

Each record gives you a full compliance profile for one facility: where it is, what it does, who owns it, how to contact it, and its GPS coordinates for mapping.

💡 Why it matters: The EPA publishes this data through multiple web portals with limited search and export options. This actor pulls everything into one structured dataset, ready for analysis, mapping, or database import.


🎬 Full Demo

🚧 Coming soon: a 3-minute walkthrough showing how to go from sign-up to a downloaded dataset.


โš™๏ธ Input

InputTypeDefaultBehavior
maxItemsinteger10Maximum records to return. Free users are limited to 10. Paid users can set up to 1,000,000.
datasetstring"facilities"Which EPA dataset: facilities (48 fields), releases (7 fields), or programs (50 fields).
statestring-Filter by 2-letter state code (e.g. CA, TX, NY). Leave empty for all states.

Example: TRI facilities in California.

{
  "dataset": "facilities",
  "state": "CA",
  "maxItems": 100
}

Example: release quantities in Texas.

{
  "dataset": "releases",
  "state": "TX",
  "maxItems": 500
}

โš ๏ธ Good to Know: The TRI covers thousands of industrial facilities across all 50 states. The "facilities" dataset provides the broadest view with 48+ fields per record. State-level downloads typically complete in under a minute.


📊 Output

Each record contains 48+ fields (facilities dataset). Download as CSV, Excel, JSON, or XML.

🧾 Schema

| Field | Type | Example |
| --- | --- | --- |
| 🏭 FACILITY_NAME | string | "ACME CHEMICAL PLANT" |
| 📍 STREET_ADDRESS | string | "1234 INDUSTRIAL BLVD" |
| 🏙️ CITY_NAME | string | "LOS ANGELES" |
| 🗺️ STATE_ABBR | string | "CA" |
| 📮 ZIP_CODE | string | "90001" |
| 🏞️ COUNTY_NAME | string | "LOS ANGELES" |
| 📐 LATITUDE | number | 33.9425 |
| 📐 LONGITUDE | number | -118.2551 |
| 🏷️ PRIMARY_SIC_CODE | string | "2819" |
| 🏢 PARENT_CO_NAME | string | "ACME CORP" |
| 📞 FACILITY_PHONE | string | "2135551234" |
| 📅 scrapedAt | string | "2026-04-17T12:00:00.000Z" |

📦 Sample records
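The schema examples above combine into a record like this (illustrative values, not real EPA output):

```json
{
  "FACILITY_NAME": "ACME CHEMICAL PLANT",
  "STREET_ADDRESS": "1234 INDUSTRIAL BLVD",
  "CITY_NAME": "LOS ANGELES",
  "STATE_ABBR": "CA",
  "ZIP_CODE": "90001",
  "COUNTY_NAME": "LOS ANGELES",
  "LATITUDE": 33.9425,
  "LONGITUDE": -118.2551,
  "PRIMARY_SIC_CODE": "2819",
  "PARENT_CO_NAME": "ACME CORP",
  "FACILITY_PHONE": "2135551234",
  "scrapedAt": "2026-04-17T12:00:00.000Z"
}
```

A real record carries the remaining fields up to the 48+ advertised; run the Actor with a small maxItems to preview them.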


✨ Why choose this Actor

  • 🏭 48+ fields per facility. Names, addresses, contacts, coordinates, SIC codes, and parent companies.
  • ☣️ Three datasets. Facilities, release quantities, and EPA program registrations in one tool.
  • 📍 GPS coordinates. Latitude and longitude for mapping and spatial analysis.
  • 🗺️ State filtering. Narrow results instantly with 2-letter state codes.
  • ⚡ 100 records per second. Fast processing for large state-level downloads.
  • 🏢 Corporate ownership. Parent company names for tracking corporate environmental exposure.
  • 📊 Structured output. Ready for spreadsheets, databases, or GIS applications.

The EPA Toxic Release Inventory covers over 20,000 industrial facilities across the United States reporting on more than 650 toxic chemicals.


📈 How it compares to alternatives

| Approach | Cost | Coverage | Refresh | Setup |
| --- | --- | --- | --- | --- |
| ⭐ EPA TRI Scraper (this Actor) | $5 free credit, then pay-per-use | 3 datasets, 48+ fields | Live per run | ⚡ 2 min |
| EPA TRI Explorer website | Free | Full, limited export | Manual | Slow navigation |
| Bulk data downloads | Free | Full, large files | Annual updates | Technical setup |
| Third-party data providers | $500+/month | Varies | Quarterly | Days |

Pick this actor when you need filtered EPA facility data without navigating government portals or processing bulk download files.


🚀 How to use

  1. 📝 Sign up. Create a free account with $5 credit (takes 2 minutes).
  2. 🌐 Open the Actor. Go to the EPA TRI Scraper page on the Apify Store.
  3. 🎯 Set input. Choose your dataset, enter a state code, and set max items.
  4. 🚀 Run it. Click Start and let the Actor collect your data.
  5. 📥 Download. Grab your results in the Dataset tab as CSV, Excel, JSON, or XML.

⏱️ Total time from signup to downloaded dataset: 3-5 minutes. No coding required.


💼 Business use cases

🌍 Environmental Research

  • Map pollution sources by state and county
  • Analyze chemical release patterns across industries
  • Track facility density near waterways and populated areas
  • Build datasets for environmental impact studies
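A first step toward the mapping bullets above is a simple per-county tally of a downloaded dataset. A minimal sketch, using the STATE_ABBR and COUNTY_NAME fields from the output schema; the records here are made-up examples:

```python
# Sketch: count TRI facilities per county as a starting point for mapping
# pollution sources. Field names follow the output schema; these records
# are illustrative, not real EPA data.
from collections import Counter

records = [
    {"STATE_ABBR": "CA", "COUNTY_NAME": "LOS ANGELES"},
    {"STATE_ABBR": "CA", "COUNTY_NAME": "LOS ANGELES"},
    {"STATE_ABBR": "CA", "COUNTY_NAME": "KERN"},
]

by_county = Counter((r["STATE_ABBR"], r["COUNTY_NAME"]) for r in records)
for (state, county), n in by_county.most_common():
    print(f"{county}, {state}: {n} facilities")
```

The same grouping works at state level by keying on STATE_ABBR alone.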

โš–๏ธ Regulatory Compliance

  • Monitor regulated facilities and their EPA program enrollments
  • Identify facilities missing required program registrations
  • Track parent company exposure across multiple states
  • Build audit-ready compliance databases

๐Ÿ  Real Estate and Insurance

  • Assess environmental risk near properties and developments
  • Check for TRI facilities within a radius of investment sites
  • Evaluate contamination risk for property valuations
  • Build risk maps using facility GPS coordinates
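The radius check above can be sketched with the LATITUDE and LONGITUDE fields from the output schema and the standard haversine formula; the site and facility coordinates below are made up:

```python
# Sketch: flag TRI facilities within a radius of a site. The haversine
# formula is standard great-circle distance; the coordinates here are
# illustrative, not real facility locations.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ≈ 6371 km

site = (33.9500, -118.2500)  # hypothetical investment site
facilities = [
    {"FACILITY_NAME": "ACME CHEMICAL PLANT",
     "LATITUDE": 33.9425, "LONGITUDE": -118.2551},
]

radius_km = 5
nearby = [
    f["FACILITY_NAME"] for f in facilities
    if haversine_km(site[0], site[1], f["LATITUDE"], f["LONGITUDE"]) <= radius_km
]
print(nearby)
```

For large datasets, a spatial index (e.g. in a GIS tool) will be faster than the brute-force loop, but the idea is the same.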

📰 Journalism and Advocacy

  • Investigate which companies report toxic releases in your community
  • Track industry trends in chemical usage and disposal
  • Compare state-level environmental enforcement patterns
  • Build data-driven stories about pollution and public health


🌟 Beyond business use cases

Data like this powers more than commercial workflows. The same structured records support research, education, civic projects, and personal initiatives.

🎓 Research and academia

  • Empirical datasets for papers, thesis work, and coursework
  • Longitudinal studies tracking changes across snapshots
  • Reproducible research with cited, versioned data pulls
  • Classroom exercises on data analysis and ethical scraping

🎨 Personal and creative

  • Side projects, portfolio demos, and indie app launches
  • Data visualizations, dashboards, and infographics
  • Content research for bloggers, YouTubers, and podcasters
  • Hobbyist collections and personal trackers

๐Ÿค Non-profit and civic

  • Transparency reporting and accountability projects
  • Advocacy campaigns backed by public-interest data
  • Community-run databases for local issues
  • Investigative journalism on public records

🧪 Experimentation

  • Prototype AI and machine-learning pipelines with real data
  • Validate product-market hypotheses before engineering spend
  • Train small domain-specific models on niche corpora
  • Test dashboard concepts with live input


โ“ Frequently Asked Questions

💳 Do I need a paid Apify plan to run this actor?

No. You can start right now on the free Apify plan, which includes $5 in free monthly credit. That is enough to run this actor several times and explore the output before committing to anything. Paid plans unlock higher limits, more concurrent runs, and larger datasets. Create a free Apify account here to get started.

🚨 What happens if my run fails or returns no results?

Failed runs are not charged. If the source site changes, proxies get rate-limited, or a specific input matches nothing, re-run the actor or open our contact form and we will investigate. You can also check the run log in the Apify console to see why the run stopped.

๐Ÿ“ How many items can I scrape per run?

Free users are limited to 10 items per run so you can preview the output and confirm the actor works for your use case. Paid users can raise maxItems up to 1,000,000 per run. Upgrade here if you need full scale.

🕒 How fresh is the data?

Every run fetches live data at the moment of execution. There is no cache or delay: the records you get reflect what the source returned at that moment. Schedule the actor to maintain a rolling snapshot of the data you need.

๐Ÿง‘โ€๐Ÿ’ป Can I call this actor from my own code?

Yes. Apify exposes every actor as a REST endpoint and ships first-class SDKs for Node.js and Python. You can start a run, read the dataset, and handle webhooks from your own app in a few lines. All you need is your Apify API token.
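A minimal sketch with the official apify-client Python package (pip install apify-client). The actor ID below is a placeholder, not confirmed by this page; look up the real one on the Actor's Apify Store page:

```python
# Sketch: run the Actor from Python and fetch its dataset.
# "parseforge/epa-tri-scraper" is a placeholder actor ID (an assumption);
# replace it with the ID shown on the Actor's Store page.
import os

run_input = {"dataset": "facilities", "state": "CA", "maxItems": 100}

token = os.environ.get("APIFY_TOKEN")
if token:
    from apify_client import ApifyClient

    client = ApifyClient(token)
    # Start the run and wait for it to finish.
    run = client.actor("parseforge/epa-tri-scraper").call(run_input=run_input)
    # Read the results from the run's default dataset.
    items = client.dataset(run["defaultDatasetId"]).list_items().items
    print(f"Fetched {len(items)} records")
else:
    print("Set APIFY_TOKEN to run this example.")
```

The same flow works via raw REST calls or the Node.js client; webhooks can notify your app when the run finishes.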

📤 How do I export the data?

Every Apify dataset can be downloaded in one click from the console as CSV, JSON, JSONL, Excel, HTML, XML, or RSS. You can also pull results programmatically via the Apify API or stream them into BigQuery, S3, and other destinations through built-in integrations.

📅 Can I schedule the actor to run automatically?

Yes. Use the Apify scheduler to run the actor on any cadence, from hourly to monthly. Results are saved to your dataset and can be delivered to webhooks, email, Slack, cloud storage, or automation tools such as Zapier and Make.


🔌 Automating EPA TRI Scraper

Control the scraper programmatically for scheduled runs and pipeline integrations:

  • 🟢 Node.js. Install the apify-client NPM package.
  • 🐍 Python. Use the apify-client PyPI package.
  • 📚 See the Apify API documentation for full details.

The Apify Schedules feature lets you trigger this Actor on any cron interval. Schedule monthly runs to monitor changes in facility registrations and release reporting.

🔌 Integrate with any app

EPA TRI Scraper connects to any cloud service via Apify integrations:

  • Make - Automate multi-step workflows
  • Zapier - Connect with 5,000+ apps
  • Slack - Get run notifications
  • Airbyte - Pipe data into your warehouse
  • GitHub - Trigger runs from commits
  • Google Drive - Export datasets straight to Sheets

You can also use webhooks to trigger downstream actions when a run finishes.


💡 Pro Tip: browse the complete ParseForge collection for more data scrapers and tools.


🆘 Need Help? Open our contact form to request a new scraper, propose a custom data project, or report an issue.


โš ๏ธ Disclaimer: this Actor is an independent tool and is not affiliated with, endorsed by, or sponsored by the U.S. Environmental Protection Agency (EPA). All trademarks mentioned are the property of their respective owners. Only publicly available data is collected.