Insurance Risk Assessment
Pricing
$300.00 / 1,000 analysis runs
Developer: ryan clinton
Insurance risk assessment for any location — combining 8 public data sources and 4 scoring models into a single underwriting brief. Built for P&C underwriters, actuaries, and commercial real estate insurers who need structured, data-driven location risk intelligence on demand.
Enter any address, city, or region and receive a composite risk score (0-100), a risk tier (PREFERRED through DECLINE), a premium modifier multiplier, and actionable underwriting notes — all in one structured JSON output. The actor queries FEMA disaster declarations, USGS earthquakes, NOAA weather alerts, flood warnings, crime statistics, air quality measurements, and land registry records in parallel, then synthesizes the data through four weighted scoring models.
What data can you extract?
| Data Point | Source | Example |
|---|---|---|
| 📊 Composite risk score | All 8 data sources combined | 62 (0-100 scale) |
| 🏷️ Risk tier classification | Scoring engine | SUBSTANDARD |
| 💰 Premium modifier | Composite risk formula | 1.85x (85% surcharge) |
| 🌀 Composite peril score | FEMA + USGS + NOAA + Flood | 68/100, HIGH |
| 🌡️ Climate trajectory | FEMA disaster trend analysis | WORSENING, 25yr projection: 71 |
| 🔴 Crime exposure score | UK Police crime data | 42/100, MODERATE |
| ☁️ Environmental contamination | OpenAQ air quality | 28/100, ACCEPTABLE, 1 WHO exceedance |
| 🌊 Flood risk | UK Flood Warnings | 3 active warnings, severe |
| 🌍 Geocoded coordinates | Nominatim geocoder | { lat: 25.7617, lon: -80.1918 } |
| ⚠️ Risk signals | All scoring models | ["14 FEMA major disaster declarations", ...] |
| 📝 Underwriting notes | Scoring engine | ["Dominant peril: FEMA — consider sublimits"] |
| 🏠 Property transaction records | UK Land Registry | Up to 10 recent sales records |
Why use Insurance Risk Assessment?
Manual location risk research across FEMA, USGS, NOAA, and crime databases takes an experienced analyst 2-3 hours per location. Licensing structured risk data from commercial insurtech platforms costs $500-2,000 per month for API access. This actor performs the same multi-source data aggregation and delivers a scoring brief in under 4 minutes, for a fraction of that cost.
This actor automates the entire data collection and scoring pipeline — geocoding the target address, running all 8 data sources simultaneously, applying the four scoring models, and assembling a structured underwriting brief ready for human review or downstream systems.
- Scheduling — run daily or weekly to track how a location's risk profile changes over time, particularly for climate trajectory
- API access — trigger assessments from Python, JavaScript, or any HTTP client — integrate into your policy management system or underwriting portal
- Proxy rotation — all sub-actor calls use Apify's managed infrastructure, not your own IP or API keys
- Monitoring — configure Slack or email alerts when a run fails or produces a DECLINE-tier result
- Integrations — pipe results into Zapier, Make, Google Sheets, HubSpot, or any webhook endpoint
Features
- 8 data sources queried in parallel — FEMA disaster declarations, USGS earthquake database, NOAA weather alerts, UK Flood Warnings, UK Police crime data, OpenAQ air quality stations, UK Land Registry, and Nominatim geocoding, all running concurrently to minimize latency
- Composite peril model (35% weight) — scores FEMA major disaster history (3 pts each, max 30), magnitude-weighted seismic risk (M6.0+ = 10 pts, M5.0+ = 5 pts, M4.0+ = 2 pts, max 25), weather alert severity (EXTREME = 8 pts, SEVERE = 5 pts, max 25), and flood warning severity (max 20)
- Climate trajectory model (25% weight) — compares disaster frequency in the most recent decade against the prior decade to compute an acceleration ratio, then projects risk forward at 5-year, 10-year, and 25-year horizons using exponential scaling
- Crime exposure model (20% weight) — categorizes offenses into violent (violence, robbery, assault, weapons, sexual, homicide) at 4 pts each and property (burglary, theft, vehicle crime, arson) at 2 pts each, plus logarithmic volume scoring and anti-social behavior nuisance weighting
- Environmental contamination model (20% weight) — measures 6 pollutants (PM2.5, PM10, NO2, SO2, O3, CO) against WHO guideline thresholds (PM2.5: 15 µg/m³, PM10: 45 µg/m³, NO2: 25 µg/m³); each exceedance adds up to 10 points with ratio-weighted severity
- Four risk tiers — PREFERRED (0-24), STANDARD (25-49), SUBSTANDARD (50-74), DECLINE (75+), each mapped to a specific premium modifier range
- Premium modifier formula — linear scaling from 0.8x (preferred-risk discount) to 2.5x (high-risk surcharge) using the formula `0.8 + (compositeRiskScore / 100) * 1.7`
- Dominant peril identification — ranks the four peril categories by score to surface the primary driver for targeted exclusion or sublimit recommendations
- Actionable underwriting notes — auto-generated notes include senior underwriter referrals, exclusion/sublimit guidance, climate review interval warnings, security requirement recommendations, and environmental liability endorsement flags
- Graceful degradation — uses `Promise.allSettled` so a failed sub-actor does not abort the assessment; missing data sources contribute zero input to that model dimension
- Text-fallback geocoding — if geocoding returns no coordinates, the actor switches to text-query mode for all sub-actors so non-standard location strings still produce an assessment
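The tier boundaries and modifier formula above can be sketched directly. This is a minimal illustration using the cutoffs and the linear formula stated in this README; the function names are stand-ins, not the actor's internal API.

```javascript
// Tier cutoffs and the linear modifier formula as described in this README.
// Function names are illustrative, not the actor's internals.
function riskTier(score) {
  if (score <= 24) return "PREFERRED";
  if (score <= 49) return "STANDARD";
  if (score <= 74) return "SUBSTANDARD";
  return "DECLINE";
}

function premiumModifier(score) {
  // Linear scaling: 0.8x at score 0, 2.5x at score 100
  return Math.round((0.8 + (score / 100) * 1.7) * 100) / 100;
}

console.log(riskTier(62), premiumModifier(62)); // SUBSTANDARD 1.85
```

A score of 62, as in the output example below, lands in SUBSTANDARD and yields the 1.85x modifier shown there.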
Use cases for insurance risk assessment
P&C underwriting and pre-bind risk screening
Property and casualty underwriters can run this actor before binding any commercial or residential policy in an unfamiliar location. The composite risk score and premium modifier provide a data-driven starting point for rate negotiation, and the underwriting notes flag specific actions — exclusions, sublimits, security requirements — without requiring a senior analyst to spend hours on FEMA portals.
Actuarial portfolio modeling and climate scenario analysis
Actuaries building long-term reserve models need forward-looking risk data, not just current-year loss ratios. The 5-year, 10-year, and 25-year climate trajectory projections — derived from FEMA disaster acceleration ratios — feed directly into catastrophe models. Run the actor across an entire policy portfolio to generate location-level risk scores for aggregation and concentration analysis.
Commercial real estate insurance and multi-location portfolio assessment
Commercial real estate insurers evaluating a portfolio of properties across different regions can run one assessment per address and compare composite risk scores, dominant perils, and premium modifiers across all locations. Identifying which properties in a portfolio fall into SUBSTANDARD or DECLINE tier helps prioritize risk engineering visits or renegotiate terms.
Reinsurance treaty analysis and cedant exposure review
Reinsurance analysts assessing a cedant's geographic concentration risk can run assessments across the cedant's top exposure locations. The climate trajectory trend direction (STABLE, WORSENING, RAPIDLY_WORSENING) and the acceleration ratio identify which zones carry increasing long-tail risk that may not be reflected in historical loss data.
Claims investigation and post-event exposure context
Claims adjusters reviewing large losses can use this actor to understand the full risk profile of a loss location: FEMA disaster history, earthquake exposure, flood warnings in effect at time of loss, and crime context. This provides documented support for coverage decisions and reserve estimates.
InsurTech product development and automated underwriting engines
Development teams building automated underwriting APIs can call this actor as a scoring microservice, passing the structured JSON output into their own decision rules engine. The machine-readable risk tier and premium modifier slot directly into policy pricing workflows without manual intervention.
How to run an insurance risk assessment
- Enter the location — Type any address, city, or region into the Location field (e.g., "Miami, FL", "Tower Bridge, London", "Houston, TX 77002"). The actor geocodes it automatically before querying data sources.
- Add optional context — Optionally specify Property Type (e.g., "commercial", "residential", "industrial") and Coverage Type (e.g., "property", "liability", "comprehensive") to include those labels in the output for your records.
- Click Start and wait — The actor queries 8 data sources in parallel and runs 4 scoring models. Most assessments complete in 2-4 minutes depending on data source response times.
- Download the underwriting brief — Open the Dataset tab and download your result as JSON, CSV, or Excel. Each row is a complete assessment with all scores, signals, and notes included.
Input parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `location` | string | Yes | — | Address, city, or region to assess (e.g., "Miami, FL", "London EC1A 1BB") |
| `propertyType` | string | No | null | Property type for context labeling: residential, commercial, industrial |
| `coverageType` | string | No | null | Coverage type for context labeling: property, liability, comprehensive |
Input examples
Standard single-location assessment:
{"location": "Miami, FL","propertyType": "commercial","coverageType": "property"}
Residential policy pre-bind screening:
{"location": "1234 Oak Street, Houston, TX 77002","propertyType": "residential","coverageType": "comprehensive"}
Minimal input — location only:
{"location": "San Francisco, CA"}
Input tips
- Use specific addresses for best results — a full street address geocodes more precisely than a city name alone, resulting in more accurate radius-based queries for air quality and crime data
- City names work fine for regional screening — when evaluating a city or region broadly, just the city name is sufficient; the actor will use text-based queries where geocoding returns no coordinates
- Run one assessment per property — the actor produces one output record per run; for portfolio analysis, use the API to trigger parallel runs for each address
- Include property and coverage type for downstream use — these fields pass through to the output, making it easier to filter and sort results in your CRM or spreadsheet
Output example
{"location": "Miami, FL","coordinates": { "lat": 25.7617, "lon": -80.1918 },"propertyType": "commercial","coverageType": "property","assessmentDate": "2026-03-20T14:30:00.000Z","compositeRiskScore": 62,"riskTier": "SUBSTANDARD","premiumModifier": 1.85,"compositePeril": {"score": 74,"disasterCount": 18,"earthquakeRisk": 3.5,"weatherAlerts": 5,"floodRisk": 3,"perilLevel": "HIGH","dominantPeril": "Natural Disaster (FEMA)","signals": ["18 FEMA major disaster declarations in area","5 active weather alerts, including severe/extreme"]},"environmentalContamination": {"score": 28,"airQualityIndex": 14,"pollutantCount": 3,"exceedances": 1,"contaminationLevel": "ACCEPTABLE","pollutants": [{ "parameter": "pm25", "value": 11.4, "unit": "µg/m³" },{ "parameter": "pm10", "value": 38.2, "unit": "µg/m³" },{ "parameter": "no2", "value": 27.8, "unit": "µg/m³" }],"signals": []},"crimeExposure": {"score": 44,"totalCrimes": 47,"violentCrimes": 9,"propertyCrimes": 16,"crimeRate": 47,"exposureLevel": "MODERATE","topCategories": [{ "category": "theft", "count": 12 },{ "category": "violence and sexual offences", "count": 9 },{ "category": "burglary", "count": 7 },{ "category": "anti-social-behaviour", "count": 6 }],"signals": ["16 property crimes — high theft/damage exposure"]},"climateTrajectory": {"score": 57,"currentPerilScore": 27,"projectedRisk5yr": 60,"projectedRisk10yr": 64,"projectedRisk25yr": 73,"trendDirection": "WORSENING","climateFactors": ["Accelerating disaster frequency","Extreme weather patterns"],"signals": ["Disaster frequency accelerating — 50%+ increase over prior decade","3 extreme weather events — elevated climate trajectory"]},"allSignals": ["18 FEMA major disaster declarations in area","5 active weather alerts, including severe/extreme","16 property crimes — high theft/damage exposure","Disaster frequency accelerating — 50%+ increase over prior decade","3 extreme weather events — elevated climate trajectory"],"underwritingNotes": ["Dominant peril: Natural Disaster 
(FEMA) — consider exclusions or sublimits","Climate trajectory worsening — review at shorter intervals"],"propertyData": [{"price": 485000,"date": "2025-11-14","postcode": "FL 33101","propertyType": "Detached","oldOrNew": "E","duration": "Freehold"}],"dataSources": {"fema-disaster-search": 18,"usgs-earthquake-search": 4,"noaa-weather-alerts": 5,"uk-flood-warnings": 3,"uk-police-crime-data": 47,"openaq-air-quality": 12,"uk-land-registry": 8}}
Output fields
| Field | Type | Description |
|---|---|---|
| `location` | string | The input location string, passed through unchanged |
| `coordinates` | object \| null | Geocoded `{ lat, lon }` from Nominatim; null if geocoding failed |
| `propertyType` | string \| null | Input property type label, passed through |
| `coverageType` | string \| null | Input coverage type label, passed through |
| `assessmentDate` | string | ISO 8601 timestamp when the assessment ran |
| `compositeRiskScore` | number | Weighted composite risk score, 0-100 |
| `riskTier` | string | PREFERRED, STANDARD, SUBSTANDARD, or DECLINE |
| `premiumModifier` | number | Premium multiplier: 0.80 (discount) to 2.50 (surcharge) |
| `compositePeril.score` | number | Peril sub-score, 0-100 |
| `compositePeril.disasterCount` | number | Total FEMA disaster records found |
| `compositePeril.earthquakeRisk` | number | Magnitude-weighted seismic exposure score |
| `compositePeril.weatherAlerts` | number | Count of active NOAA weather alerts |
| `compositePeril.floodRisk` | number | Count of flood warning records |
| `compositePeril.perilLevel` | string | MINIMAL, LOW, MODERATE, HIGH, or SEVERE |
| `compositePeril.dominantPeril` | string | Highest-scoring peril category name |
| `compositePeril.signals` | array | Human-readable trigger strings for notable findings |
| `environmentalContamination.score` | number | Environmental sub-score, 0-100 |
| `environmentalContamination.airQualityIndex` | number | Average AQI-weighted score across all measurements |
| `environmentalContamination.pollutantCount` | number | Number of distinct pollutant types detected |
| `environmentalContamination.exceedances` | number | Number of pollutants exceeding WHO guidelines |
| `environmentalContamination.contaminationLevel` | string | CLEAN, ACCEPTABLE, ELEVATED, or HAZARDOUS |
| `environmentalContamination.pollutants` | array | Up to 10 `{ parameter, value, unit }` measurement records |
| `crimeExposure.score` | number | Crime sub-score, 0-100 |
| `crimeExposure.totalCrimes` | number | Total crime records retrieved |
| `crimeExposure.violentCrimes` | number | Count of violent offense records |
| `crimeExposure.propertyCrimes` | number | Count of property offense records |
| `crimeExposure.exposureLevel` | string | LOW, MODERATE, HIGH, or EXTREME |
| `crimeExposure.topCategories` | array | Up to 8 `{ category, count }` objects ranked by frequency |
| `climateTrajectory.score` | number | Climate trajectory sub-score, 0-100 |
| `climateTrajectory.currentPerilScore` | number | Current weather and flood peril contribution |
| `climateTrajectory.projectedRisk5yr` | number | Projected risk score in 5 years |
| `climateTrajectory.projectedRisk10yr` | number | Projected risk score in 10 years |
| `climateTrajectory.projectedRisk25yr` | number | Projected risk score in 25 years |
| `climateTrajectory.trendDirection` | string | IMPROVING, STABLE, WORSENING, or RAPIDLY_WORSENING |
| `climateTrajectory.climateFactors` | array | Named climate risk factors identified |
| `allSignals` | array | Merged list of all risk signal strings from all models |
| `underwritingNotes` | array | Actionable guidance strings for underwriters |
| `propertyData` | array | Up to 10 land registry property transaction records |
| `dataSources` | object | Record counts per data source (e.g., `{ "fema-disaster-search": 14 }`) |
How much does it cost to run an insurance risk assessment?
Insurance Risk Assessment uses pay-per-result pricing — each run costs approximately $0.25-$0.60 in Apify platform credits, depending on how many sub-actors return data. Compute costs for the orchestration layer are included.
| Scenario | Assessments | Cost per assessment | Total cost |
|---|---|---|---|
| Quick test | 1 | ~$0.40 | ~$0.40 |
| Small batch | 5 | ~$0.40 | ~$2.00 |
| Medium batch | 20 | ~$0.40 | ~$8.00 |
| Large batch | 100 | ~$0.40 | ~$40.00 |
| Enterprise portfolio | 500 | ~$0.40 | ~$200.00 |
You can set a maximum spending limit per run to control costs. The actor stops when your budget is reached.
Compare this to commercial insurtech data platforms like Verisk, LexisNexis Risk Solutions, or Cape Analytics at $500-2,000 per month for API access — with this actor, most teams spend $5-40 per month with no subscription commitment. Apify's free tier also includes $5 of monthly credits, covering approximately 8-12 assessments per month at no cost.
Insurance risk assessment using the API
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("ryanclinton/insurance-risk-assessment").call(run_input={
    "location": "Miami, FL",
    "propertyType": "commercial",
    "coverageType": "property"
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"Location: {item['location']}")
    print(f"Risk Score: {item['compositeRiskScore']}/100 — Tier: {item['riskTier']}")
    print(f"Premium Modifier: {item['premiumModifier']}x")
    print(f"Dominant Peril: {item['compositePeril']['dominantPeril']}")
    print(f"Underwriting Notes: {item['underwritingNotes']}")
```
JavaScript
```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_API_TOKEN" });

const run = await client.actor("ryanclinton/insurance-risk-assessment").call({
  location: "Miami, FL",
  propertyType: "commercial",
  coverageType: "property"
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
for (const item of items) {
  console.log(`Risk Score: ${item.compositeRiskScore}/100 (${item.riskTier})`);
  console.log(`Premium Modifier: ${item.premiumModifier}x`);
  console.log(`Climate Trend: ${item.climateTrajectory.trendDirection}`);
  console.log(`Signals: ${item.allSignals.join("; ")}`);
}
```
cURL
```shell
# Start the actor run
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~insurance-risk-assessment/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"location": "Miami, FL", "propertyType": "commercial", "coverageType": "property"}'

# Fetch results (replace DATASET_ID from the run response above)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN&format=json"
```
How Insurance Risk Assessment works
Phase 1: Geocoding and input preparation
The actor calls the Nominatim geocoder via the ryanclinton/nominatim-geocoder sub-actor, which converts the input location string into latitude/longitude coordinates. If coordinates are returned, all subsequent sub-actor calls include both the coordinates and the original text query, enabling radius-based lookups (OpenAQ uses a 25km radius, crime data uses the lat/lon directly). If geocoding returns no result, the actor falls back to text-query mode for all seven remaining sources, ensuring an assessment is produced even for non-standard location strings.
Phase 2: Parallel data collection from 7 sources
Using `Promise.allSettled`, the actor calls all seven data sources simultaneously rather than sequentially — cutting total data collection time from 7-14 minutes to 1-3 minutes. The sources and their roles:
- FEMA Disaster Search — retrieves major disaster declarations by location for historical peril scoring and disaster-frequency trend analysis
- USGS Earthquake Search — returns earthquakes with magnitude 3.0+ near the target coordinates, with each event's magnitude used for weighted seismic scoring
- NOAA Weather Alerts — returns active weather alerts with severity levels (EXTREME, SEVERE, MODERATE) for real-time hazard exposure
- UK Flood Warnings — retrieves active flood warnings with severity levels for flood peril scoring
- UK Police Crime Data — returns crime records categorized into violent, property, and anti-social offense types
- OpenAQ Air Quality — retrieves sensor readings within a 25km radius, measuring PM2.5, PM10, NO2, SO2, O3, and CO against WHO guideline thresholds
- UK Land Registry — retrieves recent property transaction records for contextual property market data
`Promise.allSettled` (not `Promise.all`) ensures that a single failed or slow sub-actor does not abort the entire assessment — failed sources return empty arrays, and the scoring models receive zero contribution from those dimensions.
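The pattern can be sketched in a few lines. The async functions below are stand-ins for the real sub-actor calls; this is an illustration of the fault-tolerance behavior, not the actor's actual code.

```javascript
// Minimal sketch of the fault-tolerant parallel collection pattern
// described above. Each fetcher stands in for a sub-actor call.
async function collectAll(fetchers) {
  const settled = await Promise.allSettled(fetchers.map((fetch) => fetch()));
  // A rejected source contributes an empty array instead of aborting the run
  return settled.map((r) => (r.status === "fulfilled" ? r.value : []));
}

// One source fails; the other two still return their records
collectAll([
  async () => [{ id: "DR-4673" }],
  async () => { throw new Error("upstream timeout"); },
  async () => [{ alert: "Flood Warning" }],
]).then((records) => console.log(records.map((r) => r.length))); // [ 1, 0, 1 ]
```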
Phase 3: Four-model scoring and composite synthesis
The `generateUnderwritingBrief` function runs all four scoring models against the collected data and synthesizes a composite score using fixed weights: Composite Peril (35%), Climate Trajectory (25%), Crime Exposure (20%), and Environmental Contamination (20%). Each model returns a 0-100 score and a set of human-readable signal strings for notable findings.
The climate trajectory model is the most computationally distinctive: it groups FEMA disaster records by year, calculates the average declaration rate for the most recent decade versus the prior decade, and derives an acceleration ratio. Ratios above 1.5 classify the trend as RAPIDLY_WORSENING. The ratio is applied as an exponential multiplier at three time horizons: `score * accelerationRatio^0.25` (5yr), `score * accelerationRatio^0.5` (10yr), and `score * accelerationRatio^1.0` (25yr).
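A sketch of that projection logic follows. The 1.5 RAPIDLY_WORSENING cutoff and the fractional exponents come from the description above; the 1.1 WORSENING threshold and the score cap at 100 are illustrative assumptions, and the function name is not the actor's.

```javascript
// Sketch of the acceleration ratio and horizon projections described above.
// The 1.5 cutoff and 0.25/0.5/1.0 exponents come from this README; the 1.1
// WORSENING threshold and the cap at 100 are illustrative assumptions.
function projectClimate(score, recentDecadeAvg, priorDecadeAvg) {
  const ratio = priorDecadeAvg > 0 ? recentDecadeAvg / priorDecadeAvg : 1;
  const project = (exp) => Math.min(100, Math.round(score * Math.pow(ratio, exp)));
  return {
    accelerationRatio: ratio,
    trendDirection: ratio > 1.5 ? "RAPIDLY_WORSENING" : ratio > 1.1 ? "WORSENING" : "STABLE",
    projectedRisk5yr: project(0.25),
    projectedRisk10yr: project(0.5),
    projectedRisk25yr: project(1.0),
  };
}

// Disaster rate doubled decade-over-decade, current score 50:
console.log(projectClimate(50, 3.0, 1.5));
```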
Phase 4: Underwriting brief assembly and output
The final output assembles the composite score, risk tier, premium modifier, all four sub-model results, merged signals list, and auto-generated underwriting notes into a single JSON document pushed to the Apify dataset. Underwriting notes are conditionally generated: DECLINE-tier results receive a senior underwriter referral flag, high peril scores trigger exclusion/sublimit guidance, RAPIDLY_WORSENING climate trends trigger a review-interval warning, high violent crime counts trigger a security requirements flag, and multiple pollution exceedances trigger an environmental liability endorsement recommendation.
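The conditional note generation can be sketched as below. The trigger conditions mirror the description above, but the exact numeric thresholds (peril score of 70, violent-crime count of 10, two exceedances) are illustrative assumptions, and `buildNotes` is not the actor's real function name.

```javascript
// Sketch of the conditional underwriting-note generation described above.
// Numeric thresholds are illustrative assumptions, not the actor's values.
function buildNotes(brief) {
  const notes = [];
  if (brief.riskTier === "DECLINE") {
    notes.push("REFER TO SENIOR UNDERWRITER");
  }
  if (brief.compositePeril.score >= 70) {
    notes.push(`Dominant peril: ${brief.compositePeril.dominantPeril} — consider exclusions or sublimits`);
  }
  if (brief.climateTrajectory.trendDirection === "RAPIDLY_WORSENING") {
    notes.push("Climate trajectory worsening — review at shorter intervals");
  }
  if (brief.crimeExposure.violentCrimes >= 10) {
    notes.push("High violent crime exposure — recommend security requirements");
  }
  if (brief.environmentalContamination.exceedances >= 2) {
    notes.push("Multiple WHO exceedances — consider environmental liability endorsement");
  }
  return notes;
}
```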
Tips for best results
- Use full street addresses for property-level assessments. A specific address like "350 Fifth Avenue, New York, NY 10118" geocodes precisely and produces tighter radius queries for air quality and crime data than a city name alone.
- Run city-level assessments for regional screening first. Before committing to property-level analysis, screen the city (e.g., "Houston, TX") to see if the climate trajectory and composite peril score warrant deeper investigation. This costs the same and takes the same time.
- Treat the premium modifier as a starting adjustment, not a final rate. The modifier is calibrated on publicly available data. It does not account for property construction class, occupancy, loss history, or coverage limits — all of which belong in your full underwriting file.
- For UK locations, expect richer crime and flood data. The UK Police API and UK Flood Warnings return more granular data for UK addresses than for US or international locations, where crime data falls back to empty results. Plan data source coverage accordingly when comparing UK vs. non-UK portfolio locations.
- Use the API for portfolio-scale analysis. Running 50 assessments through the API in parallel costs the same per-assessment as running them individually, but finishes in a fraction of the time. Use the Python or JavaScript examples above and run assessments concurrently across your property list.
- Check `dataSources` in the output to validate coverage. The `dataSources` field shows exactly how many records each sub-actor returned. A zero for a major source (e.g., `"fema-disaster-search": 0`) means that location may not have enough data for a reliable peril score — flag those assessments for manual review.
- Schedule monthly re-assessments for active policies. Climate trajectory and crime exposure can shift meaningfully year-over-year. Use Apify's scheduler to re-run assessments on renewal dates and detect deteriorating risk profiles before they generate claims.
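The coverage check above is easy to automate. This sketch uses the `dataSources` field names from the output example; the expected-source list shown here assumes a US location and is illustrative.

```javascript
// Sketch of the dataSources coverage check suggested in the tips above.
// Field names come from this README's output example; the expected list
// is an illustrative assumption for a US location.
function missingSources(dataSources, expected) {
  return expected.filter((name) => !dataSources[name]);
}

const usExpected = ["fema-disaster-search", "noaa-weather-alerts", "usgs-earthquake-search"];
const result = { "fema-disaster-search": 0, "noaa-weather-alerts": 5, "usgs-earthquake-search": 4 };
console.log(missingSources(result, usExpected)); // [ 'fema-disaster-search' ]
```

Any assessment where this list is non-empty is a candidate for manual review before the score is trusted.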
Combine with other Apify actors
| Actor | How to combine |
|---|---|
| FEMA Disaster Search | Pull the full historical disaster record for a region to complement the summary counts in the insurance brief |
| NOAA Weather Alerts | Monitor real-time weather alerts independently during catastrophe events to track aggregate portfolio exposure |
| OpenAQ Air Quality | Run deeper air quality analysis for environmental liability underwriting beyond the summary in the risk brief |
| Company Deep Research | Pair with a company research report to combine location risk assessment with business risk assessment for commercial lines |
| Website Change Monitor | Monitor changes to state insurance commission pages or FEMA flood map service center for regulatory updates |
| B2B Lead Qualifier | Score potential commercial insurance clients by combining their location risk tier with firmographic lead qualification |
| HubSpot Lead Pusher | Push risk assessment results as CRM records for underwriting pipeline management |
Limitations
- US and UK data sources only for most dimensions. FEMA and NOAA data are US-specific. UK Police, UK Flood Warnings, and UK Land Registry are UK-specific. Assessments for locations outside the US and UK will have reduced data coverage, which the `dataSources` field will reflect as zero record counts.
- USGS earthquake and OpenAQ air quality are global. These two sources work for any location with sensors or seismic monitoring — coverage varies by region.
- No real-time claims data. The actor uses public hazard and environmental data, not loss history, claims frequency, or actuarial loss ratios. It supplements but does not replace underwriting file review.
- Crime data is UK Police only. US crime statistics are not included. US addresses will return zero crime records, resulting in a crime exposure score of 0 — which reflects missing data, not low crime.
- Property data is UK Land Registry only. Property transaction data is only available for UK locations. Non-UK addresses return empty property data arrays.
- Geocoding accuracy varies. Unusual location strings or non-Latin place names may fail to geocode, falling back to text queries. Check for `"coordinates": null` in the output to detect this.
- Not a licensed actuarial product. The premium modifier is a data-driven indicator, not a certified actuarial rate. It should inform underwriting judgment, not replace it.
- Sub-actor availability. If an upstream public data source is temporarily down, that data source will return zero records. Run the actor again or check the `dataSources` field to identify gaps.
Integrations
- Zapier — trigger a risk assessment automatically when a new property address is added to a spreadsheet or CRM record, then route DECLINE-tier results to a Slack channel for immediate review
- Make — build multi-step workflows that run assessments on new policy applications and push results to your underwriting management system
- Google Sheets — export assessment results to a shared spreadsheet for underwriting team review and comparative portfolio analysis
- Apify API — integrate location risk scoring directly into your policy management or rating system via HTTP calls from any language
- Webhooks — receive a POST request with the full assessment JSON the moment a run completes, enabling real-time risk decisioning in your own application
- LangChain / LlamaIndex — feed assessment output into an LLM pipeline for natural-language underwriting memo generation or risk narrative summarization
Troubleshooting
- Assessment returns zero for crime and property data despite a US address — The crime exposure and property data models use UK-specific data sources (UK Police and UK Land Registry). US addresses will return zeros for these dimensions. The composite score will reflect only peril, climate, and environmental contributions. This is expected behavior, not an error — check the `dataSources` field to confirm.
- Composite risk score seems low despite a known high-risk location — If the `dataSources` field shows zeros for several sources (especially FEMA and NOAA for US locations), the geocoding step may have failed to produce coordinates. Try a more specific location string — include a full postal code or use the city-and-state format ("New Orleans, LA") rather than informal place names.
- Run timing out or taking more than 5 minutes — The actor calls 8 sub-actors in parallel. If multiple sub-actors experience slow response times from upstream APIs, total run time can extend to 5-8 minutes. Consider increasing the actor timeout in the settings tab if this occurs frequently.
- Climate trajectory shows STABLE despite historically active disaster area — The acceleration ratio compares the past 10 years against the prior 10 years. If FEMA disaster records are sparse (fewer than 5-10 total), the decade comparison may not have enough data to detect acceleration. The `dataSources.fema-disaster-search` count will indicate record availability.
- Premium modifier appears at maximum (2.50x) — A composite risk score of 100 produces a premium modifier of 2.50x via the formula `0.8 + 1.0 * 1.7`. This typically indicates a location with extensive FEMA history, active flood warnings, high crime exposure, and poor air quality all simultaneously. Review each sub-model score individually and check the underwriting notes for specific action items.
Responsible use
- This actor only accesses publicly available data from government and open-data sources (FEMA, USGS, NOAA, UK Police, UK Environment Agency, OpenAQ, UK Land Registry).
- Risk assessments are based on aggregate historical and environmental data — they do not involve personal data about individuals at a location.
- Comply with fair lending, fair housing, and anti-discrimination laws when using risk assessments in coverage or pricing decisions.
- Do not use location risk scores as a proxy for demographic characteristics or as a basis for unlawful coverage discrimination.
- For guidance on web scraping and data use legality, see Apify's guide.
FAQ
How is the insurance risk composite score calculated?
The composite score (0-100) is a weighted average of four sub-model scores: Composite Peril (35%), Climate Trajectory (25%), Crime Exposure (20%), and Environmental Contamination (20%). Each sub-model independently scores its data dimension on a 0-100 scale, and the weighted sum becomes the composite risk score. A score of 62, for example, means the location has above-average risk across multiple dimensions — scoring somewhere between STANDARD and SUBSTANDARD territory.
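The weighting can be sketched as a single weighted sum. This is a minimal illustration using the fixed weights stated above; the actor's exact rounding and any internal normalization may differ.

```javascript
// Sketch of the fixed-weight composite described above. Weights come from
// this README; the actor's exact rounding may differ.
function compositeScore(sub) {
  return Math.round(
    0.35 * sub.compositePeril +
    0.25 * sub.climateTrajectory +
    0.20 * sub.crimeExposure +
    0.20 * sub.environmentalContamination
  );
}

console.log(compositeScore({
  compositePeril: 80,
  climateTrajectory: 60,
  crimeExposure: 40,
  environmentalContamination: 20,
})); // 55
```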
What does the insurance risk tier mean for underwriting decisions?
PREFERRED (0-24) indicates minimal multi-peril risk and typically supports a premium discount of up to 20%. STANDARD (25-49) is average risk requiring no special treatment. SUBSTANDARD (50-74) indicates elevated risk and triggers surcharges of 70-110% above standard rates. DECLINE (75+) means the composite risk exceeds standard underwriting thresholds — the actor generates a "REFER TO SENIOR UNDERWRITER" note and a premium modifier above 2.1x.
How accurate is the insurance risk assessment?
Accuracy depends on data coverage for the assessed location. US locations benefit from comprehensive FEMA and NOAA datasets but receive no crime data (UK-only). UK locations receive crime, flood, and property data but have no FEMA or NOAA coverage. Global locations receive seismic (USGS) and air quality (OpenAQ) data only. Check the `dataSources` field in every output to understand which dimensions contributed to the score.
How do the 5-year, 10-year, and 25-year climate projections work?
The actor groups FEMA disaster declarations by calendar year, calculates the average annual frequency for the most recent 10-year period versus the prior 10-year period, and derives an acceleration ratio. The current climate trajectory score is then multiplied by this ratio raised to a fractional exponent that grows with the horizon: currentScore * ratio^0.25 (5yr), currentScore * ratio^0.5 (10yr), and currentScore * ratio^1.0 (25yr). A ratio of 2.0 means disaster frequency doubled in the last decade — classified as RAPIDLY_WORSENING.
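The projection formula can be sketched in a few lines. The exponent table and the 100-point cap are taken from the description above; clamping at 100 is an assumption, since scores are stated to live on a 0-100 scale:

```python
# Project a climate trajectory score forward using the acceleration ratio.
def project_score(current_score: float, ratio: float, years: int) -> float:
    # Exponent grows with the projection horizon.
    exponent = {5: 0.25, 10: 0.5, 25: 1.0}[years]
    # Assumed cap: scores are defined on a 0-100 scale.
    return min(100.0, current_score * ratio ** exponent)

# A ratio of 2.0 (frequency doubled in a decade) applied to a score of 50:
five_year = project_score(50, 2.0, 5)
twenty_five_year = project_score(50, 2.0, 25)
```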
Does insurance risk assessment work for locations outside the US and UK?
Partially. USGS earthquake data and OpenAQ air quality data work globally wherever sensors and seismic monitoring exist. FEMA and NOAA are US-specific. UK Police, UK Flood Warnings, and UK Land Registry are UK-specific. For non-US, non-UK locations, expect a composite score driven primarily by seismic and air quality data, with peril, crime, and property dimensions at zero. The dataSources field will show which sources returned data.
How long does a typical insurance risk assessment take?
Most runs complete in 2-4 minutes. The actor queries all 8 data sources in parallel, so total time is determined by the slowest responding source rather than the sum of all response times. FEMA searches on well-documented disaster areas can take 60-90 seconds. If a run takes more than 6 minutes, check whether a specific sub-actor is timing out in the actor log.
Can I run insurance risk assessments in bulk for a property portfolio?
Yes. Use the Apify API to trigger runs programmatically for each address in your portfolio. Run them concurrently from your own application — each actor run is independent. For 100 properties, running 10 concurrent API calls completes the full portfolio assessment in approximately 5-10 minutes. See the Python code example above for the pattern.
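A minimal sketch of the fan-out pattern using a thread pool. Here `assess()` is a placeholder for the actual Apify API call (for example via the apify-client package); the function names and worker count are illustrative:

```python
# Run portfolio assessments concurrently with a bounded thread pool.
from concurrent.futures import ThreadPoolExecutor

def assess(address: str) -> dict:
    # Placeholder: in practice, trigger an actor run for this address
    # via the Apify API and return its dataset output.
    return {"address": address}

def assess_portfolio(addresses: list, max_workers: int = 10) -> list:
    # Each actor run is independent, so they can safely run in parallel.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(assess, addresses))

results = assess_portfolio(["10 Downing St, London", "1600 Pennsylvania Ave NW"])
```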
How is this different from commercial insurtech data providers like Verisk or Cape Analytics?
Commercial platforms like Verisk, LexisNexis Risk Solutions, or Cape Analytics offer proprietary models built on licensed datasets including satellite imagery, claims history, and catastrophe model outputs. This actor uses only public open-data sources and costs a fraction of the price — roughly $0.30 per assessment versus $500-2,000 per month for API subscriptions. It is best suited to pre-screening, portfolio triage, and teams that cannot justify enterprise insurtech licensing costs.
What underwriting notes does the actor generate?
Underwriting notes are conditionally generated based on score thresholds: (1) "REFER TO SENIOR UNDERWRITER" for DECLINE-tier results (composite score 75+), (2) dominant peril exclusion/sublimit guidance when peril score exceeds 60, (3) climate review interval warning for RAPIDLY_WORSENING trajectory, (4) security requirements recommendation when violent crime count exceeds 10, and (5) environmental liability endorsement recommendation when 3 or more pollutants exceed WHO guidelines.
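The five conditions above amount to a threshold-driven rule list. This sketch uses illustrative field names and paraphrased note text; only the thresholds and the "REFER TO SENIOR UNDERWRITER" wording come from the description above:

```python
# Generate underwriting notes from score thresholds (field names assumed).
def underwriting_notes(result: dict) -> list:
    notes = []
    if result["composite_score"] >= 75:                 # DECLINE tier
        notes.append("REFER TO SENIOR UNDERWRITER")
    if result["peril_score"] > 60:
        notes.append("Apply exclusion/sublimit for dominant peril")
    if result["climate_trajectory"] == "RAPIDLY_WORSENING":
        notes.append("Shorten climate review interval")
    if result["violent_crime_count"] > 10:
        notes.append("Recommend security requirements")
    if result["who_exceedances"] >= 3:
        notes.append("Recommend environmental liability endorsement")
    return notes
```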
Is it legal to use public government data for insurance underwriting?
Yes. This actor accesses openly published government data from FEMA, USGS, NOAA, UK Police, UK Environment Agency, and OpenAQ — all of which are intended for public use. The data is used to assess location hazard characteristics, not to profile individuals. Ensure your use of location risk assessments complies with applicable fair lending, fair housing, and anti-discrimination regulations in your jurisdiction. For general guidance on web data use, see Apify's guide.
Can I schedule this actor to monitor location risk over time?
Yes. Apify's built-in scheduler lets you run the actor on a fixed interval — daily, weekly, or monthly. For policy renewal workflows, schedule a re-assessment 30-60 days before each renewal date and compare the composite risk score and climate trajectory to the prior-year result to detect material changes in risk profile.
What happens if one of the data sources is unavailable during a run?
The actor uses Promise.allSettled to call all 8 data sources in parallel, which means a failed or unavailable source does not abort the overall assessment. The failed source returns an empty array, contributing zero to that scoring model dimension. The dataSources field in the output will show a count of zero for the unavailable source, indicating which dimension has missing data.
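The same failure-tolerant pattern can be expressed in Python with asyncio.gather(return_exceptions=True), which is the closest analog to Promise.allSettled. The source names and fetch logic here are stand-ins for illustration:

```python
# Python analog of Promise.allSettled: a failed source yields an empty
# list instead of aborting the whole run.
import asyncio

async def fetch(source: str) -> list:
    # Stand-in for a real data-source query; "broken" simulates an outage.
    if source == "broken":
        raise RuntimeError("source unavailable")
    return [f"{source}-record"]

async def fetch_all(sources: list) -> dict:
    results = await asyncio.gather(*(fetch(s) for s in sources),
                                   return_exceptions=True)
    # Map failures to empty arrays so they contribute zero to scoring.
    return {s: ([] if isinstance(r, BaseException) else r)
            for s, r in zip(sources, results)}

data = asyncio.run(fetch_all(["fema", "broken", "usgs"]))
```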
Help us improve
If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:
- Go to Account Settings > Privacy
- Enable Share runs with public Actor creators
This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the actor developer, not publicly.
Support
Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom solutions or enterprise integrations, reach out through the Apify platform.