# University Research Report (`ryanclinton/university-research-report`) Actor

Generate a comprehensive university research intelligence report by querying 8 academic data sources in parallel.

- **URL**: https://apify.com/ryanclinton/university-research-report.md
- **Developed by:** [ryan clinton](https://apify.com/ryanclinton) (community)
- **Categories:** AI, Developer tools
- **Stats:** 2 total users, 1 monthly user, 0.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

$300.00 / 1,000 analysis runs

This Actor is paid per event. You are not charged for Apify platform usage; instead you pay a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
```

```powershell
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## University Research Report

University Research Report queries 8 academic databases in parallel to produce a structured intelligence brief on any university, research institution, or lab group. Corporate development teams, technology licensing managers, and venture capital analysts use it to evaluate research commercialization potential, identify breakthrough technology areas, and benchmark lab strength — without manual database trawling.

Each run calls OpenAlex (publications and research entities), arXiv, USPTO, EPO, NIH Reporter, Grants.gov, and ORCID simultaneously, then runs four scoring models — commercialization readiness, research hotspot detection, lab intelligence profiling, and technology maturity assessment — before assembling a composite score with an actionable engagement verdict. No code required. Results download as JSON, CSV, or Excel in under 90 seconds.

### What data can you extract?

| Data Point | Source | Example |
|---|---|---|
| 📊 **Composite score** | All 8 sources | `72` (0–100 scale) |
| 🏛️ **Engagement verdict** | Scoring engine | `PARTNER` |
| 🔬 **Commercialization readiness level** | USPTO + EPO + NIH + Grants.gov | `NEAR_MARKET` |
| 📈 **Publication-to-patent conversion ratio** | OpenAlex + USPTO | `0.33` (33%) |
| 🚀 **Research hotspot level** | arXiv + OpenAlex + ORCID | `HOT` |
| 🏆 **Lab strength classification** | ORCID + NIH + USPTO | `WORLD_CLASS` |
| ⚙️ **TRL estimate (1–9)** | Patents + publications + grants | `5` |
| 💰 **Total grant funding** | NIH Reporter + Grants.gov | `$18.2M` |
| 👩‍🔬 **Principal investigator count** | ORCID | `7` |
| 📄 **Top 10 publications** | OpenAlex | Title, journal, citation count |
| 🗂️ **Top 10 patents** | USPTO + EPO | Title, status, filing date |
| 📋 **Top 10 grants** | NIH Reporter + Grants.gov | Title, agency, award amount |
| 🔭 **Top 10 preprints** | arXiv | Title, published date |
| 🔍 **All actionable signals** | All sources | `"8 preprints in 2025+ — high research velocity"` |
| 💡 **Recommendations** | Scoring engine | `"Initiate tech transfer discussions"` |

### Why use University Research Report?

Manually profiling a university's research portfolio means visiting OpenAlex, PubMed, Google Scholar, USPTO, NIH Reporter, Grants.gov, EPO, and ORCID separately — building queries for each, reconciling inconsistent records, and spending 4–8 hours before you have a first draft. At $150/hour analyst cost, that is $600–$1,200 per institution. And the data is out of date by the time the draft lands.

This Actor automates the entire intelligence-gathering and scoring pipeline in a single click. Corporate development teams get a structured verdict with supporting evidence. Venture analysts compare multiple institutions in one sitting. Technology licensing managers identify near-market IP pipelines without a research subscription.

Beyond the data itself, the Apify platform adds:

- **Scheduling** — run quarterly benchmarks automatically to track institutional momentum over time
- **API access** — trigger reports from Python, JavaScript, or any HTTP client inside your existing CI or deal-flow pipeline
- **Dataset export** — push results directly to Google Sheets, Airtable, or any BI tool via the platform's native integrations
- **Monitoring** — receive Slack or email alerts when a scheduled run fails or produces unexpected output
- **Integrations** — connect to Zapier, Make, HubSpot, or webhooks to trigger downstream workflows the moment a report completes

### Features

- **8 data sources queried in parallel** — OpenAlex publications, OpenAlex research entities, arXiv (preprints), USPTO (US patents), EPO (European patents), NIH Reporter (grant funding), Grants.gov (federal opportunities), and ORCID (researcher profiles) all called simultaneously, completing in under 90 seconds
- **Commercialization readiness scoring (0–100, 30% composite weight)** — computes publication-to-patent conversion ratio; a ratio above 30% triggers the "strong commercialization pipeline" signal; evaluates patent quality via recency (2024+) and granted vs. application status; measures grant funding volume with TRL keyword classification across high (clinical trial, FDA, commercialize, spinoff), medium (translational, proof of concept, feasibility), and low (fundamental, theoretical, basic research) categories
- **Research hotspot detection (0–100, 20% composite weight)** — measures arXiv preprint velocity as a year-over-year ratio (recent 2025+ preprints vs 2024 baseline); flags acceleration when the ratio exceeds 1.5×; scores citation momentum from OpenAlex with bonus points for papers above 50 citations; calculates researcher density from ORCID; cross-source confirmation adds up to 20 bonus points when multiple databases corroborate activity
- **Lab intelligence profiling (0–100, 25% composite weight)** — measures PI productivity as works-per-researcher from ORCID; evaluates grant portfolio strength using a simplified funding agency diversity metric (HHI); counts IP output across USPTO and EPO combined; assesses publication breadth by counting distinct journal venues
- **Technology maturity assessment (0–100, 25% composite weight)** — performs TRL keyword classification on all patent, publication, and grant titles; calculates the ratio of granted patents to total applications; identifies landmark papers (100+ citations); detects SBIR/STTR funding stage signals as explicit tech-transfer pathway indicators
- **Composite scoring with 5-level verdict system** — weighted average across all four models; scores 75+ yield `ACQUIRE_NOW`; 55–74 yield `PARTNER`; 35–54 yield `MONITOR`; 15–34 yield `TOO_EARLY`; below 15 yield `PASS`
- **Two scoring overrides** — world-class labs (lab score ≥ 80) with high commercialization (score ≥ 60) are elevated to `ACQUIRE_NOW` regardless of composite; mature tech (tech maturity score ≥ 60) paired with weak labs (lab score < 30) is downgraded to `MONITOR` to prevent false positives
- **Narrative signal list** — each scoring model emits human-readable sentences explaining what drove the score (e.g., "High pub-to-patent conversion (33%) — strong commercialization pipeline")
- **Actionable recommendations** — six recommendation templates are conditionally emitted based on score thresholds, including PI retention risk warnings, SBIR/STTR pathway advice, and research sponsorship suggestions for pre-commercial stage institutions
- **Field and department scoping** — append a research field and/or department name to the query, narrowing all 8 data sources to the specified technology area
- **Top-10 item lists per source** — returns the top 10 publications, patents, grants, researchers, and preprints as structured arrays for downstream enrichment pipelines
- **Up to 1,000 records per data source** — the Actor fetches up to 1,000 items from each sub-actor dataset before scoring, giving the models sufficient signal on large, active institutions
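
As a rough illustration of the commercialization model's core metric, the publication-to-patent conversion check described above can be sketched in a few lines. This is a simplified reimplementation, not the Actor's actual code; the function name is ours, and only the 30% threshold and signal wording come from this feature list:

```python
def conversion_signal(patent_count: int, publication_count: int):
    """Compute the publication-to-patent conversion ratio and the
    signal string it triggers above the documented 30% threshold."""
    if publication_count == 0:
        return 0.0, None
    ratio = patent_count / publication_count
    signal = None
    if ratio > 0.30:
        signal = (f"High pub-to-patent conversion ({ratio:.0%}) — "
                  "strong commercialization pipeline")
    return round(ratio, 2), signal

# e.g. 14 patents against 42 publications, as in the output example below
ratio, signal = conversion_signal(14, 42)
```

With the example's counts (14 patents, 42 publications) the ratio lands at 0.33 and the signal fires, matching the sample signal string shown later in this README.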

### Use cases for university research intelligence reports

#### Corporate development and M&A technology scouting

Corporate development teams evaluating academic spinout targets need a ranked shortlist before committing analyst time to due diligence. Run this Actor against a portfolio of institutions in the target research field — quantum computing, CRISPR gene editing, solid-state battery technology — and sort by composite score. Institutions scoring above 75 with a `NEAR_MARKET` commercialization level represent the strongest licensing or acquisition candidates and warrant a direct outreach to the technology transfer office.

#### Venture capital and university spinout evaluation

Early-stage VC funds investing in deep tech spinouts need to evaluate whether a founding lab has the IP depth, funding history, and publication velocity to support a credible commercialization story. The lab intelligence and tech maturity scores provide a quantitative starting point that replaces hours of manual database searching. A world-class lab with an `ACQUIRE_NOW` verdict and 5+ SBIR/STTR grants is a strong signal that the technology has already passed government-funded validation.

#### Technology licensing and IP partnership sourcing

Licensing managers at pharma, defense, and industrial companies scan hundreds of institutions annually for licensable patents. This Actor surfaces institutions with high publication-to-patent conversion ratios (above 30%), recent granted patents (2024+), and explicit SBIR/STTR transfer pathways — exactly the profile of a lab actively moving IP toward commercialization, rather than one that files patents defensively.

#### Government science agency grant allocation

Program officers at NIH, NSF, DARPA, and DOE need to map institutional research strengths across departments to inform grant priorities. Run this Actor on key research institutions within a program area and use the `dataSources` counts and grant portfolio signals to identify which labs are underfunded relative to their publication output — a potential grant allocation opportunity.

#### University research benchmarking and internal audit

VP Research offices and technology transfer teams can benchmark their own institution's commercialization performance against peer institutions. Run the Actor with a specific department against three competitor universities and compare `conversionRatio`, `grantCount`, and `patentCount` fields directly. Departments with low patent conversion despite high publication output represent untapped IP value that may justify dedicated commercialization support.

#### Academic partnerships and sponsored research sourcing

Companies building research partnerships and sponsored research agreements need to identify labs with the right technical depth, funding stability, and team size to deliver on a multi-year program. The lab intelligence score's PI productivity and funding agency diversity metrics directly answer whether a lab has the infrastructure to absorb and execute sponsored research at scale.

### How to generate a university research report

1. **Enter the institution name** — Type the name of the university or research institution in the Institution field. Examples: "MIT", "Stanford University", "ETH Zurich", "Max Planck Institute", "Johns Hopkins".
2. **Optionally narrow the scope** — Enter a Research Field such as "CRISPR", "quantum computing", or "battery technology" to focus all 8 data sources on that technology area. Add a Department such as "Biomedical Engineering" or "Computer Science" to further narrow results.
3. **Click Start and wait up to 90 seconds** — The Actor calls all 8 data sources in parallel. Typical run time is 60–90 seconds depending on the institution's data volume across sources.
4. **Download results** — Click the Dataset tab and export as JSON, CSV, or Excel. The composite score, verdict, and all sub-scores are in the first record.

### Input parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `institution` | string | Yes | — | University or research institution name. Examples: "MIT", "Stanford University", "Max Planck Institute" |
| `field` | string | No | — | Research field or technology area to scope all queries. Examples: "quantum computing", "CRISPR", "battery technology" |
| `department` | string | No | — | Department or lab name to narrow the scope. Examples: "Computer Science", "Biomedical Engineering" |

#### Input examples

**Broad institution scan — most common use case:**
```json
{
  "institution": "Stanford University"
}
````

**Focused field and department report:**

```json
{
  "institution": "MIT",
  "field": "quantum computing",
  "department": "Computer Science"
}
```

**International institution scan:**

```json
{
  "institution": "ETH Zurich",
  "field": "battery technology"
}
```

#### Input tips

- **Start with institution only** — a broad scan first gives you the institution's overall profile across all research fields; narrow with `field` only after seeing the initial score.
- **Use canonical institution names** — "MIT" returns more results than "Massachusetts Institute of Technology" in most databases; test both if the first run returns thin data.
- **Combine with field for deal flow** — when scanning a list of institutions for a specific technology, add the same `field` value to every run so composite scores are comparable across institutions.
- **Department scoping reduces noise** — if an institution is large and you care only about one department, the `department` parameter prevents unrelated grants and patents from inflating (or diluting) the score.
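
The deal-flow tip above ends with downloaded reports that need ranking. A minimal sketch of that step, assuming each record carries the `institution`, `field`, and `compositeScore` fields shown in the output example (the function name is ours):

```python
def rank_reports(reports, field):
    """Keep only reports scoped to the same field (so composite scores
    are comparable) and sort them best-first by composite score."""
    comparable = [r for r in reports if r.get("field") == field]
    return sorted(comparable, key=lambda r: r["compositeScore"], reverse=True)

reports = [
    {"institution": "MIT", "field": "quantum computing", "compositeScore": 72},
    {"institution": "ETH Zurich", "field": "quantum computing", "compositeScore": 64},
    {"institution": "Stanford University", "field": "CRISPR", "compositeScore": 80},
]
ranked = rank_reports(reports, "quantum computing")
# Stanford is excluded: its report was scoped to a different field,
# so its score is not comparable.
```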

### Output example

```json
{
  "institution": "MIT",
  "field": "quantum computing",
  "department": null,
  "query": "MIT quantum computing",
  "generatedAt": "2026-03-20T11:14:22.000Z",
  "compositeScore": 72,
  "verdict": "PARTNER",
  "recommendations": [
    "High commercialization readiness — initiate tech transfer or licensing discussions",
    "Research area is rapidly accelerating — first-mover advantage available",
    "Large research group — evaluate key PI retention risk before engagement",
    "TRL 5 — technology suitable for pilot or demonstration programs"
  ],
  "allSignals": [
    "High pub-to-patent conversion (33%) — strong commercialization pipeline",
    "5 recent patents (2024+) — active IP pipeline",
    "2 granted patents — validated IP",
    "$12.5M in grant funding — well-funded research program",
    "8 preprints in 2025+ — high research velocity",
    "Preprint acceleration 1.8x — field gaining momentum",
    "4 highly-cited papers (50+) — research impact cluster",
    "7 principal investigators — substantial research group",
    "$18.2M grant portfolio — major research program",
    "Funding from 4 agencies — diversified support base",
    "Published across 12 venues — broad research impact",
    "3 applied/commercial-stage outputs — near-market technology",
    "2 SBIR/STTR grants — active tech transfer pathway"
  ],
  "scoring": {
    "commercializationReadiness": {
      "score": 68,
      "patentCount": 14,
      "publicationCount": 42,
      "conversionRatio": 0.33,
      "readinessLevel": "NEAR_MARKET",
      "signals": [
        "High pub-to-patent conversion (33%) — strong commercialization pipeline",
        "5 recent patents (2024+) — active IP pipeline",
        "$12.5M in grant funding — well-funded research program"
      ]
    },
    "researchHotSpots": {
      "score": 75,
      "preprintVelocity": 8,
      "citationAcceleration": 35,
      "hotspotLevel": "HOT",
      "signals": [
        "8 preprints in 2025+ — high research velocity",
        "Preprint acceleration 1.8x — field gaining momentum",
        "4 highly-cited papers (50+) — research impact cluster",
        "Average 35 citations — strong field attention"
      ]
    },
    "labIntelligence": {
      "score": 82,
      "piCount": 7,
      "grantCount": 12,
      "patentCount": 14,
      "labStrength": "WORLD_CLASS",
      "signals": [
        "7 principal investigators — substantial research group",
        "$18.2M grant portfolio — major research program",
        "Funding from 4 agencies — diversified support base",
        "Published across 12 venues — broad research impact"
      ]
    },
    "techMaturity": {
      "score": 55,
      "trlEstimate": 5,
      "patentMaturity": 14,
      "publicationMaturity": 12,
      "maturityLevel": "PROTOTYPE",
      "signals": [
        "3 applied/commercial-stage outputs — near-market technology",
        "50% patent grant rate — proven IP",
        "2 landmark papers (100+ citations) — technology validated by community",
        "2 SBIR/STTR grants — active tech transfer pathway"
      ]
    }
  },
  "dataSources": {
    "openalexPublications": 25,
    "openalexResearch": 17,
    "usPatents": 8,
    "epoPatents": 6,
    "nihGrants": 7,
    "federalGrants": 5,
    "researchers": 7,
    "arxivPreprints": 8
  },
  "topPublications": [
    {
      "title": "Fault-Tolerant Quantum Error Correction in Superconducting Qubits",
      "journal": "Nature Physics",
      "cited_by_count": 142,
      "publication_year": 2024
    }
  ],
  "topPatents": [
    {
      "title": "Qubit Coherence Enhancement via Phononic Crystal Isolation",
      "status": "granted",
      "filing_date": "2024-02-14",
      "assignee": "Massachusetts Institute of Technology"
    }
  ],
  "topGrants": [
    {
      "title": "Scalable Quantum Computing Architectures for Near-Term Applications",
      "agency": "NSF",
      "totalCost": 2850000,
      "activity_code": "R01"
    }
  ],
  "topResearchers": [
    {
      "name": "Dr. Priya Mehta",
      "orcid": "0000-0002-3451-8812",
      "works_count": 87,
      "affiliation": "MIT Laboratory for Quantum Information"
    }
  ],
  "topPreprints": [
    {
      "title": "Logical Qubit Fidelity Above 99.9% Using Surface Code Decoders",
      "published": "2025-02-08",
      "authors": ["Mehta, P.", "Vance, R.", "Chen, L."]
    }
  ]
}
```

### Output fields

| Field | Type | Description |
|---|---|---|
| `institution` | string | Institution name as provided in input |
| `field` | string \| null | Research field if provided; null otherwise |
| `department` | string \| null | Department if provided; null otherwise |
| `query` | string | Combined query string sent to all data sources |
| `generatedAt` | string | ISO 8601 timestamp of report generation |
| `compositeScore` | number | Weighted composite score 0–100 across all four models |
| `verdict` | string | Engagement verdict: `ACQUIRE_NOW`, `PARTNER`, `MONITOR`, `TOO_EARLY`, or `PASS` |
| `recommendations` | string\[] | Conditionally emitted actionable recommendations (up to 6) |
| `allSignals` | string\[] | All human-readable signal strings from all four scoring models |
| `scoring.commercializationReadiness.score` | number | Commercialization readiness score 0–100 |
| `scoring.commercializationReadiness.patentCount` | number | Total patents found across USPTO and EPO |
| `scoring.commercializationReadiness.publicationCount` | number | Total publications found across OpenAlex sources |
| `scoring.commercializationReadiness.conversionRatio` | number | Publication-to-patent ratio (patents ÷ publications) |
| `scoring.commercializationReadiness.readinessLevel` | string | `PRE_DISCOVERY`, `EARLY_STAGE`, `DEVELOPING`, `NEAR_MARKET`, or `MARKET_READY` |
| `scoring.commercializationReadiness.signals` | string\[] | Evidence strings that drove the commercialization score |
| `scoring.researchHotSpots.score` | number | Research hotspot score 0–100 |
| `scoring.researchHotSpots.preprintVelocity` | number | Count of arXiv preprints from 2025+ |
| `scoring.researchHotSpots.citationAcceleration` | number | Average citations per OpenAlex paper |
| `scoring.researchHotSpots.hotspotLevel` | string | `DORMANT`, `EMERGING`, `ACTIVE`, `HOT`, or `BREAKTHROUGH` |
| `scoring.researchHotSpots.signals` | string\[] | Evidence strings for hotspot detection |
| `scoring.labIntelligence.score` | number | Lab intelligence score 0–100 |
| `scoring.labIntelligence.piCount` | number | Count of ORCID-registered researchers found |
| `scoring.labIntelligence.grantCount` | number | Total grants found across NIH and Grants.gov |
| `scoring.labIntelligence.patentCount` | number | Total patents across USPTO and EPO |
| `scoring.labIntelligence.labStrength` | string | `UNKNOWN`, `NASCENT`, `ESTABLISHED`, `PROMINENT`, or `WORLD_CLASS` |
| `scoring.labIntelligence.signals` | string\[] | Evidence strings for lab intelligence profile |
| `scoring.techMaturity.score` | number | Technology maturity score 0–100 |
| `scoring.techMaturity.trlEstimate` | number | Estimated Technology Readiness Level 1–9 |
| `scoring.techMaturity.patentMaturity` | number | Sub-score for patent granted-to-application ratio (max 25) |
| `scoring.techMaturity.publicationMaturity` | number | Sub-score for landmark citation papers (max 20) |
| `scoring.techMaturity.maturityLevel` | string | `BASIC_RESEARCH`, `PROOF_OF_CONCEPT`, `PROTOTYPE`, `DEMONSTRATION`, or `DEPLOYMENT_READY` |
| `scoring.techMaturity.signals` | string\[] | Evidence strings for tech maturity assessment |
| `dataSources.openalexPublications` | number | Record count from OpenAlex publications query |
| `dataSources.openalexResearch` | number | Record count from OpenAlex research entities query |
| `dataSources.usPatents` | number | Record count from USPTO patent search |
| `dataSources.epoPatents` | number | Record count from EPO patent search |
| `dataSources.nihGrants` | number | Record count from NIH Reporter |
| `dataSources.federalGrants` | number | Record count from Grants.gov |
| `dataSources.researchers` | number | Record count from ORCID researcher search |
| `dataSources.arxivPreprints` | number | Record count from arXiv preprint search |
| `topPublications` | object\[] | Up to 10 publications from OpenAlex |
| `topPatents` | object\[] | Up to 10 patents from USPTO |
| `topEpoPatents` | object\[] | Up to 10 patents from EPO |
| `topGrants` | object\[] | Up to 10 grants from NIH Reporter |
| `topFederalOpportunities` | object\[] | Up to 10 records from Grants.gov |
| `topResearchers` | object\[] | Up to 10 researcher profiles from ORCID |
| `topPreprints` | object\[] | Up to 10 preprints from arXiv |

### How much does it cost to generate a university research report?

University Research Report uses **pay-per-event pricing — $300.00 per 1,000 analysis runs, i.e. $0.30 per report**. Platform compute costs are included. Each run calls 8 sub-actors in parallel; the cost reflects the aggregated API calls and compute across all sources.

| Scenario | Reports | Cost per report | Total cost |
|---|---|---|---|
| Quick test | 1 | $0.30 | $0.30 |
| Small batch | 10 | $0.30 | $3.00 |
| Medium batch | 50 | $0.30 | $15.00 |
| Large batch | 200 | $0.30 | $60.00 |
| Enterprise | 1,000 | $0.30 | $300.00 |

You can set a **maximum spending limit** per run to control costs. The Actor stops when your budget is reached.

Apify's free tier includes $5 of monthly credits — enough for approximately 16 reports per month at no charge. Compare this to research intelligence platforms like Dimensions ($1,500+/year) or SciVal ($2,000+/year) — with this Actor, most users spend $1–$10/month with no subscription commitment.

### Generate university research reports using the API

#### Python

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("ryanclinton/university-research-report").call(run_input={
    "institution": "MIT",
    "field": "quantum computing",
    "department": "Computer Science"
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"Institution: {item['institution']}")
    print(f"Composite Score: {item['compositeScore']}/100")
    print(f"Verdict: {item['verdict']}")
    print(f"Commercialization Level: {item['scoring']['commercializationReadiness']['readinessLevel']}")
    print(f"Lab Strength: {item['scoring']['labIntelligence']['labStrength']}")
    print(f"TRL Estimate: {item['scoring']['techMaturity']['trlEstimate']}")
    for signal in item.get("allSignals", []):
        print(f"  Signal: {signal}")
```

#### JavaScript

```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_API_TOKEN" });

const run = await client.actor("ryanclinton/university-research-report").call({
    institution: "MIT",
    field: "quantum computing",
    department: "Computer Science"
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
for (const item of items) {
    console.log(`Institution: ${item.institution}`);
    console.log(`Composite Score: ${item.compositeScore}/100`);
    console.log(`Verdict: ${item.verdict}`);
    console.log(`Lab Strength: ${item.scoring.labIntelligence.labStrength}`);
    console.log(`TRL Estimate: ${item.scoring.techMaturity.trlEstimate}`);
    console.log(`Recommendations:`);
    item.recommendations.forEach(r => console.log(`  - ${r}`));
}
```

#### cURL

```bash
# Start the Actor run
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~university-research-report/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"institution": "MIT", "field": "quantum computing", "department": "Computer Science"}'

# Fetch results (replace DATASET_ID from the run response above)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN&format=json"
```

### How University Research Report works

#### Phase 1 — Parallel data collection across 8 sources

The Actor constructs a composite query string from the three input parameters: institution name, optional department, and optional research field. All three are concatenated to form a single query (e.g., "MIT Computer Science quantum computing"). This query is passed simultaneously to all 8 sub-actors using `Promise.all`, with each sub-actor allocated 512 MB of memory and a 120-second timeout. The sub-actors cover: OpenAlex publications, OpenAlex research entities, USPTO patent search, EPO patent search, NIH Reporter grants, Grants.gov federal opportunities, ORCID researcher profiles, and arXiv preprints. Each sub-actor returns up to 1,000 records into a named array in the shared data object.
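
The query construction step amounts to joining the non-empty parameters in order. A sketch of that concatenation (an illustration, not the Actor's source; the function name is ours):

```python
def build_query(institution, department=None, field=None):
    """Concatenate the non-empty input parameters, in the documented
    order (institution, department, field), into one query string."""
    parts = [institution, department, field]
    return " ".join(p for p in parts if p)

# With department omitted, this reproduces the `query` field from the
# output example: "MIT quantum computing".
q = build_query("MIT", field="quantum computing")
```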

#### Phase 2 — Four independent scoring models

The four scoring functions in `scoring.ts` operate on the same shared data object and run sequentially after data collection. Each function scans the relevant source arrays and accumulates sub-scores:

**Commercialization readiness** (max 100) — Computes pub-to-patent conversion ratio from OpenAlex and USPTO/EPO counts (max 30 pts); evaluates patent recency (2024+ = 4 pts each) and granted status (B1/B2 kind codes = 3 pts each, max 25 pts); measures grant funding via `log10(totalFunding) × 2` plus TRL keyword match points (max 25 pts); scans all patent and publication titles for TRL\_HIGH, TRL\_MED, and TRL\_LOW keyword lists (max 20 pts). TRL\_HIGH keywords: clinical trial, FDA, commercializ, licens, startup, spinoff, prototype, pilot. TRL\_MED: applied, translational, proof of concept, feasibility, validation. TRL\_LOW: fundamental, theoretical, basic research, exploratory, novel.

**Research hotspot detection** (max 100) — Computes arXiv preprint velocity as a ratio of 2025+ papers to 2024 papers (acceleration flagged at 1.5×); scores OpenAlex citation density with bonus points for papers above 50 citations and a 0.5× multiplier on average citations (max 30 pts each); adds ORCID researcher count at 2 pts per researcher (max 20 pts); adds up to 20 confirmation points when 3 sources all return results.

**Lab intelligence profiling** (max 100) — Sums ORCID works-per-researcher at 0.2× multiplier plus PI count at 3 pts each (max 30 pts); evaluates grant portfolio size and unique funding agency count with a simplified inverse-HHI diversity bonus at 3 agencies (max 25 pts); scores IP volume at 3 pts per patent (max 25 pts); counts distinct journal venue strings across all OpenAlex papers at 2 pts each plus paper count up to 10 (max 20 pts).

**Technology maturity** (max 100) — Classifies every title (patents + papers + grants) into high/medium/low TRL buckets; computes a weighted TRL average (high×3 + med×2 + low×1 ÷ total) that maps to a 1–9 TRL estimate (max 35 pts); scores patent grant ratio at 20 pts × ratio plus 2 pts per patent (max 25 pts); awards 5 pts per paper with 100+ citations (max 20 pts); counts SBIR/STTR keywords in grant titles and activity codes at 5 pts each (max 20 pts).
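
The TRL keyword classification shared by the commercialization and maturity models can be sketched as follows. The keyword lists are taken from the commercialization section above; the bucketing and the linear mapping of the 1–3 weighted average onto a 1–9 TRL estimate are our illustrative assumptions, since the Actor's exact scaling is not documented:

```python
TRL_HIGH = ["clinical trial", "fda", "commercializ", "licens", "startup",
            "spinoff", "prototype", "pilot"]
TRL_MED = ["applied", "translational", "proof of concept", "feasibility",
           "validation"]
TRL_LOW = ["fundamental", "theoretical", "basic research", "exploratory",
           "novel"]

def classify(title):
    """Bucket a title by the first TRL keyword list it matches."""
    t = title.lower()
    if any(k in t for k in TRL_HIGH):
        return "high"
    if any(k in t for k in TRL_MED):
        return "med"
    return "low" if any(k in t for k in TRL_LOW) else None

def trl_estimate(titles):
    """Weighted average (high=3, med=2, low=1) over classified titles,
    then a linear map of the 1..3 average onto a 1..9 TRL estimate."""
    counts = {"high": 0, "med": 0, "low": 0}
    for title in titles:
        bucket = classify(title)
        if bucket:
            counts[bucket] += 1
    total = sum(counts.values())
    if total == 0:
        return 1  # no classifiable titles: assume basic research
    avg = (counts["high"] * 3 + counts["med"] * 2 + counts["low"]) / total
    return round(1 + (avg - 1) * 4)
```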

#### Phase 3 — Composite scoring and verdict generation

The composite score is a weighted average: commercialization (30%) + tech maturity (25%) + lab intelligence (25%) + hotspot (20%). Two override rules apply: a world-class lab (score ≥ 80) with high commercialization (score ≥ 60) elevates the verdict to `ACQUIRE_NOW`; a mature technology (TRL score ≥ 60) paired with a weak lab (score < 30) downgrades to `MONITOR`. Up to 6 recommendation strings are conditionally emitted based on individual sub-score thresholds. All signal strings from all four models are concatenated into the `allSignals` array.
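
The weighting, verdict thresholds, and override rules just described can be summarized in one small function. This is a simplified sketch using the thresholds from this section; the Actor's exact rounding and rule ordering may differ:

```python
WEIGHTS = {"commercialization": 0.30, "techMaturity": 0.25,
           "labIntelligence": 0.25, "hotspot": 0.20}

def verdict(scores):
    """Weighted composite over the four sub-scores, then the two
    documented overrides, then the five verdict thresholds."""
    composite = round(sum(scores[k] * w for k, w in WEIGHTS.items()))
    # Override 1: world-class lab with high commercialization readiness.
    if scores["labIntelligence"] >= 80 and scores["commercialization"] >= 60:
        return composite, "ACQUIRE_NOW"
    # Override 2: mature technology attached to a weak lab.
    if scores["techMaturity"] >= 60 and scores["labIntelligence"] < 30:
        return composite, "MONITOR"
    if composite >= 75:
        v = "ACQUIRE_NOW"
    elif composite >= 55:
        v = "PARTNER"
    elif composite >= 35:
        v = "MONITOR"
    elif composite >= 15:
        v = "TOO_EARLY"
    else:
        v = "PASS"
    return composite, v
```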

#### Phase 4 — Output assembly and dataset push

The Actor assembles a single report object containing the composite score, verdict, recommendations, all signals, all four sub-score objects, `dataSources` record counts, and top-10 item arrays from each source. A single `Actor.pushData(report)` call writes the record to the Apify dataset.

### Tips for best results

1. **Run a broad scan before narrowing.** Query the institution alone first. Review the `dataSources` counts to see which sources returned data. If `usPatents` and `epoPatents` are both zero, the institution may use a different canonical name in patent databases — try adding the full legal name or common abbreviation in a second run.

2. **Use field scoping for deal-flow pipelines.** When comparing multiple institutions against a single technology thesis (e.g., "solid-state batteries"), apply the same `field` value to all runs. Composite scores will be comparable because all 8 sources are scoped to the same query terms.

3. **Interpret signals as evidence, not verdicts.** The `allSignals` array provides the underlying evidence for the composite score. A score of 65 with signals about $20M in NIH funding and 10 SBIR grants is a different profile from a score of 65 driven by citation velocity alone. Read the signals.

4. **Large institutions may under-represent specific labs.** A broad "Harvard University" query returns aggregate data across all departments. Narrow with `department: "Wyss Institute"` or `department: "John A. Paulson School of Engineering"` to isolate specific research groups that might be obscured by the institution's overall size.

5. **Schedule quarterly to track momentum.** A single score is a snapshot. Scheduling the Actor to run quarterly on a watchlist of institutions turns it into a momentum tracker — rising hotspot scores or increasing patent counts are early signals of accelerating commercialization activity.

6. **Combine with Patent Search for deep IP analysis.** The top-10 patent arrays in the output are a sampling. For full IP landscape analysis, take the `topPatents` assignee names and pass them to [Patent Search](https://apify.com/ryanclinton/patent-search) directly to retrieve the complete portfolio.

7. **Use the `dataSources` counts to assess data quality.** If `researchers` returns 0, ORCID had no matching profiles — this does not mean the lab is small, just that its researchers may not have ORCID accounts. Adjust your interpretation of the lab intelligence score accordingly.
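Tips 1 and 7 can be combined into a simple pre-check on a finished report: flag any source whose record count falls below the "fewer than 5 records" guideline from the Limitations section before trusting the composite score. The report dict shape is an assumption based on this README.

```python
# Flag weak data sources in a finished report before trusting the score.
MIN_RECORDS = 5  # below this, treat the score as directional (see Limitations)

def weak_sources(report):
    """Return the names of data sources that returned too few records."""
    return [name for name, count in report.get("dataSources", {}).items()
            if count < MIN_RECORDS]

report = {"dataSources": {"publications": 120, "usPatents": 0,
                          "epoPatents": 2, "researchers": 14}}
print(weak_sources(report))  # → ['usPatents', 'epoPatents']
```

If both patent sources appear in the output, retry with the institution's full legal name before drawing any conclusion about its IP activity.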

### Combine with other Apify Actors

| Actor | How to combine |
|---|---|
| [Company Deep Research](https://apify.com/ryanclinton/company-deep-research) | After identifying a spinout target via university report, run company deep research on the spinout entity itself to get financial, leadership, and competitive intelligence |
| [B2B Lead Qualifier](https://apify.com/ryanclinton/b2b-lead-qualifier) | Score the technology transfer office contacts extracted from institution websites to prioritize outreach after an `ACQUIRE_NOW` verdict |
| [Website Contact Scraper](https://apify.com/ryanclinton/website-contact-scraper) | Extract emails and phone numbers from the institution's technology transfer office or department pages after the report identifies a target lab |
| [Waterfall Contact Enrichment](https://apify.com/ryanclinton/waterfall-contact-enrichment) | Enrich the principal investigator names from the `topResearchers` array with email addresses, LinkedIn profiles, and phone numbers |
| [Bulk Email Verifier](https://apify.com/ryanclinton/bulk-email-verifier) | Verify researcher and TTO contact emails gathered from the institution before sending outreach |
| [HubSpot Lead Pusher](https://apify.com/ryanclinton/hubspot-lead-pusher) | Push high-scoring institutions (ACQUIRE\_NOW, PARTNER verdicts) directly into your CRM as deal or company records |
| [SEC EDGAR Filing Analyzer](https://apify.com/ryanclinton/sec-edgar-filing-analyzer) | Cross-reference public companies that cite university IP in their 10-K filings to find existing licensees and competitive intelligence |

### Limitations

- **Data availability varies by institution.** Lesser-known institutions or those in non-English-speaking countries may return sparse results from ORCID and arXiv, which have lower international coverage relative to PubMed or Scopus. Scores for institutions with fewer than 5 records per source should be treated as directional rather than definitive.
- **No real-time data.** All sources are indexed databases; data freshness varies by source. arXiv preprints typically appear within days of submission; USPTO and EPO patent data can lag by 12–18 months from filing.
- **Patent counts are query-limited.** The actor searches by institution name. Patents assigned to spinout companies, individual inventors, or holding entities associated with the institution will not be captured. USPTO and EPO patent assignee records are inconsistently normalized.
- **ORCID profiles are opt-in.** Researchers who have not registered an ORCID profile are invisible to the actor. Institutions with low ORCID adoption rates (common in some engineering and applied science departments) will produce lower lab intelligence scores than their actual team size warrants.
- **TRL classification relies on title keywords.** The TRL scoring scans titles only, not full-text abstracts or claims. A paper titled "A Novel Theoretical Framework..." is classified as low TRL even if the body describes prototype work. This produces occasional misclassification of unconventionally titled papers.
- **Grants.gov reflects open federal opportunities, not historical awards.** NIH Reporter covers historical NIH awards. Grants.gov captures active and recent federal opportunities. The two sources are complementary but do not provide a complete picture of all federal funding an institution has received.
- **Run time can exceed 90 seconds on large institutions.** Major research universities (MIT, Stanford, Johns Hopkins) may return large datasets from multiple sources. If the run times out, reduce scope with a specific `field` and `department` to limit result volume.
- **The composite score is a screening tool, not a definitive assessment.** It is designed to rank-order institutions for human follow-up, not replace legal, financial, or technical due diligence. All `ACQUIRE_NOW` verdicts should be validated with direct engagement with the institution's technology transfer office.

### Integrations

- [Zapier](https://apify.com/integrations/zapier) — trigger a university research report automatically when a new institution is added to a watchlist spreadsheet, then route high-scoring reports to a Slack channel or CRM
- [Make](https://apify.com/integrations/make) — build multi-step workflows that run a report, filter by verdict, and create HubSpot deal records only for `ACQUIRE_NOW` and `PARTNER` results
- [Google Sheets](https://apify.com/integrations/google-sheets) — export composite scores, verdicts, and signal lists for a portfolio of institutions into a single spreadsheet for side-by-side comparison
- [Apify API](https://docs.apify.com/api/v2) — integrate university research reports into internal deal-flow platforms, R\&D portals, or competitive intelligence dashboards via REST API
- [Webhooks](https://docs.apify.com/platform/integrations/webhooks) — fire a webhook on run completion to trigger downstream enrichment pipelines (e.g., contact scraping of the TTO website, CRM record creation)
- [LangChain / LlamaIndex](https://docs.apify.com/platform/integrations) — pipe the `allSignals` and `recommendations` arrays into a RAG pipeline or AI agent that synthesizes a natural-language investment memo or partnership brief

### Troubleshooting

- **All data source counts are 0 or very low.** The institution name may not match how it appears in the queried databases. Try the full legal name ("Massachusetts Institute of Technology" vs. "MIT"), a common abbreviation, or the name in the institution's primary language. Also check that the `field` value is not so narrow that it returns no results — try removing it for a first run.
- **Composite score seems unexpectedly low for a well-known institution.** Large, broad institutions return thousands of records across many research fields. Without a `field` parameter, the scoring models receive a diverse mix that may dilute patent conversion ratios and TRL signals. Add a specific `field` to focus the query on the relevant technology area.
- **Run times out before completing.** Very large institutions with a broad (no `field`) query can produce large datasets that approach the 120-second per-sub-actor timeout. Add `field` and `department` to reduce data volume, or re-run; transient network delays occasionally cause individual sub-actor calls to time out and return empty arrays.
- **`verdict` is `MONITOR` despite high individual scores.** Check whether the tech maturity score is high (≥ 60) but the lab intelligence score is low (< 30). The override rule explicitly downgrades to `MONITOR` in this case, because mature technology in a weak lab is a higher-risk engagement target. Review the `labIntelligence.signals` for context on what drove the low lab score.
- **`topResearchers` is empty.** ORCID coverage is lower for some institutions and fields. This does not invalidate the report — the lab intelligence score will be lower, but the other three models are unaffected. Check `dataSources.researchers` to confirm the ORCID query returned no results.

### Responsible use

- This Actor only accesses publicly available academic and government databases (OpenAlex, arXiv, USPTO, EPO, NIH Reporter, Grants.gov, ORCID).
- All data sources are open-access or explicitly licensed for public use.
- Researcher profile data from ORCID is used in aggregate for scoring purposes only; do not use individual researcher contact details for unsolicited commercial outreach without establishing a legitimate basis.
- Comply with GDPR and applicable data protection laws when storing or processing researcher data returned in the `topResearchers` array.
- For guidance on web scraping and data use legality, see [Apify's guide](https://blog.apify.com/is-web-scraping-legal/).

### FAQ

**How does University Research Report decide whether to give an ACQUIRE\_NOW verdict?**
The composite score must reach 75 or above — calculated as commercialization readiness (30%) + tech maturity (25%) + lab intelligence (25%) + hotspot detection (20%). An override also elevates any institution with a world-class lab intelligence score (≥ 80) and a commercialization score of 60 or higher directly to `ACQUIRE_NOW`, regardless of the composite total.

**How many data sources does the university research report query?**
Eight: OpenAlex (two separate queries — publications and research entities), arXiv, USPTO, EPO, NIH Reporter, Grants.gov, and ORCID. All are called in parallel, so the total run time is bounded by the slowest single source rather than the sum of all eight.
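The fan-out pattern described above can be sketched with `asyncio`. This is illustrative only: the Actor's internal source-calling code is not published, and the source names and delays below are stand-ins.

```python
import asyncio

async def query_source(name, delay):
    # Stand-in for one data-source call; `delay` simulates network latency.
    await asyncio.sleep(delay)
    return name, []  # real calls would return parsed records

async def run_all():
    # All eight calls start together, so total wall-clock time is bounded
    # by the slowest source, not the sum of all eight.
    tasks = [query_source(n, d) for n, d in [
        ("openalexWorks", 0.02), ("openalexEntities", 0.01), ("arxiv", 0.03),
        ("usPatents", 0.02), ("epoPatents", 0.01), ("nihReporter", 0.02),
        ("grantsGov", 0.01), ("orcid", 0.02),
    ]]
    return dict(await asyncio.gather(*tasks))

results = asyncio.run(run_all())
```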

**How long does a typical university research report run take?**
Most runs complete in 60–90 seconds. Large institutions without a `field` scope may take up to 120 seconds. Narrow queries with both `field` and `department` typically finish in under 60 seconds.

**Can I compare multiple universities side by side?**
Yes. Run the Actor for each institution with the same `field` value to ensure all queries are scoped identically, then download results from each run and sort by `compositeScore`. For automated comparison pipelines, use the API with a loop across institution names.
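A minimal comparison loop with the Python client might look like this. The institution list and `field` value are examples, and `compositeScore` is assumed to be present in each dataset item; the import is guarded so the ranking helper also works where `apify-client` is not installed.

```python
try:
    from apify_client import ApifyClient  # pip install apify-client
except ImportError:  # ranking helper below still works without the client
    ApifyClient = None

ACTOR_ID = "ryanclinton/university-research-report"

def compare_institutions(client, names, field):
    """Run the report Actor once per institution (same `field` for
    comparability) and return all items ranked by compositeScore, descending."""
    reports = []
    for name in names:
        run = client.actor(ACTOR_ID).call(
            run_input={"institution": name, "field": field},
        )
        reports.extend(client.dataset(run["defaultDatasetId"]).iterate_items())
    return sorted(reports, key=lambda r: r.get("compositeScore", 0), reverse=True)

# Usage (requires a real API token):
# client = ApifyClient("<YOUR_API_TOKEN>")
# ranked = compare_institutions(
#     client, ["MIT", "Stanford University"], "solid-state batteries")
```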

**What is the publication-to-patent conversion ratio and why does it matter?**
It is calculated as `patents ÷ publications` across the queried sources. A ratio above 30% signals that a significant fraction of the lab's research output is being protected as intellectual property — a strong indicator of commercialization intent. A ratio below 5% on a high-publication institution typically means the research stays academic and is not being pushed toward market applications.
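The ratio and the interpretation bands quoted above reduce to a few lines. The band labels are descriptive shorthand, not output values emitted by the Actor.

```python
def patent_conversion(patents, publications):
    """Publication-to-patent conversion ratio with the thresholds quoted above.

    Returns (ratio, label); labels are illustrative, not Actor output fields.
    """
    if publications == 0:
        return 0.0, "insufficient data"
    ratio = patents / publications
    if ratio > 0.30:
        label = "strong commercialization intent"
    elif ratio < 0.05:
        label = "primarily academic output"
    else:
        label = "moderate conversion"
    return ratio, label
```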

**How accurate is the TRL estimate?**
The TRL estimate (1–9) is derived from keyword classification of patent, publication, and grant titles using three keyword lists. It is a directional estimate appropriate for screening and prioritization, not a formal TRL assessment. For formal TRL evaluation, the estimate should be validated by a domain expert reviewing the actual patents and publications.
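Title-keyword binning of this kind can be sketched as below. The three keyword lists and the TRL values assigned to each bin are assumptions for illustration; the Actor's actual lists are not published in this README.

```python
# Illustrative title-keyword TRL binning (hypothetical keyword lists).
LOW_TRL = {"theoretical", "framework", "model", "simulation"}
MID_TRL = {"prototype", "demonstration", "validation", "testbed"}
HIGH_TRL = {"deployment", "commercial", "pilot", "manufacturing"}

def estimate_trl(title):
    """Map a title to a rough TRL value by keyword-list membership."""
    words = set(title.lower().replace(",", " ").split())
    if words & HIGH_TRL:
        return 8
    if words & MID_TRL:
        return 5
    if words & LOW_TRL:
        return 2
    return 3  # no keyword hit: default to an early-stage estimate

print(estimate_trl("A Novel Theoretical Framework for Ion Transport"))  # → 2
```

This also makes the failure mode from the Limitations section concrete: a title containing "theoretical framework" is binned low regardless of what the paper body describes.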

**Can I use this actor for non-US institutions?**
Yes. EPO patent data is global. OpenAlex and arXiv have strong international coverage. NIH Reporter and Grants.gov are US-specific, so non-US institutions will typically return lower grant scores — this reflects genuine differences in publicly accessible funding data, not necessarily lower actual research funding.

**Is it legal to scrape and use publicly available academic research data?**
All data sources queried by this Actor — OpenAlex, arXiv, USPTO, EPO, NIH Reporter, Grants.gov, and ORCID — are explicitly open-access, public-domain government databases, or databases with public APIs designed for programmatic access. Using this data for commercial intelligence and research purposes is consistent with their terms of service. See [Apify's web scraping legal guide](https://blog.apify.com/is-web-scraping-legal/) for broader context.

**How is this different from Dimensions or SciVal?**
Dimensions and SciVal are comprehensive bibliometric platforms starting at $1,500–$2,000/year and require institutional or enterprise licensing. University Research Report focuses specifically on the commercialization intelligence workflow — patent conversion, TRL assessment, tech transfer signals — rather than broad bibliometric analysis. It costs $0.30 per run ($300 per 1,000 runs), requires no subscription, and outputs a structured machine-readable verdict suitable for programmatic deal-flow pipelines.

**What happens if one of the 8 data sources returns an error?**
Each sub-actor call is wrapped in a try-catch block. If a source errors or times out, it returns an empty array and the scoring models proceed with the data from the remaining sources. The `dataSources` count fields in the output will show 0 for the affected source, making it transparent which sources contributed to the score.

**Can I schedule this actor to monitor a set of institutions over time?**
Yes. Use Apify's built-in scheduler to run the Actor weekly or monthly against a fixed institution and field. Over time, rising composite scores, accelerating preprint velocity, or new SBIR grants are early signals of increasing commercialization momentum. You can combine scheduling with webhooks to alert your team automatically when a monitored institution's score crosses a threshold.

**Does this actor access university technology transfer office listings or licensing databases?**
No. It uses publicly accessible academic databases only. TTO licensing databases (e.g., university TTO portals, AUTM databases) are not queried. For current licensing availability, use the Actor to identify target institutions and then contact their technology transfer office directly.

### Help us improve

If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:

1. Go to [Account Settings > Privacy](https://console.apify.com/account/privacy)
2. Enable **Share runs with public Actor creators**

This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the Actor developer, not publicly.

### Support

Found a bug or have a feature request? Open an issue in the [Issues tab](https://console.apify.com/actors/university-research-report/issues) on this Actor's page. For custom solutions or enterprise integrations, reach out through the Apify platform.

# Actor input Schema

## `institution` (type: `string`):

University or research institution name (e.g., "MIT", "Stanford University", "Max Planck Institute")

## `field` (type: `string`):

Optional research field or technology area to focus the report (e.g., "quantum computing", "CRISPR", "battery technology")

## `department` (type: `string`):

Optional department or lab name to narrow the scope (e.g., "Computer Science", "Biomedical Engineering")

## Actor input object example

```json
{
  "institution": "MIT"
}
```

# Actor output Schema

## `results` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "institution": "MIT"
};

// Run the Actor and wait for it to finish
const run = await client.actor("ryanclinton/university-research-report").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "institution": "MIT" }

# Run the Actor and wait for it to finish
run = client.actor("ryanclinton/university-research-report").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "institution": "MIT"
}' |
apify call ryanclinton/university-research-report --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=ryanclinton/university-research-report",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "University Research Report",
        "description": "Generate a comprehensive university research intelligence report by querying 8 academic data sources in parallel.",
        "version": "1.0",
        "x-build-id": "RpiUCtUkQQ0FlRhfQ"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/ryanclinton~university-research-report/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-ryanclinton-university-research-report",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/ryanclinton~university-research-report/runs": {
            "post": {
                "operationId": "runs-sync-ryanclinton-university-research-report",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/ryanclinton~university-research-report/run-sync": {
            "post": {
                "operationId": "run-sync-ryanclinton-university-research-report",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "institution"
                ],
                "properties": {
                    "institution": {
                        "title": "Institution",
                        "type": "string",
                        "description": "University or research institution name (e.g., \"MIT\", \"Stanford University\", \"Max Planck Institute\")",
                        "default": "MIT"
                    },
                    "field": {
                        "title": "Research Field",
                        "type": "string",
                        "description": "Optional research field or technology area to focus the report (e.g., \"quantum computing\", \"CRISPR\", \"battery technology\")"
                    },
                    "department": {
                        "title": "Department",
                        "type": "string",
                        "description": "Optional department or lab name to narrow the scope (e.g., \"Computer Science\", \"Biomedical Engineering\")"
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
