Databento Details Spider

Pricing: from $9.00 / 1,000 results
Developer: GetDataForMe (Maintained by Community)

Introduction

The Databento Details Spider is an Apify Actor designed to scrape comprehensive details from Databento catalog pages, extracting structured data on financial instruments such as futures. It provides detailed instrument information, including expiration dates, volumes, and open interest, alongside sample data schemas with full field descriptions for various data types like market by order (MBO), market by price (MBP), and OHLCV bars. This tool is invaluable for financial analysts, traders, and researchers needing reliable, automated data extraction from Databento's platform.

Features

  • Comprehensive Data Extraction: Scrapes instrument details, including symbols, expirations, prices, volumes, and open interest from Databento URLs.
  • Schema-Rich Sample Data: Provides detailed field descriptions for multiple schemas (e.g., MBO, MBP-1, OHLCV-1s), enabling deep understanding of data structures.
  • Structured JSON Output: Delivers clean, machine-readable results with metadata like dataset, symbol, and asset class.
  • Flexible Input: Accepts multiple URLs for batch processing, supporting various Databento catalog pages.
  • High Reliability: Handles web scraping with robust error handling to ensure consistent data retrieval.
  • Performance Optimized: Efficiently processes pages to minimize runtime and resource usage.
  • No Coding Required: User-friendly interface for quick setup and execution on the Apify platform.

Input Parameters

Urls (array, optional)
An array of URLs to Databento catalog pages to scrape. Each URL must be a valid HTTP/HTTPS link.
Example: ["https://databento.com/catalog/cme/GLBX.MDP3/futures/ZYSE"]

Example Usage

Input

{
  "Urls": [
    "https://databento.com/catalog/cme/GLBX.MDP3/futures/ZYSE"
  ]
}

Output

[
  {
    "input_url": "https://databento.com/catalog/cme/GLBX.MDP3/futures/ZYSE",
    "dataset": "GLBX.MDP3",
    "symbol": "ZYSE",
    "asset_class": "Futures",
    "instrument_data": [
      {
        "symbol": "ZYSEZ6",
        "expiration": "2026-12-14",
        "last_price": null,
        "volume": 0,
        "open_interest": 0
      }
    ],
    "sample_data": [
      {
        "schema_id": "mbo",
        "name": "MBO",
        "description": "Full order book data, including all buy and sell orders at every price level.",
        "fields": [
          {
            "name": "ts_recv",
            "description": "The capture-server-received timestamp expressed as the number of nanoseconds since the UNIX epoch."
          }
        ]
      }
    ],
    "actor_id": "CvRq3ozTK0nG5BoA6",
    "run_id": "tmbmDZdh520D7zfx3"
  }
]

Use Cases

  • Market Research: Gather detailed instrument data for analyzing futures markets and trends.
  • Competitive Intelligence: Monitor competitor positions and trading volumes from Databento catalogs.
  • Price Monitoring: Track expiration dates, last prices, and open interest for informed trading decisions.
  • Content Aggregation: Collect structured data for financial databases or reporting tools.
  • Academic Research: Extract sample schemas and field details for studies on market data structures.
  • Business Automation: Automate data pipelines for real-time financial insights and alerts.

Installation and Usage

  1. Search for "Databento Details Spider" in the Apify Store.
  2. Click "Try for free" or "Run".
  3. Configure input parameters by adding URLs to scrape.
  4. Click "Start" to begin extraction.
  5. Monitor progress in the log.
  6. Export results in your preferred format (JSON, CSV, Excel).
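Besides the Store UI, the steps above can also be run programmatically with the apify-client Python package. The sketch below is illustrative only: the Actor identifier and API token are placeholders, not values confirmed by this README, so substitute your own.

```python
# Hypothetical sketch: running the Actor via the Apify API instead of the
# Store UI. ACTOR_ID and the token argument are placeholders -- replace
# them with the Actor's actual technical name and your Apify API token.
ACTOR_ID = "getdataforme/databento-details-spider"  # placeholder, verify in the Store

run_input = {
    "Urls": [
        "https://databento.com/catalog/cme/GLBX.MDP3/futures/ZYSE"
    ]
}

def fetch_details(token: str, actor_id: str = ACTOR_ID) -> list:
    """Start a run, wait for it to finish, and return the dataset items."""
    from apify_client import ApifyClient  # pip install apify-client

    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

The returned items have the same shape as the JSON shown in the Output section above.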

Output Format

The output is a JSON array of objects, each corresponding to a scraped URL. Key fields include:

  • input_url: The original URL scraped.
  • dataset, symbol, asset_class: Metadata about the instrument group.
  • instrument_data: An array of objects with details like symbol, expiration, last_price, volume, and open_interest.
  • sample_data: An array of schema objects, each with schema_id, name, description, and a fields array containing field names and descriptions.
  • actor_id and run_id: Apify-specific identifiers for tracking.

This structured format ensures easy parsing and integration into downstream applications.
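As one sketch of such downstream integration, the snippet below (using the sample output from this README as test data) flattens each result object into one row per instrument, carrying the group-level metadata onto every row:

```python
# Sketch: post-processing one exported result object (shaped like the
# sample output in this README) into flat per-instrument rows.
sample_result = {
    "input_url": "https://databento.com/catalog/cme/GLBX.MDP3/futures/ZYSE",
    "dataset": "GLBX.MDP3",
    "symbol": "ZYSE",
    "asset_class": "Futures",
    "instrument_data": [
        {"symbol": "ZYSEZ6", "expiration": "2026-12-14",
         "last_price": None, "volume": 0, "open_interest": 0},
    ],
}

def flatten(result: dict):
    """Yield one flat dict per instrument, merging in the group metadata."""
    for inst in result.get("instrument_data", []):
        yield {
            "dataset": result["dataset"],
            "asset_class": result["asset_class"],
            **inst,  # symbol, expiration, last_price, volume, open_interest
        }

rows = list(flatten(sample_result))
# Each row is self-contained, e.g. ready for a CSV writer or a DataFrame.
```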

Support

For custom or simplified outputs, feature requests, or bug reports, please reach out to the developer.

We're here to help you get the most out of this Actor!