PDF To JSON Parser

Convert PDF documents into structured JSON using AI-powered OCR and smart data extraction. The Actor processes every page to ensure complete coverage, then identifies text, fields, tables, and key details, delivering clean, organized JSON ready for automation or analysis.

- Pricing: Pay per event
- Rating: 5.0 (1 review)
- Developer: ParseForge
- Actor stats: 1 bookmark · 44 total users · 4 monthly active users · 0.32-hour issues response time · last modified a day ago
📄 PDF to JSON Parser
🚀 Convert PDFs into structured JSON in seconds. Upload any PDF and get clean, queryable fields. Optional field selection and custom prompts. No coding, no manual data entry.
🕒 Last updated: 2026-05-08 · 📊 Per-page parsing · 🧠 AI-driven extraction · 🚫 No auth required
Convert PDF documents into clean, structured JSON without writing custom parsers per document type. Upload one or more PDFs, optionally tell the actor which fields to extract, and the AI processes every page and returns one record per document with the extracted fields plus full page text. Built for invoice automation, contract review, research-paper indexing, regulatory filings, and any workflow that turns scanned or born-digital PDFs into queryable data.
The output is a structured record per file: a back-reference to the source PDF, the document name, the number of pages, a topic summary, a timestamp, and the extracted fields under fetchedData. Hand the dataset off to your database, BI tool, or AI pipeline. Every run is processed live with no caching of input PDFs.
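Handing records to a BI tool usually means flattening the nested `fetchedData` object into one row per document first. A minimal sketch in Python, using the record shape described above (the `field_` prefix is an illustrative convention, not part of the Actor's output):

```python
def flatten_record(record):
    """Flatten one dataset record into a single-level dict for CSV/BI export.
    Extracted fields get a prefix so they cannot collide with metadata keys."""
    row = {
        "documentName": record.get("documentName"),
        "numberOfPages": record.get("numberOfPages"),
        "topic": record.get("topic"),
        "timestamp": record.get("timestamp"),
        "sourceUrl": record.get("sourceUrl"),
    }
    for key, value in (record.get("fetchedData") or {}).items():
        row[f"field_{key}"] = value
    return row

record = {
    "documentName": "INV-1001.pdf",
    "numberOfPages": 2,
    "topic": "Vendor invoice",
    "timestamp": "2026-05-08T12:00:00.000Z",
    "fetchedData": {"vendor": "Acme Corp", "total": 1500},
    "sourceUrl": "https://example.com/invoices/INV-1001.pdf",
    "error": None,
}
print(flatten_record(record)["field_vendor"])  # Acme Corp
```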
| 👥 Built for | 🎯 Primary use cases |
|---|---|
| Finance and AP teams | Auto-extract invoice fields into accounting systems |
| Legal and contract ops | Pull key terms, dates, parties from contracts |
| Research and academia | Index research papers for full-text search |
| Compliance and regulatory | Convert filings into queryable records |
| HR and recruiting | Parse resumes into structured candidate profiles |
| Data and engineering teams | Replace bespoke PDF parsers across products |
📋 What the PDF to JSON Parser does
- 📄 Multi-PDF input. Upload one or more PDFs via file upload or URL.
- 🧠 Smart extraction. Optionally specify the exact fields you want, or let the AI pick the important ones.
- ✏️ Custom prompts. Pass a system prompt to bias extraction toward your domain (legal, medical, financial, etc.).
- 📊 Page-aware. All pages of every PDF are processed before parsing, so nothing is lost.
- 🆔 Back-reference. Every record links back to the original PDF in the dataset.
- ⏱️ Timestamp. Every record carries a `timestamp` so you can rebuild a timeline.
The actor processes uploads in the order you provide them. Records stream into the dataset as parsing completes, so you can start consuming results before the run is fully finished. Ideal for workflows that need clean structured data from inconsistent PDF layouts.
💡 Why it matters: PDFs are the universal data format that nobody wants to parse. Bespoke parsers break with every layout change. AI-driven extraction adapts to layout variation without code changes, so finance, legal, and research teams can get from "PDF inbox" to "structured database" in minutes.
🎬 Full Demo
🚧 Coming soon: a 3-minute walkthrough showing PDF upload, custom field extraction, and how to feed the output into Google Sheets via Apify integrations.
⚙️ Input
| Field | Type | Name | Description |
|---|---|---|---|
| `pdfFile` | array of strings | PDF File | Required. One or more PDF file URLs (uploaded via file upload or pre-existing URLs). |
| `fieldsToExtract` | string | Fields to Extract | Optional. Comma-separated list of fields (e.g. `title, author, date, total, vendor`). Empty = auto-detect. |
| `systemPrompt` | string | System Prompt | Optional. Custom prompt to bias the extraction toward your domain. Empty = smart default. |
| `maxItems` | integer | Max Items | Optional. Free users are limited to 10 items (preview); paid users can set up to 1,000,000. |
Example 1. Extract specific fields from invoices.
```json
{
  "pdfFile": [
    "https://example.com/invoices/INV-1001.pdf",
    "https://example.com/invoices/INV-1002.pdf"
  ],
  "fieldsToExtract": "vendor, invoiceNumber, date, dueDate, lineItems, total, currency"
}
```
Example 2. Domain-specific extraction with custom prompt (legal contracts).
```json
{
  "pdfFile": ["https://example.com/contracts/MSA-2026.pdf"],
  "fieldsToExtract": "parties, effectiveDate, termLength, autoRenewal, governingLaw, terminationClauses",
  "systemPrompt": "You are a contract analyst. Extract the requested fields verbatim from the agreement, preserving dates and numerical values exactly."
}
```
⚠️ Good to know: when `fieldsToExtract` is set, the AI prioritizes those fields. When it is empty, the AI infers what is meaningful from the PDF and returns whatever it finds.
📊 Output
The dataset returns one structured record per PDF. Each record carries the document name, page count, topic, timestamp, and a `fetchedData` object with the extracted fields. Consume the dataset as JSON, CSV, Excel, XML, or RSS via the Apify console or API.
🧾 Schema
| Field | Type | Example |
|---|---|---|
| 📄 `documentName` | string | `INV-1001.pdf` |
| 📊 `numberOfPages` | number | `2` |
| 🏷️ `topic` | string | `Vendor invoice` |
| 📅 `timestamp` | ISO datetime | `2026-05-08T12:00:00.000Z` |
| 📦 `fetchedData` | object | `{ "vendor": "Acme Corp", "invoiceNumber": "INV-1001", ... }` |
| 🔗 `sourceUrl` | string (URL) | `https://example.com/invoices/INV-1001.pdf` |
| ❗ `error` | string or null | `null` |
📦 Sample records
1. Typical record (invoice with custom fields)
```json
{
  "documentName": "INV-1001.pdf",
  "numberOfPages": 2,
  "topic": "Vendor invoice",
  "timestamp": "2026-05-08T12:00:00.000Z",
  "fetchedData": {
    "vendor": "Acme Corp",
    "invoiceNumber": "INV-1001",
    "date": "2026-04-30",
    "dueDate": "2026-05-30",
    "lineItems": [
      { "description": "Cloud services Q2", "amount": 1200 },
      { "description": "Support add-on", "amount": 300 }
    ],
    "total": 1500,
    "currency": "USD"
  },
  "sourceUrl": "https://example.com/invoices/INV-1001.pdf",
  "error": null
}
```
2. Auto-detected fields (no `fieldsToExtract` specified)

```json
{
  "documentName": "research-paper.pdf",
  "numberOfPages": 18,
  "topic": "Research paper",
  "timestamp": "2026-05-08T12:00:00.000Z",
  "fetchedData": {
    "title": "Diffusion-based generative models for tabular data",
    "authors": ["Jane Doe", "Carlos Lee"],
    "abstract": "We present a diffusion-based approach...",
    "keywords": ["diffusion", "tabular", "generative"],
    "publicationYear": 2026,
    "doi": "10.1234/abcd.5678"
  },
  "sourceUrl": "https://example.com/papers/diffusion-2026.pdf",
  "error": null
}
```
3. Failed parse (corrupt PDF)
```json
{
  "documentName": "broken-file.pdf",
  "numberOfPages": null,
  "topic": null,
  "timestamp": "2026-05-08T12:00:00.000Z",
  "fetchedData": null,
  "sourceUrl": "https://example.com/broken-file.pdf",
  "error": "Could not parse PDF: file is encrypted"
}
```
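Because failed parses surface as records with a non-null `error` (as in sample 3), downstream code can split a batch in one pass instead of aborting on the first bad file. A minimal Python sketch over the record shape shown above:

```python
def split_records(records):
    """Partition dataset records into parsed documents and per-file failures."""
    parsed = [r for r in records if r.get("error") is None]
    failed = [r for r in records if r.get("error") is not None]
    return parsed, failed

records = [
    {"documentName": "INV-1001.pdf", "fetchedData": {"total": 1500}, "error": None},
    {"documentName": "broken-file.pdf", "fetchedData": None,
     "error": "Could not parse PDF: file is encrypted"},
]
parsed, failed = split_records(records)
print(len(parsed), len(failed))  # 1 1
```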
✨ Why choose this Actor
| | Capability |
|---|---|
| 🎯 | Built for the job. Single-purpose PDF-to-JSON pipeline with sensible defaults. |
| 🧠 | AI-driven extraction. Adapts to layout variation without code changes. |
| ⚙️ | Configurable. Specify fields or pass a custom prompt for domain-specific extraction. |
| 🔁 | Live processing. Every run runs end to end with no caching of input PDFs. |
| 🌐 | No infra to manage. Apify handles compute, scaling, scheduling, and storage. |
| 🛡️ | Reliable. Per-file error reporting means one bad PDF does not kill the whole run. |
| 🚫 | No code required. Configure in the UI, run from CLI, schedule via cron, or call from any language with the Apify SDK. |
📊 Production-grade PDF parsing without writing or maintaining custom parsers per document type.
📈 How it compares to alternatives
| Approach | Cost | Coverage | Refresh | Accuracy | Setup |
|---|---|---|---|---|---|
| ⭐ PDF to JSON Parser (this Actor) | $5 free credit, then pay-per-use | Any PDF | Live per run | High, layout-agnostic | ⚡ 2 min |
| Hand-written parsers | Engineering hours | Per layout | Whenever you maintain it | High but brittle | 🐢 Days to weeks |
| OCR-only tools | $$ monthly | Text extraction only | Live | Medium | ⏳ Hours |
| Manual data entry | Hours per file | Limited | Stale | Variable | 🕒 Variable |
Pick this Actor when you want flexible, layout-agnostic PDF parsing without owning the infrastructure.
🚀 How to use
- 📝 Sign up. Create a free account with $5 credit (takes 2 minutes).
- 🌐 Open the Actor. Go to the PDF to JSON Parser page on the Apify Store.
- 🎯 Upload your PDFs. Drop one or more PDFs and (optionally) list the fields you need.
- 🚀 Run it. Click Start and let the Actor extract structured data.
- 📥 Download. Grab your results in the Dataset tab as CSV, Excel, JSON, or XML.
⏱️ Total time from signup to first parsed PDF: 3-5 minutes for a short document.
🌟 Beyond business use cases
Data like this powers more than commercial workflows. The same structured records support research, education, civic projects, and personal initiatives.
🔌 Automating PDF to JSON Parser
This Actor exposes a REST endpoint, so you can drive it from any language or workflow tool.
- Node.js - call it via the Apify JS SDK.
- Python - call it via the Apify Python SDK.
- REST - hit it directly through the Apify v2 API.
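As a sketch of the Python route, assuming the documented `apify-client` package (`pip install apify-client`); the Actor ID `parseforge/pdf-to-json-parser` is a placeholder, so check the Store page for the real one:

```python
def build_run_input(pdf_urls, fields=None, system_prompt=None):
    """Assemble the input object described in the Input table."""
    run_input = {"pdfFile": list(pdf_urls)}
    if fields:
        run_input["fieldsToExtract"] = ", ".join(fields)
    if system_prompt:
        run_input["systemPrompt"] = system_prompt
    return run_input

def parse_pdfs(token, pdf_urls, fields=None):
    """Start a run, wait for it, and return one record per PDF."""
    from apify_client import ApifyClient  # pip install apify-client
    client = ApifyClient(token)
    run = client.actor("parseforge/pdf-to-json-parser").call(
        run_input=build_run_input(pdf_urls, fields)
    )
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())

# Input assembly is pure, so it can be checked without a token:
print(build_run_input(["https://example.com/a.pdf"], ["vendor", "total"]))
# {'pdfFile': ['https://example.com/a.pdf'], 'fieldsToExtract': 'vendor, total'}
```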
Schedules. Use Apify Scheduler to process a folder of PDFs on a cron cadence. Combine with webhooks to trigger downstream workflows the moment parsing completes.
❓ Frequently Asked Questions
💳 Do I need a paid Apify plan to run this actor?
No. You can start right now on the free Apify plan, which includes $5 in monthly credit. That is enough to run the actor several times and explore the output. Paid plans unlock higher item caps, more concurrent runs, and larger datasets. Create a free Apify account here.
🚨 What happens if my run fails or returns no results?
Failed runs are not charged. If a single PDF fails (corrupt, encrypted, unreadable URL), the actor records the error on that record only and continues with the rest of the batch. If the whole run fails, re-run it or open our contact form.
📏 How large can my PDFs be?
There is no hard cap, but processing time and cost scale with page count. We recommend splitting documents over 100 pages into chunks for faster results and easier downstream review.
🧠 How does extraction work?
The actor sends the PDF content to an AI extraction service together with your field list (or a smart default prompt). The AI returns structured JSON which is then validated and pushed to the dataset.
🌐 What languages are supported?
Most major languages are supported, including English, Spanish, French, German, Portuguese, Italian, Japanese, and Chinese. The AI auto-detects the document language; you can also bias it via the system prompt.
🧑‍💻 Can I call this actor from my own code?
Yes. Apify exposes every actor as a REST endpoint and ships first-class SDKs for Node.js and Python. You can start a run, read the dataset, and handle webhooks from your own app in a few lines.
📤 How do I export the data?
Every Apify dataset can be downloaded in one click as CSV, JSON, JSONL, Excel, HTML, XML, or RSS. You can also pull results programmatically via the Apify API or stream into BigQuery, S3, and other destinations through built-in integrations.
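Programmatic export boils down to one GET against the Apify dataset items endpoint, which accepts a `format` query parameter. A small sketch that builds the URL (the dataset ID and token are placeholders):

```python
API_BASE = "https://api.apify.com/v2"

def dataset_export_url(dataset_id, fmt="json", token=None):
    """Build the dataset items URL; fmt can be json, jsonl, csv, xlsx, html, xml, or rss."""
    url = f"{API_BASE}/datasets/{dataset_id}/items?format={fmt}"
    if token:
        url += f"&token={token}"
    return url

print(dataset_export_url("abc123", fmt="csv"))
# https://api.apify.com/v2/datasets/abc123/items?format=csv
```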
📅 Can I schedule the actor to run automatically?
Yes. Use the Apify scheduler to run the actor on any cadence, from hourly to monthly. Drop new PDF URLs into the input each cycle, or wire the actor to fire on a webhook from your inbox or storage system.
🏪 Can I use the data commercially?
Yes. PDFs you have rights to are yours to parse and use in your own internal pipelines, products, and reports.
💼 Which plan should I pick for production use?
Apify's Starter and Scale plans are designed for production workloads. They give you faster instances, more concurrent runs, and higher quotas. Pick the plan that matches your document volume and refresh cadence; the in-app pricing calculator will help you size it.
🛠️ Can you add tabular extraction or OCR for scanned PDFs?
Open the contact form and tell us about your use case. We add features regularly when there is a clear use case behind the request.
⚖️ Is it legal to parse PDFs with this Actor?
Yes, provided you have rights to the PDFs. You are responsible for compliance with copyright, privacy, and licensing laws applicable to the documents you submit.
🔌 Integrate with any app
PDF to JSON Parser connects to any cloud service via Apify integrations:
- Make - Automate multi-step workflows
- Zapier - Connect with 5,000+ apps
- Slack - Get run notifications in your channels
- Airbyte - Pipe results into your warehouse
- GitHub - Trigger runs from commits and releases
- Google Drive - Export datasets straight to Sheets
You can also use webhooks to trigger downstream actions when a parse completes, like firing a summarization actor or pinging a Slack channel.
🔗 Recommended Actors
- 📰 Article Extractor - Extract clean article text from any URL
- 🎤 Audio Transcriber - Convert audio recordings to structured text
- 📊 HTML to JSON Smart Parser - Parse any HTML page into structured JSON
- 🎬 YouTube AI Transcriber - Transcribe YouTube videos via URL
- 🌐 Website Content Crawler - Crawl entire sites and export structured content
💡 Pro Tip: browse the complete ParseForge collection for more reference-data scrapers.
🆘 Need Help? Open our contact form to request a new actor, propose a custom project, or report an issue.
⚠️ Disclaimer. This Actor is an independent tool. The actor processes only PDFs you supply by URL and is intended for legitimate document automation workflows. Users are responsible for ensuring they hold the rights to parse the PDFs they submit and for compliance with copyright, privacy, and licensing laws in their jurisdiction.