PDF To JSON Parser

Convert PDF documents into structured JSON using AI-powered OCR and smart data extraction. The Actor processes every page to ensure complete coverage, then identifies text, fields, tables, and key details, delivering clean, organized JSON ready for automation or analysis.

Pricing: Pay per event · Rating: 5.0 (1 review)
Developer: ParseForge · Maintained by Community
Actor stats: 1 bookmarked · 44 total users · 4 monthly active users · 0.32-hour issue response time · last modified a day ago

📄 PDF to JSON Parser

🚀 Convert PDFs into structured JSON in seconds. Upload any PDF and get clean, queryable fields. Optional field selection and custom prompts. No coding, no manual data entry.

🕒 Last updated: 2026-05-08 · 📊 Per-page parsing · 🧠 AI-driven extraction · 🚫 No auth required

Convert PDF documents into clean, structured JSON without writing custom parsers per document type. Upload one or more PDFs, optionally tell the actor which fields to extract, and the AI processes every page and returns one record per document with the extracted fields plus full page text. Built for invoice automation, contract review, research-paper indexing, regulatory filings, and any workflow that turns scanned or born-digital PDFs into queryable data.

The output is a structured record per file: a back-reference to the source PDF, the document name, the number of pages, a topic summary, a timestamp, and the extracted fields under fetchedData. Hand the dataset off to your database, BI tool, or AI pipeline. Every run is processed live with no caching of input PDFs.

| 👥 Built for | 🎯 Primary use cases |
| --- | --- |
| Finance and AP teams | Auto-extract invoice fields into accounting systems |
| Legal and contract ops | Pull key terms, dates, and parties from contracts |
| Research and academia | Index research papers for full-text search |
| Compliance and regulatory | Convert filings into queryable records |
| HR and recruiting | Parse resumes into structured candidate profiles |
| Data and engineering teams | Replace bespoke PDF parsers across products |

📋 What the PDF to JSON Parser does

  • 📄 Multi-PDF input. Upload one or more PDFs via file upload or URL.
  • 🧠 Smart extraction. Optionally specify the exact fields you want, or let the AI pick the important ones.
  • ✏️ Custom prompts. Pass a system prompt to bias extraction toward your domain (legal, medical, financial, etc.).
  • 📊 Page-aware. All pages of every PDF are processed before parsing, so nothing is lost.
  • 🆔 Back-reference. Every record links back to the original PDF in the dataset.
  • ⏱️ Timestamp. Every record carries a timestamp so you can rebuild a timeline.

The actor processes uploads in the order you provide them. Records stream into the dataset as parsing completes, so you can start consuming results before the run is fully finished. Ideal for workflows that need clean structured data from inconsistent PDF layouts.

💡 Why it matters: PDFs are the universal data format that nobody wants to parse. Bespoke parsers break with every layout change. AI-driven extraction adapts to layout variation without code changes, so finance, legal, and research teams can get from "PDF inbox" to "structured database" in minutes.


🎬 Full Demo

🚧 Coming soon: a 3-minute walkthrough showing PDF upload, custom field extraction, and how to feed the output into Google Sheets via Apify integrations.


⚙️ Input

| Field | Type | Name | Description |
| --- | --- | --- | --- |
| pdfFile | array of strings | PDF File | Required. One or more PDF file URLs (uploaded via file upload or pre-existing URLs). |
| fieldsToExtract | string | Fields to Extract | Optional. Comma-separated list of fields (e.g. title, author, date, total, vendor). Empty = auto-detect. |
| systemPrompt | string | System Prompt | Optional custom prompt to bias the extraction toward your domain. Empty = smart default. |
| maxItems | integer | Max Items | Free users: limited to 10 items (preview). Paid users: optional, max 1,000,000. |

Example 1. Extract specific fields from invoices.

{
  "pdfFile": [
    "https://example.com/invoices/INV-1001.pdf",
    "https://example.com/invoices/INV-1002.pdf"
  ],
  "fieldsToExtract": "vendor, invoiceNumber, date, dueDate, lineItems, total, currency"
}

Example 2. Domain-specific extraction with custom prompt (legal contracts).

{
  "pdfFile": [
    "https://example.com/contracts/MSA-2026.pdf"
  ],
  "fieldsToExtract": "parties, effectiveDate, termLength, autoRenewal, governingLaw, terminationClauses",
  "systemPrompt": "You are a contract analyst. Extract the requested fields verbatim from the agreement, preserving dates and numerical values exactly."
}

⚠️ Good to Know: when fieldsToExtract is set, the AI prioritizes those fields. When it is empty, the AI infers what is meaningful from the PDF and returns whatever it finds.


📊 Output

The dataset returns one structured record per PDF. Each record carries the document name, page count, topic, timestamp, and a fetchedData object with the extracted fields. Consume the dataset as JSON, CSV, Excel, XML, or RSS via the Apify console or API.

🧾 Schema

| Field | Type | Example |
| --- | --- | --- |
| 📄 documentName | string | INV-1001.pdf |
| 📊 numberOfPages | number | 2 |
| 🏷️ topic | string | Vendor invoice |
| 📅 timestamp | ISO datetime | 2026-05-08T12:00:00.000Z |
| 📦 fetchedData | object | { "vendor": "Acme Corp", "invoiceNumber": "INV-1001", ... } |
| 🔗 sourceUrl | string (url) | https://example.com/invoices/INV-1001.pdf |
| error | string or null | null |

📦 Sample records

1. Typical record (invoice with custom fields)

{
  "documentName": "INV-1001.pdf",
  "numberOfPages": 2,
  "topic": "Vendor invoice",
  "timestamp": "2026-05-08T12:00:00.000Z",
  "fetchedData": {
    "vendor": "Acme Corp",
    "invoiceNumber": "INV-1001",
    "date": "2026-04-30",
    "dueDate": "2026-05-30",
    "lineItems": [
      {"description": "Cloud services Q2", "amount": 1200},
      {"description": "Support add-on", "amount": 300}
    ],
    "total": 1500,
    "currency": "USD"
  },
  "sourceUrl": "https://example.com/invoices/INV-1001.pdf",
  "error": null
}

2. Auto-detected fields (no fieldsToExtract specified)

{
  "documentName": "research-paper.pdf",
  "numberOfPages": 18,
  "topic": "Research paper",
  "timestamp": "2026-05-08T12:00:00.000Z",
  "fetchedData": {
    "title": "Diffusion-based generative models for tabular data",
    "authors": ["Jane Doe", "Carlos Lee"],
    "abstract": "We present a diffusion-based approach...",
    "keywords": ["diffusion", "tabular", "generative"],
    "publicationYear": 2026,
    "doi": "10.1234/abcd.5678"
  },
  "sourceUrl": "https://example.com/papers/diffusion-2026.pdf",
  "error": null
}

3. Failed parse (corrupt PDF)

{
  "documentName": "broken-file.pdf",
  "numberOfPages": null,
  "topic": null,
  "timestamp": "2026-05-08T12:00:00.000Z",
  "fetchedData": null,
  "sourceUrl": "https://example.com/broken-file.pdf",
  "error": "Could not parse PDF: file is encrypted"
}
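The three samples above suggest a simple post-processing pattern: skip records whose error field is set, and promote the fetchedData keys to top-level columns before loading into a spreadsheet or BI tool. A minimal sketch (the record shape follows the schema documented above; the exact keys inside fetchedData depend on your fieldsToExtract):

```python
# Post-process parser output: separate failed records and flatten
# fetchedData into flat rows suitable for CSV/BI ingestion.

def flatten_records(records):
    """Return (rows, failures) from a list of dataset records."""
    rows, failures = [], []
    for rec in records:
        if rec.get("error"):
            # Keep failures aside for retry or manual review.
            failures.append(rec)
            continue
        row = {
            "documentName": rec["documentName"],
            "numberOfPages": rec["numberOfPages"],
            "topic": rec["topic"],
            "sourceUrl": rec["sourceUrl"],
        }
        # Promote extracted fields to top-level columns.
        row.update(rec.get("fetchedData") or {})
        rows.append(row)
    return rows, failures
```

Feed `rows` to `csv.DictWriter` or a DataFrame, and log `failures` separately so one encrypted PDF never blocks the batch.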

✨ Why choose this Actor

  • 🎯 Built for the job. Single-purpose PDF-to-JSON pipeline with sensible defaults.
  • 🧠 AI-driven extraction. Adapts to layout variation without code changes.
  • ⚙️ Configurable. Specify fields or pass a custom prompt for domain-specific extraction.
  • 🔁 Live processing. Every run executes end to end with no caching of input PDFs.
  • 🌐 No infra to manage. Apify handles compute, scaling, scheduling, and storage.
  • 🛡️ Reliable. Per-file error reporting means one bad PDF does not kill the whole run.
  • 🚫 No code required. Configure in the UI, run from the CLI, schedule via cron, or call from any language with the Apify SDK.

📊 Production-grade PDF parsing without writing or maintaining custom parsers per document type.


📈 How it compares to alternatives

| Approach | Cost | Coverage | Refresh | Accuracy | Setup |
| --- | --- | --- | --- | --- | --- |
| ⭐ PDF to JSON Parser (this Actor) | $5 free credit, then pay-per-use | Any PDF | Live per run | High, layout-agnostic | ⚡ 2 min |
| Hand-written parsers | Engineering hours | Per layout | Whenever you maintain it | High but brittle | 🐢 Days to weeks |
| OCR-only tools | $$ monthly | Text extraction only | Live | Medium | ⏳ Hours |
| Manual data entry | Hours per file | Limited | Stale | Variable | 🕒 Variable |

Pick this Actor when you want flexible, layout-agnostic PDF parsing without owning the infrastructure.


🚀 How to use

  1. 📝 Sign up. Create a free account with $5 credit (takes 2 minutes).
  2. 🌐 Open the Actor. Go to the PDF to JSON Parser page on the Apify Store.
  3. 🎯 Upload your PDFs. Drop one or more PDFs and (optionally) list the fields you need.
  4. 🚀 Run it. Click Start and let the Actor extract structured data.
  5. 📥 Download. Grab your results in the Dataset tab as CSV, Excel, JSON, or XML.

⏱️ Total time from signup to first parsed PDF: 3-5 minutes for a short document.


💼 Business use cases

📊 Finance and AP automation

  • Auto-extract invoice data into accounting systems
  • Parse expense reports for reimbursement workflows
  • Pull line items from vendor PDFs for analysis
  • Build searchable archives of financial documents

⚖️ Legal and contract operations

  • Extract parties, dates, and key clauses from contracts
  • Build searchable contract repositories
  • Surface auto-renewal triggers and termination dates
  • Power contract intelligence and review workflows

🎯 Research and compliance

  • Index research papers for full-text search
  • Convert regulatory filings into queryable records
  • Build literature databases for systematic review
  • Power KYC and due-diligence workflows from filings

🛠️ Engineering and product

  • Replace bespoke PDF parsers across products
  • Add document intelligence to SaaS tools
  • Wire datasets into your apps via the Apify API or webhooks
  • Skip the layout-handling and OCR maintenance entirely

🌟 Beyond business use cases

Data like this powers more than commercial workflows. The same structured records support research, education, civic projects, and personal initiatives.

🎓 Research and academia

  • Empirical datasets for papers, thesis work, and coursework
  • Longitudinal studies tracking changes across snapshots
  • Reproducible research with cited, versioned data pulls
  • Classroom exercises on data analysis and ethical data collection

🎨 Personal and creative

  • Side projects, portfolio demos, and indie app launches
  • Data visualizations, dashboards, and infographics
  • Content research for bloggers, YouTubers, and podcasters
  • Hobbyist collections and personal trackers

🤝 Non-profit and civic

  • Transparency reporting and accountability projects
  • Advocacy campaigns backed by public-interest data
  • Community-run databases for local issues
  • Investigative journalism on public records

🧪 Experimentation

  • Prototype AI and machine-learning pipelines with real data
  • Validate product-market hypotheses before engineering spend
  • Train small domain-specific models on niche corpora
  • Test dashboard concepts with live input

🔌 Automating PDF to JSON Parser

This Actor exposes a REST endpoint, so you can drive it from any language or workflow tool.

Schedules. Use Apify Scheduler to process a folder of PDFs on a cron cadence. Combine with webhooks to trigger downstream workflows the moment parsing completes.
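The REST endpoint mentioned above can also be driven from the official apify-client package for Python. The sketch below is an assumption-laden example, not the Actor's documented SDK usage: ACTOR_ID is a placeholder for the real Actor ID from its Store page, and the token is read from an APIFY_TOKEN environment variable.

```python
# Minimal sketch of calling the actor from Python via apify-client.
# ACTOR_ID is a placeholder -- copy the real ID from the Store page.
import os


def build_run_input(pdf_urls, fields=None, system_prompt=None):
    """Assemble the actor input described in the Input section."""
    run_input = {"pdfFile": list(pdf_urls)}
    if fields:
        # fieldsToExtract is a comma-separated string, not a list.
        run_input["fieldsToExtract"] = ", ".join(fields)
    if system_prompt:
        run_input["systemPrompt"] = system_prompt
    return run_input


def run_parser(pdf_urls, fields=None):
    # Imported lazily so build_run_input works without the package.
    from apify_client import ApifyClient

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor("ACTOR_ID").call(
        run_input=build_run_input(pdf_urls, fields)
    )
    # Read the parsed records from the run's default dataset.
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

Pair `run_parser` with a cron-style Apify schedule, or call it from your own job runner whenever new PDFs arrive.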


❓ Frequently Asked Questions

💳 Do I need a paid Apify plan to run this actor?

No. You can start right now on the free Apify plan, which includes $5 in monthly credit. That is enough to run the Actor several times and explore the output. Paid plans unlock higher item caps, more concurrent runs, and larger datasets. Create a free Apify account to get started.

🚨 What happens if my run fails or returns no results?

Failed runs are not charged. If a single PDF fails (corrupt, encrypted, unreadable URL), the actor records the error on that record only and continues with the rest of the batch. If the whole run fails, re-run it or open our contact form.

📏 How large can my PDFs be?

There is no hard cap, but processing time and cost scale with page count. We recommend splitting documents over 100 pages into chunks for faster results and easier downstream review.
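For the 100-page recommendation above, a small helper that plans the split points is handy; the actual splitting can then be done locally with any PDF library before upload. This is an illustrative helper, not part of the Actor:

```python
# Compute (start, end) page ranges (0-based, end-exclusive) for
# splitting a long PDF into chunks before uploading.

def page_chunks(num_pages, chunk_size=100):
    """Return a list of (start, end) ranges covering num_pages."""
    return [
        (start, min(start + chunk_size, num_pages))
        for start in range(0, num_pages, chunk_size)
    ]
```

For example, a 250-page filing yields three chunks: pages 0-99, 100-199, and 200-249.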

🧠 How does extraction work?

The actor sends the PDF content to an AI extraction service together with your field list (or a smart default prompt). The AI returns structured JSON which is then validated and pushed to the dataset.

🌐 What languages are supported?

Most major languages are supported, including English, Spanish, French, German, Portuguese, Italian, Japanese, and Chinese. The AI auto-detects the document language; you can also bias it via the system prompt.

🧑‍💻 Can I call this actor from my own code?

Yes. Apify exposes every actor as a REST endpoint and ships first-class SDKs for Node.js and Python. You can start a run, read the dataset, and handle webhooks from your own app in a few lines.

📤 How do I export the data?

Every Apify dataset can be downloaded in one click as CSV, JSON, JSONL, Excel, HTML, XML, or RSS. You can also pull results programmatically via the Apify API or stream into BigQuery, S3, and other destinations through built-in integrations.

📅 Can I schedule the actor to run automatically?

Yes. Use the Apify scheduler to run the actor on any cadence, from hourly to monthly. Drop new PDF URLs into the input each cycle, or wire the actor to fire on a webhook from your inbox or storage system.

🏪 Can I use the data commercially?

Yes. PDFs you have rights to are yours to parse and use in your own internal pipelines, products, and reports.

💼 Which plan should I pick for production use?

Apify's Starter and Scale plans are designed for production workloads. They give you faster instances, more concurrent runs, and higher quotas. Pick the plan that matches your document volume and refresh cadence; the in-app pricing calculator will help you size it.

🛠️ Can you add tabular extraction or OCR for scanned PDFs?

Open the contact form and tell us about your use case. We add features regularly when there is a clear use case behind the request.

⚖️ Is it legal to parse the PDFs I submit?

Yes, provided you have rights to the PDFs. You are responsible for compliance with copyright, privacy, and licensing laws applicable to the documents you submit.


🔌 Integrate with any app

PDF to JSON Parser connects to any cloud service via Apify integrations:

  • Make - Automate multi-step workflows
  • Zapier - Connect with 5,000+ apps
  • Slack - Get run notifications in your channels
  • Airbyte - Pipe results into your warehouse
  • GitHub - Trigger runs from commits and releases
  • Google Drive - Export datasets straight to Sheets

You can also use webhooks to trigger downstream actions when a parse completes, like firing a summarization actor or pinging a Slack channel.
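A downstream webhook consumer usually only needs the dataset ID of the finished run. Apify webhook payloads carry the run object under a `resource` key; the exact shape below is an assumption to verify against your webhook's payload template, not a guaranteed contract:

```python
# Sketch of extracting the dataset ID from a run-succeeded webhook
# payload so a downstream job knows which dataset to fetch.
# The "resource"/"defaultDatasetId" shape is an assumption -- check
# your webhook's payload template in the Apify console.

def extract_dataset_id(payload):
    """Return the default dataset ID from a webhook payload, or None."""
    resource = payload.get("resource") or {}
    return resource.get("defaultDatasetId")
```

Your handler can then call the Apify dataset API with that ID, or forward it to a summarization Actor or a Slack notifier.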


💡 Pro Tip: browse the complete ParseForge collection for more data-extraction Actors.


🆘 Need Help? Open our contact form to request a new actor, propose a custom project, or report an issue.


⚠️ Disclaimer. This Actor is an independent tool. It processes only the PDFs you supply and is intended for legitimate document automation workflows. Users are responsible for ensuring they hold the rights to parse the PDFs they submit and for compliance with copyright, privacy, and licensing laws in their jurisdiction.