Website Content Crawler

Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
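For example, the crawler's output can be loaded straight into LangChain through its Apify integration. The following is a minimal sketch, assuming the langchain-community and apify-client packages are installed; the API token and start URL are placeholders:

import os

from langchain_community.utilities import ApifyWrapper
from langchain_core.documents import Document

os.environ["APIFY_API_TOKEN"] = "YOUR-APIFY-TOKEN"  # placeholder

apify = ApifyWrapper()

# Run the Actor and expose its dataset as a LangChain document loader.
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://docs.apify.com/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "",
        metadata={"source": item["url"]},
    ),
)

docs = loader.load()  # one Document per crawled page

From here, docs can be embedded and stored in any LangChain-compatible vector store for a RAG pipeline.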

Rating: 3.7 (41)
Pricing: Pay per usage
Total users: 59K
Monthly users: 7.9K
Runs succeeded: >99%
Issues response: 7.6 days
Last modified: 5 days ago

Correct JSON body (API)

Closed

glovebubble opened this issue 10 months ago

Could you please provide the correct JSON body payload to run the Actor via the API? It should include parameters for adjusting memory allocation, limiting the crawl to the first page only, saving images, saving the HTML file, and all other essential inputs.

janbuchar

Hello, and thank you for your interest in the Actor! Website Content Crawler configuration is fairly complex, and it can cater to many different use cases. Without knowing yours, I cannot just give you a "correct JSON body payload". Could you show us what you have and describe the behavior you'd like to change?

glovebubble · 10 months ago

I appreciate your guidance. To clarify, I'm looking to run the Actor via the API to crawl websites with the following capabilities:

- Ability to adjust memory allocation
- Control over crawl depth, specifically limiting it to the first page only
- Option to save images
- Option to save the HTML file
- Flexibility to modify other essential parameters as needed

For the output, I'm aiming to retrieve:

- Image URLs
- Text content
- Description
- HTML file

Could you provide a sample JSON body payload that includes these parameters and output requirements? I understand that website content crawling can be complex, but I'm looking for a starting point that I can then customize based on my specific use case. Any additional guidance on how to structure the payload or which parameters are most crucial would be greatly appreciated.

janbuchar

This is the default input configuration:

{
  "aggressivePrune": false,
  "clickElementsCssSelector": "[aria-expanded=\"false\"]",
  "clientSideMinChangePercentage": 15,
  "crawlerType": "cheerio",
  "debugLog": false,
  "debugMode": false,
  "dynamicContentWaitSecs": 10,
  "expandIframes": true,
  "htmlTransformer": "readableText",
  "ignoreCanonicalUrl": false,
  "initialConcurrency": 0,
  "keepUrlFragments": false,
  "maxConcurrency": 200,
  "maxCrawlDepth": 1,
  "maxCrawlPages": 9999999,
  "maxRequestRetries": 5,
  "maxResults": 9999999,
  "maxScrollHeightPixels": 5000,
  "maxSessionRotations": 10,
  "minFileDownloadSpeedKBps": 128,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": []
  },
  "readableTextCharThreshold": 100,
  "removeCookieWarnings": true,
  "removeElementsCssSelector": "nav, footer, script, style, noscript, svg,\n[role=\"alert\"],\n[role=\"banner\"],\n[role=\"dialog\"],\n[role=\"alertdialog\"],\n[role=\"region\"][aria-label*=\"skip\" i],\n[aria-modal=\"true\"]",
  "renderingTypeDetectionPercentage": 10,
  "requestTimeoutSecs": 60,
  "saveFiles": true,
  "saveHtml": true,
  "saveHtmlAsFile": true,
  "saveMarkdown": true,
  "saveScreenshots": false,
  "startUrls": [
    {
      "url": "http://example.org/%20%de%20"
    }
  ],
  "useSitemaps": false
}

I adjusted `maxCrawlDepth`, `saveFiles`, `saveHtml` and `saveHtmlAsFile`... [trimmed]
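To tie this back to the original question: memory is not part of the input JSON at all. It is a run option, passed either as the memory query parameter of the run endpoint (POST https://api.apify.com/v2/acts/apify~website-content-crawler/runs) or as memory_mbytes in the official API clients. Below is a minimal sketch using the Python client; the token and start URL are placeholders, and the input overrides are assumptions based on the default configuration above:

from apify_client import ApifyClient

client = ApifyClient("YOUR-APIFY-TOKEN")  # placeholder

# Input overrides only; every omitted field keeps the default shown above.
run_input = {
    "startUrls": [{"url": "https://example.com"}],  # placeholder URL
    "maxCrawlPages": 1,       # crawl no more than the first page
    "saveFiles": True,        # download files referenced by the page
    "saveHtml": True,         # keep the processed HTML in the dataset
    "saveHtmlAsFile": True,
}

# memory_mbytes maps to the ?memory= query parameter of the REST API.
run = client.actor("apify/website-content-crawler").call(
    run_input=run_input,
    memory_mbytes=1024,
)

# Each dataset item includes fields such as url, text, and html.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["url"])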

glovebubble · 10 months ago

Thank you for your helpful response.