Website Content Crawler

apify/website-content-crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.


Correct JSON body (API)

Closed

glovebubble opened this issue
5 months ago

Could you please provide the correct JSON body payload to execute an actor? It should include parameters for adjusting memory allocation, limiting the number of pages to the first page only, saving images, saving the HTML file, and incorporating all other essential inputs.

janbuchar

Hello, and thank you for your interest in the Actor! The Website Content Crawler configuration is fairly complex, and it can cater to many different use cases. Without knowing yours, I can't just give you a "correct JSON body payload". Could you show us what you have and describe the behavior that you'd like to change?

glovebubble

5 months ago

I appreciate your guidance. To clarify, I'm looking to run an actor via API to crawl websites with the following capabilities:

  • Ability to adjust memory allocation

  • Control over crawl depth, specifically limiting it to the first page only

  • Option to save images

  • Option to save the HTML file

  • Flexibility to modify other essential parameters as needed

For the output, I'm aiming to retrieve:

  • Image URLs

  • Text content

  • Description

  • HTML file

Could you provide a sample JSON body payload that includes these parameters and output requirements? I understand that website content crawling can be complex, but I'm looking for a starting point that I can then customize based on my specific use case. Any additional guidance on how to structure the payload or which parameters are most crucial would be greatly appreciated.

janbuchar

This is the default input configuration:

```json
{
    "aggressivePrune": false,
    "clickElementsCssSelector": "[aria-expanded=\"false\"]",
    "clientSideMinChangePercentage": 15,
    "crawlerType": "cheerio",
    "debugLog": false,
    "debugMode": false,
    "dynamicContentWaitSecs": 10,
    "expandIframes": true,
    "htmlTransformer": "readableText",
    "ignoreCanonicalUrl": false,
    "initialConcurrency": 0,
    "keepUrlFragments": false,
    "maxConcurrency": 200,
    "maxCrawlDepth": 1,
    "maxCrawlPages": 9999999,
    "maxRequestRetries": 5,
    "maxResults": 9999999,
    "maxScrollHeightPixels": 5000,
    "maxSessionRotations": 10,
    "minFileDownloadSpeedKBps": 128,
    "proxyConfiguration": {
        "useApifyProxy": true,
        "apifyProxyGroups": []
    },
    "readableTextCharThreshold": 100,
    "removeCookieWarnings": true,
    "removeElementsCssSelector": "nav, footer, script, style, noscript, svg,\n[role=\"alert\"],\n[role=\"banner\"],\n[role=\"dialog\"],\n[role=\"alertdialog\"],\n[role=\"region\"][aria-label*=\"skip\" i],\n[aria-modal=\"true\"]",
    "renderingTypeDetectionPercentage": 10,
    "requestTimeoutSecs": 60,
    "saveFiles": true,
    "saveHtml": true,
    "saveHtmlAsFile": true,
    "saveMarkdown": true,
    "startUrls": [
        {
            "url": "http://example.org/%20%de%20"
        }
    ],
    "useSitemaps": false
}
```

I adjusted maxCrawlDepth, saveFiles, saveHtml and `saveHtm... [trimmed]
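To put the pieces together, here is a minimal sketch of starting the Actor via the Apify API with an input body that overrides only the fields discussed in this thread. It assumes the standard `POST /v2/acts/{actorId}/runs` run endpoint with memory passed as a query parameter, and that `maxCrawlDepth: 0` restricts the crawl to the start URLs only; verify both against the current Apify API reference. The token and start URL are placeholders you must supply.

```python
import json
import urllib.request

API_BASE = "https://api.apify.com/v2"
ACTOR_ID = "apify~website-content-crawler"  # "/" is written as "~" in API paths

def build_input(start_url: str) -> dict:
    """Input body overriding only the fields discussed in this thread."""
    return {
        "startUrls": [{"url": start_url}],
        "maxCrawlDepth": 0,      # assumption: 0 = crawl the start URLs only
        "saveFiles": True,       # download linked files such as images
        "saveHtml": True,        # include cleaned HTML in each dataset item
        "saveHtmlAsFile": True,  # also store the HTML in the key-value store
        "saveMarkdown": True,
    }

def run_actor(token: str, start_url: str, memory_mb: int = 4096) -> dict:
    """Start a run; memory is a query parameter, not part of the JSON body."""
    url = f"{API_BASE}/acts/{ACTOR_ID}/runs?token={token}&memory={memory_mb}"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_input(start_url)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        run = json.load(resp)
    # The run response's data.defaultDatasetId can then be used with
    # GET {API_BASE}/datasets/{datasetId}/items to fetch the extracted
    # text, image URLs, and HTML once the run finishes.
    return run
```

As far as I know, Apify requires the memory limit to be a power of two between 128 and 32768 MB, which is why 4096 is used as the example value here.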


glovebubble

5 months ago

Thank you for your helpful response.

Developer
Maintained by Apify

Actor Metrics

  • 4k monthly users

  • 839 stars

  • >99% runs succeeded

  • 1 day response time

  • Created in Mar 2023

  • Modified 17 hours ago