
Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
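As an illustration of the LangChain integration, the sketch below uses the ApifyWrapper helper from the langchain_community package to run the Actor and load the results as LangChain documents. This is a minimal sketch, not the only way to integrate: the import paths, the APIFY_API_TOKEN environment variable, the installed packages (apify-client, langchain-community), and the output field names (`text`, `url`) are assumptions that may differ between versions.

```python
import os

from langchain_community.utilities import ApifyWrapper
from langchain_core.documents import Document

# Assumes the `apify-client` and `langchain-community` packages are installed
# and that APIFY_API_TOKEN holds your Apify API token (placeholder below).
os.environ.setdefault("APIFY_API_TOKEN", "<YOUR_APIFY_TOKEN>")

apify = ApifyWrapper()

# Run Website Content Crawler and map each dataset item to a LangChain Document.
# The `text` and `url` output fields are assumptions based on the Actor's usual output.
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={
        "startUrls": [{"url": "https://docs.apify.com/"}],
        "proxyConfiguration": {"useApifyProxy": True},
    },
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text") or "",
        metadata={"source": item.get("url") or ""},
    ),
)

documents = loader.load()
print(f"Loaded {len(documents)} documents")
```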
You can access the Website Content Crawler programmatically from your own applications by using the Apify API. To use the Apify API, you’ll need an Apify account and your API token, which you can find in the Integrations settings in Apify Console.
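For example, the following sketch calls the run-sync-get-dataset-items endpoint from the OpenAPI definition below: it starts a crawl of a single URL, waits for the run to finish, and prints the extracted content. The token and start URL are placeholders; the `url` and `text` output fields are not described by the schema below, so treat them as assumptions.

```python
import requests

# Placeholder values - replace with your own Apify API token and start URL.
APIFY_TOKEN = "<YOUR_APIFY_TOKEN>"
START_URL = "https://docs.apify.com/"

# Run the Actor synchronously and get its dataset items back in a single call.
# The endpoint path, the `token` query parameter, and the required input fields
# (startUrls, proxyConfiguration) all come from the OpenAPI definition below.
response = requests.post(
    "https://api.apify.com/v2/acts/apify~website-content-crawler"
    "/run-sync-get-dataset-items",
    params={"token": APIFY_TOKEN},
    json={
        "startUrls": [{"url": START_URL}],
        "proxyConfiguration": {"useApifyProxy": True},
    },
    timeout=600,
)
response.raise_for_status()

# Each item describes one crawled page; `url` and `text` are the Actor's
# usual output fields (an assumption, not part of the schema below).
for item in response.json():
    print(item.get("url"), "-", len(item.get("text") or ""), "characters of text")
```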
```json
{
  "openapi": "3.0.1",
  "info": {
    "version": "0.3",
    "x-build-id": "I5qN8P3QPpcwEIleP"
  },
  "servers": [
    {
      "url": "https://api.apify.com/v2"
    }
  ],
  "paths": {
    "/acts/apify~website-content-crawler/run-sync-get-dataset-items": {
      "post": {
        "operationId": "run-sync-get-dataset-items-apify-website-content-crawler",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    },
    "/acts/apify~website-content-crawler/runs": {
      "post": {
        "operationId": "runs-sync-apify-website-content-crawler",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor and returns information about the initiated run in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/runsResponseSchema"
                }
              }
            }
          }
        }
      }
    },
    "/acts/apify~website-content-crawler/run-sync": {
      "post": {
        "operationId": "run-sync-apify-website-content-crawler",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "inputSchema": {
        "type": "object",
        "required": [
          "startUrls",
          "proxyConfiguration"
        ],
        "properties": {
          "startUrls": {
            "title": "Start URLs",
            "type": "array",
            "description": "One or more URLs of pages where the crawler will start.\n\nBy default, the Actor will also crawl sub-pages of these URLs. For example, for start URL `https://example.com/blog`, it will crawl also `https://example.com/blog/post` or `https://example.com/blog/article`. The **Include URLs (globs)** option overrides this automation behavior.",
            "items": {
              "type": "object",
              "required": [
                "url"
              ],
              "properties": {
                "url": {
                  "type": "string",
                  "title": "URL of a web page",
                  "format": "uri"
                }
              }
            }
          },
          "useSitemaps": {
            "title": "Consider URLs from Sitemaps",
            "type": "boolean",
            "description": "If enabled, the crawler will look for [Sitemaps](https://en.wikipedia.org/wiki/Sitemaps) at the domains of the provided *Start URLs* and enqueue matching URLs similarly as the links found on crawled pages. You can also reference a `sitemap.xml` file directly by adding it as another Start URL (e.g. `https://www.example.com/sitemap.xml`)\n\nThis feature makes the crawling more robust on websites that support Sitemaps, as it includes pages that might be not reachable from Start URLs. Note that if a page is found via a Sitemap, it will have depth 1.",
            "default": false
          },
          "crawlerType": {
            "title": "Crawler type",
            "enum": [
              "playwright:adaptive",
              "playwright:firefox",
              "playwright:chrome",
              "cheerio",
              "jsdom"
            ],
            "type": "string",
            "description": "Select the crawling engine:\n- **Headless web browser** - Useful for modern websites with anti-scraping protections and JavaScript rendering. It recognizes common blocking patterns like CAPTCHAs and automatically retries blocked requests through new sessions. However, running web browsers is more expensive as it requires more computing resources and is slower. It is recommended to use at least 8 GB of RAM.\n- **Stealthy web browser** (default) - Another headless web browser with anti-blocking measures enabled. Try this if you encounter bot protection while scraping. For best performance, use with Apify Proxy residential IPs. \n- **Adaptive switching between Chrome and raw HTTP client** - The crawler automatically switches between raw HTTP for static pages and Chrome browser (via Playwright) for dynamic pages, to get the maximum performance wherever possible. \n- **Raw HTTP client** - High-performance crawling mode that uses raw HTTP requests to fetch the pages. It is faster and cheaper, but it might not work on all websites.\n\nBeware that with the raw HTTP client or adaptive crawling mode, some features are not available, e.g. wait for dynamic content, maximum scroll height, or remove cookie warnings.",
            "default": "playwright:firefox"
          },
          "includeUrlGlobs": {
            "title": "Include URLs (globs)",
            "type": "array",
            "description": "Glob patterns matching URLs of pages that will be included in crawling. \n\nSetting this option will disable the default Start URLs based scoping and will allow you to customize the crawling scope yourself. Note that this affects only links found on pages, but not **Start URLs** - if you want to crawl a page, make sure to specify its URL in the **Start URLs** field. \n\nFor example `https://{store,docs}.example.com/**` lets the crawler to access all URLs starting with `https://store.example.com/` or `https://docs.example.com/`, and `https://example.com/**/*\\?*foo=*` allows the crawler to access all URLs that contain `foo` query parameter with any value.\n\nLearn more about globs and test them [here](https://www.digitalocean.com/community/tools/glob?comments=true&glob=https%3A%2F%2Fexample.com%2Fscrape_this%2F%2A%2A&matches=false&tests=https%3A%2F%2Fexample.com%2Ftools%2F&tests=https%3A%2F%2Fexample.com%2Fscrape_this%2F&tests=https%3A%2F%2Fexample.com%2Fscrape_this%2F123%3Ftest%3Dabc&tests=https%3A%2F%2Fexample.com%2Fdont_scrape_this).",
            "default": [],
            "items": {
              "type": "object",
              "required": [
                "glob"
              ],
              "properties": {
                "glob": {
                  "type": "string",
                  "title": "Glob of a web page"
                }
              }
            }
          },
          "excludeUrlGlobs": {
            "title": "Exclude URLs (globs)",
            "type": "array",
            "description": "Glob patterns matching URLs of pages that will be excluded from crawling. Note that this affects only links found on pages, but not **Start URLs**, which are always crawled. \n\nFor example `https://{store,docs}.example.com/**` excludes all URLs starting with `https://store.example.com/` or `https://docs.example.com/`, and `https://example.com/**/*\\?*foo=*` excludes all URLs that contain `foo` query parameter with any value.\n\nLearn more about globs and test them [here](https://www.digitalocean.com/community/tools/glob?comments=true&glob=https%3A%2F%2Fexample.com%2Fdont_scrape_this%2F%2A%2A&matches=false&tests=https%3A%2F%2Fexample.com%2Ftools%2F&tests=https%3A%2F%2Fexample.com%2Fdont_scrape_this%2F&tests=https%3A%2F%2Fexample.com%2Fdont_scrape_this%2F123%3Ftest%3Dabc&tests=https%3A%2F%2Fexample.com%2Fscrape_this).",
            "default": [],
            "items": {
              "type": "object",
              "required": [
                "glob"
              ],
              "properties": {
                "glob": {
                  "type": "string",
                  "title": "Glob of a web page"
                }
              }
            }
          },
          "keepUrlFragments": {
            "title": "URL #fragments identify unique pages",
            "type": "boolean",
            "description": "Indicates that URL fragments (e.g. <code>http://example.com<b>#fragment</b></code>) should be included when checking whether a URL has already been visited or not. Typically, URL fragments are used for page navigation only and therefore they should be ignored, as they don't identify separate pages. However, some single-page websites use URL fragments to display different pages; in such a case, this option should be enabled.",
            "default": false
          },
          "ignoreCanonicalUrl": {
            "title": "Ignore canonical URLs",
            "type": "boolean",
            "description": "If enabled, the Actor will ignore the canonical URL reported by the page, and use the actual URL instead. You can use this feature for websites that report invalid canonical URLs, which causes the Actor to skip those pages in results.",
            "default": false
          },
          "maxCrawlDepth": {
            "title": "Max crawling depth",
            "minimum": 0,
            "type": "integer",
            "description": "The maximum number of links starting from the start URL that the crawler will recursively follow. The start URLs have depth `0`, the pages linked directly from the start URLs have depth `1`, and so on.\n\nThis setting is useful to prevent accidental crawler runaway. By setting it to `0`, the Actor will only crawl the Start URLs.",
            "default": 20
          },
          "maxCrawlPages": {
            "title": "Max pages",
            "minimum": 0,
            "type": "integer",
            "description": "The maximum number pages to crawl. It includes the start URLs, pagination pages, pages with no content, etc. The crawler will automatically finish after reaching this number. This setting is useful to prevent accidental crawler runaway.",
            "default": 9999999
          },
          "initialConcurrency": {
            "title": "Initial concurrency",
            "minimum": 0,
            "maximum": 999,
            "type": "integer",
            "description": "The initial number of web browsers or HTTP clients running in parallel. The system scales the concurrency up and down based on the current CPU and memory load. If the value is set to 0 (default), the Actor uses the default setting for the specific crawler type.\n\nNote that if you set this value too high, the Actor will run out of memory and crash. If too low, it will be slow at start before it scales the concurrency up.",
            "default": 0
          },
          "maxConcurrency": {
            "title": "Max concurrency",
            "minimum": 1,
            "maximum": 999,
            "type": "integer",
            "description": "The maximum number of web browsers or HTTP clients running in parallel. This setting is useful to avoid overloading the target websites and to avoid getting blocked.",
            "default": 200
          },
          "initialCookies": {
            "title": "Initial cookies",
            "type": "array",
            "description": "Cookies that will be pre-set to all pages the scraper opens. This is useful for pages that require login. The value is expected to be a JSON array of objects with `name` and `value` properties. For example: `[{\"name\": \"cookieName\", \"value\": \"cookieValue\"}]`.\n\nYou can use the [EditThisCookie](https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) browser extension to copy browser cookies in this format, and paste it here.",
            "default": []
          },
          "proxyConfiguration": {
            "title": "Proxy configuration",
            "type": "object",
            "description": "Enables loading the websites from IP addresses in specific geographies and to circumvent blocking.",
            "default": {
              "useApifyProxy": true
            }
          },
          "maxSessionRotations": {
            "title": "Maximum number of session rotations",
            "minimum": 0,
            "maximum": 20,
            "type": "integer",
            "description": "The maximum number of times the crawler will rotate the session (IP address + browser configuration) on anti-scraping measures like CAPTCHAs. If the crawler rotates the session more than this number and the page is still blocked, it will finish with an error.",
            "default": 10
          },
          "maxRequestRetries": {
            "title": "Maximum number of retries on network / server errors",
            "minimum": 0,
            "maximum": 20,
            "type": "integer",
            "description": "The maximum number of times the crawler will retry the request on network, proxy or server errors. If the (n+1)-th request still fails, the crawler will mark this request as failed.",
            "default": 5
          },
          "requestTimeoutSecs": {
            "title": "Request timeout",
            "minimum": 1,
            "maximum": 600,
            "type": "integer",
            "description": "Timeout in seconds for making the request and processing its response. Defaults to 60s.",
            "default": 60
          },
          "minFileDownloadSpeedKBps": {
            "title": "Minimum file download speed",
            "type": "integer",
            "description": "The minimum viable file download speed in kilobytes per seconds. If the file download speed is lower than this value for a prolonged duration, the crawler will consider the file download as failing, abort it, and retry it again (up to \"Maximum number of retries\" times). This is useful to avoid your crawls being stuck on slow file downloads.",
            "default": 128
          },
          "dynamicContentWaitSecs": {
            "title": "Wait for dynamic content",
            "type": "integer",
            "description": "The maximum time in seconds to wait for dynamic page content to load. By default, it is 10 seconds. The crawler will continue processing the page either if this time elapses, or if it detects the network became idle as there are no more requests for additional resources.\n\nWhen using the **Wait for selector** option, the crawler will wait for the selector to appear for this amount of time. If the selector doesn't appear within this period, the request will fail and will be retried.\n\nNote that this setting is ignored for the raw HTTP client, because it doesn't execute JavaScript or loads any dynamic resources. Similarly, if the value is set to `0`, the crawler doesn't wait for any dynamic to load and processes the HTML as provided on load.",
            "default": 10
          },
          "waitForSelector": {
            "title": "Wait for selector",
            "type": "string",
            "description": "If set, the crawler will wait for the specified CSS selector to appear in the page before proceeding with the content extraction. This is useful for pages for which the default content load recognition by idle network fails. Setting this option completely disables the default behavior, and the page will be processed only if the element specified by this selector appears. If the element doesn't appear within the **Wait for dynamic content** timeout, the request will fail and will be retried later. The value must be a valid CSS selector as accepted by the `document.querySelectorAll()` function.\n\nWith the raw HTTP client, this option checks for the presence of the selector in the HTML content and throws an error if it's not found.",
            "default": ""
          },
          "maxScrollHeightPixels": {
            "title": "Maximum scroll height",
            "minimum": 0,
            "type": "integer",
            "description": "The crawler will scroll down the page until all content is loaded (and network becomes idle), or until this maximum scrolling height is reached. Setting this value to `0` disables scrolling altogether.\n\nNote that this setting is ignored for the raw HTTP client, because it doesn't execute JavaScript or loads any dynamic resources.",
            "default": 5000
          },
          "keepElementsCssSelector": {
            "title": "Keep HTML elements (CSS selector)",
            "type": "string",
            "description": "An optional CSS selector matching HTML elements that should be preserved in the DOM. If provided, all HTML elements which are not matching the CSS selectors or their descendants are removed from the DOM. This is useful to extract only relevant page content. The value must be a valid CSS selector as accepted by the `document.querySelectorAll()` function. \n\nThis option runs before the `HTML transformer` option. If you are missing content in the output despite using this option, try disabling the `HTML transformer`.",
            "default": ""
          },
          "removeElementsCssSelector": {
            "title": "Remove HTML elements (CSS selector)",
            "type": "string",
            "description": "A CSS selector matching HTML elements that will be removed from the DOM, before converting it to text, Markdown, or saving as HTML. This is useful to skip irrelevant page content. The value must be a valid CSS selector as accepted by the `document.querySelectorAll()` function. \n\nBy default, the Actor removes common navigation elements, headers, footers, modals, scripts, and inline image. You can disable the removal by setting this value to some non-existent CSS selector like `dummy_keep_everything`.",
            "default": "nav, footer, script, style, noscript, svg, img[src^='data:'],\n[role=\"alert\"],\n[role=\"banner\"],\n[role=\"dialog\"],\n[role=\"alertdialog\"],\n[role=\"region\"][aria-label*=\"skip\" i],\n[aria-modal=\"true\"]"
          },
          "removeCookieWarnings": {
            "title": "Remove cookie warnings",
            "type": "boolean",
            "description": "If enabled, the Actor will try to remove cookies consent dialogs or modals, using the [I don't care about cookies](https://addons.mozilla.org/en-US/firefox/addon/i-dont-care-about-cookies/) browser extension, to improve the accuracy of the extracted text. Note that there is a small performance penalty if this feature is enabled.\n\nThis setting is ignored when using the raw HTTP crawler type.",
            "default": true
          },
          "expandIframes": {
            "title": "Expand iframe elements",
            "type": "boolean",
            "description": "By default, the Actor will extract content from `iframe` elements. If you want to specifically skip `iframe` processing, disable this option. Works only for the `playwright:firefox` crawler type.",
            "default": true
          },
          "clickElementsCssSelector": {
            "title": "Expand clickable elements",
            "type": "string",
            "description": "A CSS selector matching DOM elements that will be clicked. This is useful for expanding collapsed sections, in order to capture their text content. The value must be a valid CSS selector as accepted by the `document.querySelectorAll()` function. ",
            "default": "[aria-expanded=\"false\"]"
          },
          "htmlTransformer": {
            "title": "HTML transformer",
            "enum": [
              "readableTextIfPossible",
              "readableText",
              "extractus",
              "none"
            ],
            "type": "string",
            "description": "Specify how to transform the HTML to extract meaningful content without any extra fluff, like navigation or modals. The HTML transformation happens after removing and clicking the DOM elements.\n\n- **Readable text with fallback** - Extracts the main contents of the webpage, without navigation and other fluff while carefully checking the content integrality.\n\n- **Readable text** (default) - Extracts the main contents of the webpage, without navigation and other fluff.\n- **Extractus** - Uses Extractus library.\n- **None** - Only removes the HTML elements specified via 'Remove HTML elements' option.\n\nYou can examine output of all transformers by enabling the debug mode.\n",
            "default": "readableText"
          },
          "readableTextCharThreshold": {
            "title": "Readable text extractor character threshold",
            "type": "integer",
            "description": "A configuration options for the \"Readable text\" HTML transformer. It contains the minimum number of characters an article must have in order to be considered relevant.",
            "default": 100
          },
          "aggressivePrune": {
            "title": "Remove duplicate text lines",
            "type": "boolean",
            "description": "This is an **experimental feature**. If enabled, the crawler will prune content lines that are very similar to the ones already crawled on other pages, using the Count-Min Sketch algorithm. This is useful to strip repeating content in the scraped data like menus, headers, footers, etc. In some (not very likely) cases, it might remove relevant content from some pages.",
            "default": false
          },
          "debugMode": {
            "title": "Debug mode (stores output of all HTML transformers)",
            "type": "boolean",
            "description": "If enabled, the Actor will store the output of all types of HTML transformers, including the ones that are not used by default, and it will also store the HTML to Key-value Store with a link. All this data is stored under the `debug` field in the resulting Dataset.",
            "default": false
          },
          "debugLog": {
            "title": "Debug log",
            "type": "boolean",
            "description": "If enabled, the actor log will include debug messages. Beware that this can be quite verbose.",
            "default": false
          },
          "saveHtml": {
            "title": "Save HTML to dataset (deprecated)",
            "type": "boolean",
            "description": "If enabled, the crawler stores full transformed HTML of all pages found to the output dataset under the `html` field. **This option has been deprecated** in favor of the `saveHtmlAsFile` option, because the dataset records have a size of approximately 10MB and it's harder to review the HTML for debugging.",
            "default": false
          },
          "saveHtmlAsFile": {
            "title": "Save HTML to key-value store",
            "type": "boolean",
            "description": "If enabled, the crawler stores full transformed HTML of all pages found to the default key-value store and saves links to the files as `htmlUrl` field in the output dataset. Storing HTML in key-value store is preferred to storing it into the dataset with the `saveHtml` option, because there's no size limit and it's easier for debugging as you can easily view the HTML.",
            "default": false
          },
          "saveMarkdown": {
            "title": "Save Markdown",
            "type": "boolean",
            "description": "If enabled, the crawler converts the transformed HTML of all pages found to Markdown, and stores it under the `markdown` field in the output dataset.",
            "default": true
          },
          "saveFiles": {
            "title": "Save files",
            "type": "boolean",
            "description": "If enabled, the crawler downloads files linked from the web pages, as long as their URL has one of the following file extensions: PDF, DOC, DOCX, XLS, XLSX, and CSV. Note that unlike web pages, the files are downloaded regardless if they are under **Start URLs** or not. The files are stored to the default key-value store, and metadata about them to the output dataset, similarly as for web pages.",
            "default": false
          },
          "saveScreenshots": {
            "title": "Save screenshots (headless browser only)",
            "type": "boolean",
            "description": "If enabled, the crawler stores a screenshot for each article page to the default key-value store. The link to the screenshot is stored under the `screenshotUrl` field in the output dataset. It is useful for debugging, but reduces performance and increases storage costs.\n\nNote that this feature only works with the `playwright:firefox` crawler type.",
            "default": false
          },
          "maxResults": {
            "title": "Max results",
            "minimum": 0,
            "type": "integer",
            "description": "The maximum number of resulting web pages to store. The crawler will automatically finish after reaching this number. This setting is useful to prevent accidental crawler runaway. If both **Max pages** and **Max results** are defined, then the crawler will finish when the first limit is reached. Note that the crawler skips pages with the canonical URL of a page that has already been crawled, hence it might crawl more pages than there are results.",
            "default": 9999999
          },
          "textExtractor": {
            "title": "Text extractor (deprecated)",
            "type": "string",
            "description": "Removed in favor of the `htmlTransformer` option. Will be removed soon."
          },
          "clientSideMinChangePercentage": {
            "title": "(Adaptive crawling only) Minimum client-side content change percentage",
            "minimum": 1,
            "type": "integer",
            "description": "The least amount of content (as a percentage) change after the initial load required to consider the pages client-side rendered",
            "default": 15
          },
          "renderingTypeDetectionPercentage": {
            "title": "(Adaptive crawling only) How often should the crawler attempt to detect page rendering type",
            "minimum": 1,
            "maximum": 100,
            "type": "integer",
            "description": "How often should the adaptive attempt to detect page rendering type",
            "default": 10
          }
        }
      },
      "runsResponseSchema": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "properties": {
              "id": {
                "type": "string"
              },
              "actId": {
                "type": "string"
              },
              "userId": {
                "type": "string"
              },
              "startedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "finishedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "status": {
                "type": "string",
                "example": "READY"
              },
              "meta": {
                "type": "object",
                "properties": {
                  "origin": {
                    "type": "string",
                    "example": "API"
                  },
                  "userAgent": {
                    "type": "string"
                  }
                }
              },
              "stats": {
                "type": "object",
                "properties": {
                  "inputBodyLen": {
                    "type": "integer",
                    "example": 2000
                  },
                  "rebootCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "restartCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "resurrectCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "computeUnits": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "options": {
                "type": "object",
                "properties": {
                  "build": {
                    "type": "string",
                    "example": "latest"
                  },
                  "timeoutSecs": {
                    "type": "integer",
                    "example": 300
                  },
                  "memoryMbytes": {
                    "type": "integer",
                    "example": 1024
                  },
                  "diskMbytes": {
                    "type": "integer",
                    "example": 2048
                  }
                }
              },
              "buildId": {
                "type": "string"
              },
              "defaultKeyValueStoreId": {
                "type": "string"
              },
              "defaultDatasetId": {
                "type": "string"
              },
              "defaultRequestQueueId": {
                "type": "string"
              },
              "buildNumber": {
                "type": "string",
                "example": "1.0.0"
              },
              "containerUrl": {
                "type": "string"
              },
              "usage": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "integer",
                    "example": 1
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "usageTotalUsd": {
                "type": "number",
                "example": 0.00005
              },
              "usageUsd": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "number",
                    "example": 0.00005
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```
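The synchronous endpoints above keep the HTTP connection open while the Actor runs, so they can time out on long crawls. For those cases, the definition also exposes the /runs endpoint, which only starts a run and immediately returns the run object described by runsResponseSchema. The sketch below shows that flow under a few assumptions: the token and start URL are placeholders, and the polling and dataset-download calls use general Apify API endpoints that are not part of this definition.

```python
import time

import requests

APIFY_TOKEN = "<YOUR_APIFY_TOKEN>"  # placeholder
API_BASE = "https://api.apify.com/v2"

# Start an asynchronous run; the response body matches runsResponseSchema above.
run = requests.post(
    f"{API_BASE}/acts/apify~website-content-crawler/runs",
    params={"token": APIFY_TOKEN},
    json={
        "startUrls": [{"url": "https://docs.apify.com/"}],
        "proxyConfiguration": {"useApifyProxy": True},
    },
    timeout=60,
).json()["data"]

# Poll the run until it reaches a terminal status.
# GET /v2/actor-runs/{runId} is a general Apify API endpoint, not part of this definition.
while run["status"] not in ("SUCCEEDED", "FAILED", "ABORTED", "TIMED-OUT"):
    time.sleep(10)
    run = requests.get(
        f"{API_BASE}/actor-runs/{run['id']}",
        params={"token": APIFY_TOKEN},
        timeout=60,
    ).json()["data"]

# Download the results from the run's default dataset (also a general Apify API endpoint).
items = requests.get(
    f"{API_BASE}/datasets/{run['defaultDatasetId']}/items",
    params={"token": APIFY_TOKEN, "format": "json"},
    timeout=60,
).json()

print(run["status"], "-", len(items), "pages crawled")
```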
Website Content Crawler OpenAPI definition
OpenAPI is a standard for designing and describing RESTful APIs, allowing developers to define API structure, endpoints, and data formats in a machine-readable way. It simplifies API development, integration, and documentation.
OpenAPI works well with AI agents and GPTs because it standardizes how these systems interact with various APIs, enabling reliable integrations and efficient communication.
By defining machine-readable API specifications, OpenAPI allows AI models like GPTs to understand and use varied data sources, improving accuracy. This accelerates development, reduces errors, and provides context-aware responses, making OpenAPI a core component for AI applications.
You can download the OpenAPI definitions for Website Content Crawler from the options below:
If you’d like to learn more about how OpenAPI powers GPTs, read our blog post.
Actor Metrics
5.4k monthly users
990 bookmarks
>99% runs succeeded
1 day response time
Created in Mar 2023
Modified 13 days ago