
Blog / Dated Content Crawler
Crawl an entire blog / knowledge base or filter to just the new content. Supporting relevant AI queries by filtering pages by date
Rating: 5.0 (2)
Pricing: Pay per usage
Monthly users: 5
Runs succeeded: 98%
Last modified: 5 days ago
You can access the Blog / Dated Content Crawler programmatically from your own applications by using the Apify API. You can also choose your preferred client language below. To use the Apify API, you'll need an Apify account and your API token, which you can find under Integrations in Apify Console settings.
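As a minimal sketch of calling the `run-sync-get-dataset-items` endpoint from the definition below with only the Python standard library (the blog URL is a placeholder, and you must substitute your own API token before sending the request):

```python
import json
from urllib import parse, request

APIFY_TOKEN = "<YOUR_API_TOKEN>"  # placeholder: paste your token from Apify Console > Integrations

# Endpoint taken from the OpenAPI definition below; the token is passed as a query parameter.
base = "https://api.apify.com/v2/acts/diarmuidr~blog-content-crawler/run-sync-get-dataset-items"
url = base + "?" + parse.urlencode({"token": APIFY_TOKEN})

# Input matching the Actor's inputSchema: startUrls is the only required field.
# The URL here is a hypothetical example, not a real target.
run_input = {
    "startUrls": [{"url": "https://example.com/blog"}],
    "relativeStartDate": "2 weeks",  # only keep posts from the last two weeks
}

req = request.Request(
    url,
    data=json.dumps(run_input).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually run the Actor (requires a valid token and network access):
# with request.urlopen(req) as resp:
#     items = json.load(resp)  # the Actor's dataset items
```

The synchronous endpoint blocks until the run finishes; for long crawls, the `/runs` endpoint (which returns immediately with run metadata) is usually the better fit.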
{
  "openapi": "3.0.1",
  "info": {
    "version": "0.0",
    "x-build-id": "TRwyd32uxlYO29IXs"
  },
  "servers": [
    {
      "url": "https://api.apify.com/v2"
    }
  ],
  "paths": {
    "/acts/diarmuidr~blog-content-crawler/run-sync-get-dataset-items": {
      "post": {
        "operationId": "run-sync-get-dataset-items-diarmuidr-blog-content-crawler",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    },
    "/acts/diarmuidr~blog-content-crawler/runs": {
      "post": {
        "operationId": "runs-sync-diarmuidr-blog-content-crawler",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor and returns information about the initiated run in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/runsResponseSchema"
                }
              }
            }
          }
        }
      }
    },
    "/acts/diarmuidr~blog-content-crawler/run-sync": {
      "post": {
        "operationId": "run-sync-diarmuidr-blog-content-crawler",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "inputSchema": {
        "type": "object",
        "required": [
          "startUrls"
        ],
        "properties": {
          "startUrls": {
            "title": "Start URLs",
            "type": "array",
            "description": "URLs to start with.",
            "items": {
              "type": "object",
              "required": [
                "url"
              ],
              "properties": {
                "url": {
                  "type": "string",
                  "title": "URL of a web page",
                  "format": "uri"
                }
              }
            }
          },
          "relativeStartDate": {
            "title": "Date",
            "pattern": "^(\\d+)(\\s*(day|week|month|year)s?)?$",
            "type": "string",
            "description": "Optional: Select a relative start date by which to filter pages. Any post published before this date will be excluded. The date is calculated relative to the time the crawler is run (e.g. '2 weeks' includes posts from 2 weeks ago to now). Use the format 'X days', 'X weeks', 'X months', or 'X years'. If no time period is given, the value is treated as a number of seconds."
          },
          "startDate": {
            "title": "Date",
            "pattern": "^(\\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\\d|3[01])$",
            "type": "string",
            "description": "Optional: Select a start date by which to filter pages. Any post published before this date will be excluded. Use the format YYYY-MM-DD."
          },
          "dateSelector": {
            "title": "Date selector",
            "type": "string",
            "description": "Optional: CSS selector identifying where the date is located on the page. This can make date filtering more accurate if the crawler doesn't detect the date correctly on the first pass. Use this if there are multiple dates on the page."
          },
          "includeResultsWithNoDate": {
            "title": "Include results with no date",
            "type": "boolean",
            "description": "Optional: If true, the crawler will include results where no date was detected on the page.",
            "default": false
          },
          "waitForSelector": {
            "title": "Wait for selector",
            "type": "string",
            "description": "CSS selector to wait for before the page is considered loaded. Useful when content on the page loads asynchronously. This is optional and in most cases won't be needed."
          },
          "removeElementsCssSelector": {
            "title": "Remove HTML elements",
            "type": "string",
            "description": "Optional: CSS selector identifying elements to remove from the page. This is useful for removing elements that are not part of the content, for example the header and footer of the page.",
            "default": "nav, footer, script, style, noscript, svg, img[src^='data:'],\n[role=\"alert\"],\n[role=\"banner\"],\n[role=\"dialog\"],\n[role=\"alertdialog\"],\n[role=\"region\"][aria-label*=\"skip\" i],\n[aria-modal=\"true\"]"
          },
          "requestTimeoutSecs": {
            "title": "Request Timeout",
            "minimum": 1,
            "maximum": 600,
            "type": "integer",
            "description": "Optional: How long the crawler will wait for a request to complete. Default is 60 seconds.",
            "default": 60
          },
          "dynamicContentWaitSecs": {
            "title": "Wait for dynamic content",
            "minimum": 1,
            "maximum": 60,
            "type": "integer",
            "description": "Optional: Used in conjunction with `waitForSelector`. How long the crawler will wait for the selector to appear. Default is 10 seconds.",
            "default": 10
          },
          "useSitemaps": {
            "title": "Use Sitemaps",
            "type": "boolean",
            "description": "Optional: If checked, the crawler will use the sitemap.xml file to find additional URLs to crawl.",
            "default": true
          },
          "filterByDocumentLastModified": {
            "title": "Filter by document last modified",
            "type": "boolean",
            "description": "Optional: If true, the crawler will include the document's 'last modified' date in the output.",
            "default": false
          },
          "proxyConfiguration": {
            "title": "Proxy configuration",
            "type": "object",
            "description": "Select proxies to be used by your crawler."
          },
          "crawlerType": {
            "title": "Crawler Type",
            "enum": [
              "adaptive:firefox",
              "adaptive:chrome",
              "playwright:firefox",
              "playwright:chrome"
            ],
            "type": "string",
            "description": "Select the crawler type to use. Adaptive crawlers run faster by automatically switching between raw HTTP and browser requests. Firefox is less likely to be blocked by websites but may occasionally fail compared to Chrome. NOTE: You cannot use the adaptive crawler options with the 'filter by document last modified' option.",
            "default": "adaptive:firefox"
          },
          "selectUrlsBy": {
            "title": "Select URLs by",
            "enum": [
              "subpath",
              "all",
              "same-hostname",
              "same-domain",
              "same-origin"
            ],
            "type": "string",
            "description": "Optional: Select URLs by different kinds of patterns. This is similar to includeUrlGlobs but allows for simpler selection. If `includeUrlGlobs` is provided, this option is ignored.\n Subpath (default): Selects URLs that are under the path of the startUrls provided.\n All: Selects all URLs found.\n Same Hostname: Selects URLs that have the same hostname (does not include subdomains).\n Same Domain: Selects URLs that have the same domain (i.e. includes subdomains).\n Same Origin: Selects URLs that have the same hostname and protocol (http/https).",
            "default": "subpath"
          },
          "includeUrlGlobs": {
            "title": "Include URL globs",
            "type": "array",
            "description": "Optional: An array of URL globs to include in the crawl. If not provided, the crawler will include only URLs under the path of the startUrls provided. You can test globs on https://www.digitalocean.com/community/tools/glob.",
            "items": {
              "type": "object",
              "required": [
                "glob"
              ],
              "properties": {
                "glob": {
                  "type": "string",
                  "title": "Glob of a web page"
                }
              }
            }
          },
          "excludeUrlGlobs": {
            "title": "Exclude URL globs",
            "type": "array",
            "description": "Optional: An array of URL globs to exclude from the crawl in order to avoid crawling a particular page or set of pages.",
            "items": {
              "type": "object",
              "required": [
                "glob"
              ],
              "properties": {
                "glob": {
                  "type": "string",
                  "title": "Glob of a web page"
                }
              }
            }
          },
          "maxCrawlPages": {
            "title": "Maximum number of pages to crawl",
            "type": "integer",
            "description": "Optional: The maximum number of pages to visit. This is useful for limiting the crawl to a specific number of pages to avoid overspending."
          }
        }
      },
      "runsResponseSchema": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "properties": {
              "id": {
                "type": "string"
              },
              "actId": {
                "type": "string"
              },
              "userId": {
                "type": "string"
              },
              "startedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "finishedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "status": {
                "type": "string",
                "example": "READY"
              },
              "meta": {
                "type": "object",
                "properties": {
                  "origin": {
                    "type": "string",
                    "example": "API"
                  },
                  "userAgent": {
                    "type": "string"
                  }
                }
              },
              "stats": {
                "type": "object",
                "properties": {
                  "inputBodyLen": {
                    "type": "integer",
                    "example": 2000
                  },
                  "rebootCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "restartCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "resurrectCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "computeUnits": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "options": {
                "type": "object",
                "properties": {
                  "build": {
                    "type": "string",
                    "example": "latest"
                  },
                  "timeoutSecs": {
                    "type": "integer",
                    "example": 300
                  },
                  "memoryMbytes": {
                    "type": "integer",
                    "example": 1024
                  },
                  "diskMbytes": {
                    "type": "integer",
                    "example": 2048
                  }
                }
              },
              "buildId": {
                "type": "string"
              },
              "defaultKeyValueStoreId": {
                "type": "string"
              },
              "defaultDatasetId": {
                "type": "string"
              },
              "defaultRequestQueueId": {
                "type": "string"
              },
              "buildNumber": {
                "type": "string",
                "example": "1.0.0"
              },
              "containerUrl": {
                "type": "string"
              },
              "usage": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "integer",
                    "example": 1
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "usageTotalUsd": {
                "type": "number",
                "example": 0.00005
              },
              "usageUsd": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "number",
                    "example": 0.00005
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
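The `startDate` and `relativeStartDate` fields in the input schema carry validation patterns, so malformed date filters can be caught locally before a run is submitted. The sketch below simply reuses the regular expressions from the schema; `validate_date_filters` is a hypothetical helper name, not part of the Apify API:

```python
import re

# Patterns copied verbatim from the inputSchema above.
RELATIVE_START_DATE = re.compile(r"^(\d+)(\s*(day|week|month|year)s?)?$")
START_DATE = re.compile(r"^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

def validate_date_filters(run_input: dict) -> list:
    """Return a list of human-readable problems with the date filters (empty if valid)."""
    problems = []
    rel = run_input.get("relativeStartDate")
    if rel is not None and not RELATIVE_START_DATE.match(rel):
        problems.append(
            f"relativeStartDate {rel!r} must look like '2 weeks' or a bare number of seconds"
        )
    abs_date = run_input.get("startDate")
    if abs_date is not None and not START_DATE.match(abs_date):
        problems.append(f"startDate {abs_date!r} must use the format YYYY-MM-DD")
    return problems
```

Note that the `startDate` pattern only checks the format, not calendar validity, so a string like `2025-02-30` would still pass.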
Blog / Dated Content Crawler OpenAPI definition
OpenAPI is a standard for designing and describing RESTful APIs, allowing developers to define API structure, endpoints, and data formats in a machine-readable way. It simplifies API development, integration, and documentation.
OpenAPI works well with AI agents and GPTs because it standardizes how these systems interact with APIs, enabling reliable integrations and efficient communication.
By defining machine-readable API specifications, OpenAPI allows AI models like GPTs to understand and use varied data sources, improving accuracy. This accelerates development, reduces errors, and provides context-aware responses, making OpenAPI a core component for AI applications.
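As an illustration of how an agent can consume such a definition, the sketch below walks an OpenAPI `paths` object and lists the available operations. The embedded spec is a trimmed excerpt of the definition above, kept inline so the example is self-contained:

```python
import json

# Trimmed excerpt of the Actor's OpenAPI definition (paths only), for illustration.
SPEC = json.loads("""
{
  "paths": {
    "/acts/diarmuidr~blog-content-crawler/run-sync-get-dataset-items": {
      "post": {
        "operationId": "run-sync-get-dataset-items-diarmuidr-blog-content-crawler",
        "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response."
      }
    },
    "/acts/diarmuidr~blog-content-crawler/runs": {
      "post": {
        "operationId": "runs-sync-diarmuidr-blog-content-crawler",
        "summary": "Executes an Actor and returns information about the initiated run in response."
      }
    }
  }
}
""")

def list_operations(spec: dict) -> list:
    """Flatten an OpenAPI 'paths' object into (method, path, operationId) triples."""
    ops = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            ops.append((method.upper(), path, op.get("operationId", "")))
    return ops

for method, path, op_id in list_operations(SPEC):
    print(f"{method} {path} -> {op_id}")
```

An agent that performs this kind of traversal over the full definition can discover every callable endpoint, its input schema, and its response shape without any hand-written glue code.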
You can download the OpenAPI definitions for Blog / Dated Content Crawler from the options below:
If you’d like to learn more about how OpenAPI powers GPTs, read our blog post.
You can also check out our other API clients.
Pricing
Pricing model: Pay per usage
This Actor is paid per platform usage. The Actor itself is free to use; you pay only for the Apify platform resources it consumes.