Camoufox Scraper

Developed by

Apify

Maintained by Community

Crawls websites with the stealthy Camoufox browser and the Playwright library, using provided server-side Node.js code. Supports both recursive crawling and lists of URLs, as well as logging in to websites.


Start URLs

startUrls (array, required)

URLs to start with
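As a sketch, each start URL is a request object with at least a `url` property (the URLs below are placeholders):

```javascript
// Sketch of a startUrls value; the URLs are placeholders.
// Each entry is a request object with at least a `url` property.
const startUrls = [
    { url: 'https://example.com' },
    { url: 'https://example.com/catalog' },
];

console.log(startUrls.length); // 2
```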

Glob Patterns

globs (array, optional)

Glob patterns to match links in the page that you want to enqueue. Combine with Link selector to tell the scraper where to find links. Omitting the Glob patterns will cause the scraper to enqueue all links matched by the Link selector.

Default value of this property is []
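A minimal sketch of a globs value, assuming a hypothetical site structure; the companion excludes option (described below) uses the same glob syntax:

```javascript
// Sketch: enqueue only product detail pages (hypothetical site structure).
// `excludes` is the separate "Exclude Glob Patterns" option, shown here
// only to illustrate that both options share the same glob syntax.
const input = {
    globs: ['https://example.com/products/**'],
    excludes: ['https://example.com/products/**/reviews'],
};
```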

Pseudo-URLs

pseudoUrls (array, optional)

Pseudo-URLs to match links in the page that you want to enqueue. Combine with Link selector to tell the scraper where to find links. Omitting the Pseudo-URLs will cause the scraper to enqueue all links matched by the Link selector.

Default value of this property is []
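As a sketch (assuming the pseudo-URL convention used by Apify's other scrapers, where a regular expression is wrapped in `[brackets]` inside an otherwise literal URL):

```javascript
// Sketch: a pseudo-URL wraps a regular expression in [brackets].
// This one matches any path under /category/ (hypothetical structure).
const pseudoUrls = [
    { purl: 'https://example.com/category/[.*]' },
];
```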

Exclude Glob Patterns

excludes (array, optional)

Glob patterns to match links in the page that you want to exclude from being enqueued.

Default value of this property is []

Link selector

linkSelector (string, optional)

CSS selector matching elements with 'href' attributes that should be enqueued.

Keep URL fragments

keepUrlFragments (boolean, optional)

URL fragments (the part of a URL after a #) are not considered when the scraper determines whether a URL has already been visited. This means that when adding URLs such as https://example.com/#foo and https://example.com/#bar, only the first will be visited. Turn this option on to tell the scraper to visit both.

Default value of this property is false

Respect the robots.txt file

respectRobotsTxtFile (boolean, optional)

If enabled, the crawler will consult the robots.txt file for the target website before crawling each page. At the moment, the crawler does not use any specific user agent identifier. The crawl-delay directive is also not supported yet.

Default value of this property is false

Page function

pageFunction (string, required)

Function executed for each request
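As a sketch, assuming the page function's context exposes a Playwright `page`, the current `request`, and a `log`, as in Apify's other Playwright-based scrapers:

```javascript
// Sketch of a page function; assumes the context exposes `page`,
// `request`, and `log`, as in Apify's Playwright-based scrapers.
async function pageFunction(context) {
    const { page, request, log } = context;
    const title = await page.title();
    log.info(`Scraping ${request.url}`);
    // The returned object is saved to the dataset.
    return { url: request.url, title };
}
```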

Proxy configuration

proxyConfiguration (object, required)

Specifies proxy servers that will be used by the scraper in order to hide its origin.

For details, see Proxy configuration in README.

Default value of this property is {"useApifyProxy":true}
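A sketch of a proxyConfiguration value using Apify Proxy; the proxy group name is an example, and you should use the groups available to your account:

```javascript
// Sketch of a proxyConfiguration value using Apify Proxy.
// The group name is an example; use the groups available to your account.
const proxyConfiguration = {
    useApifyProxy: true,
    apifyProxyGroups: ['RESIDENTIAL'],
};
```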

Proxy rotation

proxyRotation (enum, optional)

This property indicates the strategy of proxy rotation and can only be used in conjunction with Apify Proxy. The recommended setting automatically picks the best proxies from your available pool and rotates them evenly, discarding proxies that become blocked or unresponsive. If this strategy does not work for you for any reason, you may configure the scraper to either use a new proxy for each request, or to use one proxy as long as possible, until the proxy fails. IMPORTANT: This setting will only use your available Apify Proxy pool, so if you don't have enough proxies for a given task, no rotation setting will produce satisfactory results.

Value options:

"RECOMMENDED": string"PER_REQUEST": string"UNTIL_FAILURE": string

Default value of this property is "RECOMMENDED"

Session pool name

sessionPoolName (string, optional)

Use only English alphanumeric characters, dashes, and underscores. A session is a representation of a user. It has its own IP and cookies, which are then used together to emulate a real user. Usage of the sessions is controlled by the Proxy rotation option. By providing a session pool name, you enable sharing of those sessions across multiple Actor runs. This is very useful when you need specific cookies for accessing websites, or when a lot of your proxies are already blocked. Instead of trying randomly, a list of working sessions will be saved, and a new Actor run can reuse those sessions. Note that the IP lock on a session expires after 24 hours, unless the session is used again in that window.

Initial cookies

initialCookies (array, optional)

The provided cookies will be pre-set to all pages the scraper opens.

Default value of this property is []
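A sketch of an initialCookies value; the cookie name, value, and domain are placeholders, and the object shape follows Playwright's cookie format:

```javascript
// Sketch of an initialCookies value; names and values are placeholders.
// The object shape follows Playwright's cookie format.
const initialCookies = [
    { name: 'sessionid', value: 'abc123', domain: '.example.com', path: '/' },
];
```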

Run browsers in headless mode

headless (boolean, optional)

By default, browsers run in headless mode. You can toggle this off to run them in headful mode, which can help with certain rare anti-scraping protections but is slower and more costly.

Default value of this property is true

Ignore SSL errors

ignoreSslErrors (boolean, optional)

Scraper will ignore SSL certificate errors.

Default value of this property is false

Download media

downloadMedia (boolean, optional)

Scraper will download media such as images, fonts, videos and sounds. Disabling this may speed up the scrape, but certain websites could stop working correctly.

Default value of this property is true

Max request retries

maxRequestRetries (integer, optional)

Maximum number of times the request for the page will be retried in case of an error. Setting it to 0 means that the request will be attempted once and will not be retried if it fails.

Default value of this property is 3

Max pages per run

maxPagesPerCrawl (integer, optional)

Maximum number of pages that the scraper will open. 0 means unlimited.

Default value of this property is 0

Max result records

maxResultsPerCrawl (integer, optional)

Maximum number of results that will be saved to the dataset. The scraper will terminate afterwards. 0 means unlimited.

Default value of this property is 0

Max crawling depth

maxCrawlingDepth (integer, optional)

Defines how many links away from the start URLs the scraper will descend. 0 means unlimited.

Default value of this property is 0

Max concurrency

maxConcurrency (integer, optional)

Defines how many pages can be processed by the scraper in parallel. The scraper automatically increases and decreases concurrency based on available system resources. Use this option to set a hard limit.

Default value of this property is 50

Page load timeout

pageLoadTimeoutSecs (integer, optional)

Maximum time, in seconds, that the scraper will allow a web page to load.

Default value of this property is 60

Page function timeout

pageFunctionTimeoutSecs (integer, optional)

Maximum time, in seconds, that the scraper will wait for the page function to execute.

Default value of this property is 60

Navigation wait until

waitUntil (enum, optional)

The scraper will wait until the selected event is triggered in the page before executing the page function. Available events are domcontentloaded, load, and networkidle. See the Playwright docs.

Value options:

"networkidle": string"load": string"domcontentloaded": string

Default value of this property is "networkidle"

Pre-navigation hooks

preNavigationHooks (string, optional)

Async functions that are sequentially evaluated before the navigation. Good for setting additional cookies or browser properties before navigation. The function accepts two parameters, crawlingContext and gotoOptions, which are passed to the page.goto() function the crawler calls to navigate.
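A sketch of what the code inside this option might look like, assuming the (crawlingContext, gotoOptions) signature described above; note that the Actor input itself takes this code as a string:

```javascript
// Sketch of a pre-navigation hook; assumes the (crawlingContext, gotoOptions)
// signature described above. In the Actor input this code is provided
// as a string.
const preNavigationHooks = [
    async (crawlingContext, gotoOptions) => {
        // Example: extend the navigation timeout for slow pages.
        gotoOptions.timeout = 60_000;
    },
];
```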

Post-navigation hooks

postNavigationHooks (string, optional)

Async functions that are sequentially evaluated after the navigation. Good for checking if the navigation was successful. The function accepts crawlingContext as the only parameter.
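A sketch of a post-navigation hook, assuming crawlingContext carries the Playwright `page` and a `log` (as in Apify's other Playwright-based scrapers); the Actor input takes this code as a string:

```javascript
// Sketch of a post-navigation hook; assumes crawlingContext carries the
// Playwright `page` and a `log`. In the Actor input this code is
// provided as a string.
const postNavigationHooks = [
    async (crawlingContext) => {
        const { page, log } = crawlingContext;
        // Example: log the final URL after any redirects.
        log.info(`Landed on ${page.url()}`);
    },
];
```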

Dismiss cookie modals

closeCookieModals (boolean, optional)

Uses the "I don't care about cookies" browser extension. When enabled, the crawler will automatically try to dismiss cookie consent modals. This can be useful when crawling European websites that show cookie consent modals.

Default value of this property is false

Maximum scrolling distance in pixels

maxScrollHeightPixels (integer, optional)

The crawler will scroll down the page until all content is loaded or the maximum scrolling distance is reached. Setting this to 0 disables scrolling altogether.

Default value of this property is 5000

Debug log

debugLog (boolean, optional)

Debug messages will be included in the log. Use context.log.debug('message') to log your own debug messages.

Default value of this property is false

Browser log

browserLog (boolean, optional)

Console messages from the browser will be included in the log. This may result in the log being flooded by error messages, warnings, and other messages of little value, especially with high concurrency.

Default value of this property is false

Custom data

customData (object, optional)

This object will be available on pageFunction's context as customData.

Default value of this property is {}
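As a sketch, you might pass run-specific values in customData and read them inside the page function via context.customData (the keys below are hypothetical):

```javascript
// Sketch: pass run-specific values in customData and read them in the
// page function via context.customData. The keys are hypothetical.
const input = { customData: { label: 'nightly-run', maxPrice: 100 } };

async function pageFunction(context) {
    const { customData } = context;
    return { label: customData.label, maxPrice: customData.maxPrice };
}
```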

Dataset name

datasetName (string, optional)

Name or ID of the dataset that will be used for storing results. If left empty, the default dataset of the run will be used.

Key-value store name

keyValueStoreName (string, optional)

Name or ID of the key-value store that will be used for storing records. If left empty, the default key-value store of the run will be used.

Request queue name

requestQueueName (string, optional)

Name of the request queue that will be used for storing requests. If left empty, the default request queue of the run will be used.

Emulated operating system

os (array, optional)

Operating system to use for the fingerprint generation. Can be 'windows', 'macos', 'linux', or a list to randomly choose from.

Block image loading

block_images (boolean, optional)

Blocks image loading on the page. Saves bandwidth and speeds up scraping, but might make the scraper easier to detect.

Default value of this property is false

Block WebRTC

block_webrtc (boolean, optional)

Blocks WebRTC capabilities in Camoufox. Note that this might break some web applications.

Default value of this property is false

Block WebGL

block_webgl (boolean, optional)

Blocks WebGL capabilities in Camoufox. Note that this might break some websites.

Disable Cross-Origin-Opener-Policy

disable_coop (boolean, optional)

Disables the Cross-Origin-Opener-Policy.

GeoIP (⚠️ might not fully work with Apify Proxy ⚠️)

geoip (boolean, optional)

Calculate geo-data based on IP address. Might not fully work with Apify Proxy.

Default value of this property is false

Humanize cursor movement (speed in seconds)

humanize (string, optional)

Humanize cursor movement; the value sets the maximum duration of the movement in seconds.

Locales

locale (array, optional)

Locales to use in the browser / OS.

Fonts

fonts (array, optional)

Fonts to load into the browser / OS.

Custom fonts only

custom_fonts_only (boolean, optional)

If enabled, only custom fonts (from the fonts option) are used. This disables the default OS fonts.

Enable cache

enable_cache (boolean, optional)

Caches previously visited pages and requests in Camoufox.

Camoufox debug

debug (boolean, optional)

Prints the config being sent to Camoufox.

Pricing

Pricing model

Pay per usage

This Actor is free to use; you pay only for the Apify platform usage it consumes.