
Playwright Scraper
Crawls websites with headless Chromium, Chrome, or Firefox and the Playwright library, using provided server-side Node.js code. Supports both recursive crawling and lists of URLs, as well as logging in to websites.
Rating: 4.3 (7)
Pricing: Pay per usage
Total users: 1.6K
Monthly users: 266
Runs succeeded: 99%
Issues response: 29 days
Last modified: 16 days ago
You can access the Playwright Scraper programmatically from your own applications by using the Apify API. To use the Apify API, you'll need an Apify account and your API token, which you can find under Integrations settings in Apify Console.
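For example, a minimal sketch of such an API call using the apify-client package for Node.js might look as follows. The input fields startUrls and pageFunction follow the Actor's usual input schema, but verify them against the current schema before relying on this sketch.

import { ApifyClient } from 'apify-client';

// Authenticate with your Apify API token (kept in an environment variable here).
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Start a run of apify/playwright-scraper and wait for it to finish.
const run = await client.actor('apify/playwright-scraper').call({
  startUrls: [{ url: 'https://crawlee.dev' }],
  // The page function is the server-side Node.js code the Actor executes for every page it opens.
  pageFunction: `async function pageFunction({ page, request }) {
    return { url: request.url, title: await page.title() };
  }`,
});

// Read the scraped results from the run's default dataset.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.dir(items);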
{ "mcpServers": { "local-actors-mcp-server": { "command": "npx", "args": [ "-y", "@apify/actors-mcp-server", "--actors", "apify/playwright-scraper" ], "env": { "APIFY_TOKEN": "<YOUR_API_TOKEN>" } } }}
Configure MCP server with Playwright Scraper
You can interact with the MCP server via standard input/output (stdio), as shown above, which is ideal for local integrations and command-line tools such as the Claude desktop client. Alternatively, you can interact with the server through Server-Sent Events (SSE) to send messages and receive responses, which looks as follows:
{ "mcpServers": { "remote-actors-mcp-server": { "type": "sse", "url": "https://mcp.apify.com/sse?actors=apify/playwright-scraper", "headers": { "Authorization": "Bearer <YOUR_API_TOKEN>" } } }}
You can connect to the Apify MCP Server using clients like Tester MCP Client, or any other supported MCP client of your choice.
If you want to learn more about our Apify MCP implementation, check out our MCP documentation. To learn more about the Model Context Protocol in general, refer to the official MCP documentation or read our blog post.