Page Scraping Analyzer
Analyzes a webpage to figure out the best way to scrape its data. Provide a URL and the data points to find, and get back a detailed dashboard showing how the data can be scraped. Works with the initial and rendered HTML, JavaScript variables, and dynamically loaded data.
Page Scraping Analyzer is an Actor that helps its users find data sources on a website. Its main purpose is to let a user quickly analyze their options for extracting data from a website and to provide the CSS selectors, JavaScript code, and HTTP requests that can be used to extract the data.
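For illustration, a run takes a URL and the data points (keywords) to look for. The exact field names are defined by the Actor's input schema; the ones shown below are only indicative:

```json
{
  "url": "https://www.example.com/product/123",
  "keywords": ["$24.99", "SKU-98765", "In stock"]
}
```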
When to use Page Scraping Analyzer
Page Scraping Analyzer can be used as the first step in web scraper development. Its goal is to automate the manual process of analyzing a website with tools like the browser's developer tools or Postman, in order to:
- Analyze the structure of the website
- Find the CSS selectors of HTML elements containing a keyword
- Find keywords in additional sources that might not be visible on the screen, such as JSON-LD, metadata, or schema.org data
- Observe and replicate XHR requests that might contain the data a user wants to scrape
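For example, the keyword-to-selector lookup described above can be sketched with Cheerio. This is only an illustration of the idea, not the Actor's internal code, and the selector-building logic here is intentionally naive:

```js
// Illustrative sketch (not the Actor's code): find elements whose own text
// contains a keyword and build a simple CSS selector for each of them.
import * as cheerio from 'cheerio';

function findSelectorsForKeyword(html, keyword) {
  const $ = cheerio.load(html);
  const selectors = [];

  $('*').each((_, el) => {
    // Text of the element itself, without text inherited from its children.
    const ownText = $(el).clone().children().remove().end().text();
    if (ownText.includes(keyword)) {
      // Naive selector: tag name plus class list; real tools build more robust paths.
      const classes = ($(el).attr('class') || '').trim().split(/\s+/).filter(Boolean);
      selectors.push(el.tagName + (classes.length ? '.' + classes.join('.') : ''));
    }
  });

  return selectors;
}
```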
Where is data stored on a website?
There are many sources of data on a website, and some of them are not even visible on the screen. The same data point can be present in more than one source.
Here are some examples of where data can be stored on a website:
- Initial HTML response (can be scraped by HTTP-only scrapers like Cheerio)
  - HTML elements rendered on the server
  - Rich JSON data inside `<script>` tags (JSON-LD, schema.org, Next.js data)
- Rendered HTML (can only be scraped with a browser)
  - HTML elements rendered on the client
  - JavaScript variables available on the `window` object; the data can come from either:
    - the initial HTML response (can be parsed from the `<script>` tags with HTTP only)
    - XHR responses (loaded later, after the initial HTML response)
- XHR responses (can be scraped with HTTP-only scrapers like Cheerio)
  - Usually come as JSON data loaded from an internal API. Common formats are:
    - REST API
    - GraphQL API
    - WebSocket connections
  - Can also be in any other format, such as HTML snippets
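As an illustration of the first group of sources, rich JSON embedded in `<script>` tags can often be read from the initial HTML response alone, with no browser involved. The sketch below assumes a page that ships JSON-LD and is not part of the Actor itself:

```js
// Sketch: read JSON-LD blocks from the initial HTML response, no browser needed.
import * as cheerio from 'cheerio';

async function readJsonLd(url) {
  const response = await fetch(url);
  const html = await response.text();
  const $ = cheerio.load(html);

  // Collect every JSON-LD block; pages may contain several.
  const blocks = [];
  $('script[type="application/ld+json"]').each((_, el) => {
    try {
      blocks.push(JSON.parse($(el).html()));
    } catch {
      // Ignore blocks that are not valid JSON.
    }
  });
  return blocks;
}
```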
How Page Scraping Analyzer works
The Page Scraping Analyzer works in multiple steps, looking for data sources. For every step, it stores the sources it found and provides a CSS selector, JavaScript code, or an HTTP request that can be used to extract the data.
It uses both a browser and plain HTTP requests, so it can show all the options for scraping the available data.
With a browser:
1. Opens the page and records the initial HTML response. Finds all HTML elements and `<script>` tags containing the keywords.
2. Waits for the page to render. Finds all HTML elements and JavaScript variables containing the keywords. Stores a diff between the initial HTML response and the rendered HTML.
3. Waits for the page to load all XHR requests. Finds all XHR responses containing the keywords.
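A rough sketch of this browser pass, written with Playwright purely to illustrate the flow (the Actor's own implementation may differ):

```js
// Sketch of the browser pass: initial HTML vs. rendered HTML vs. XHR responses.
import { chromium } from 'playwright';

async function analyzeWithBrowser(url, keyword) {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Record XHR/fetch responses as they arrive.
  const xhrHits = [];
  page.on('response', async (response) => {
    const type = response.request().resourceType();
    if (type === 'xhr' || type === 'fetch') {
      const body = await response.text().catch(() => '');
      if (body.includes(keyword)) xhrHits.push(response.url());
    }
  });

  // Step 1: the initial HTML response, before any client-side rendering.
  const mainResponse = await page.goto(url);
  const initialHtml = mainResponse ? await mainResponse.text() : '';

  // Steps 2 and 3: the rendered HTML after the page settles and XHRs finish.
  await page.waitForLoadState('networkidle');
  const renderedHtml = await page.content();

  await browser.close();
  return {
    inInitialHtml: initialHtml.includes(keyword),
    inRenderedHtml: renderedHtml.includes(keyword),
    xhrHits,
  };
}
```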
With HTTP:
1. The same as step 1 with the browser. This tells you whether the same data can be scraped with HTTP-only scrapers like Cheerio, which is much faster and cheaper.
2. Tries to replicate the XHR requests recorded by the browser to see if they can be reproduced with HTTP only:
   - First, it tries only generic HTTP headers.
   - If that fails, it tries the headers recorded by the browser, without cookies.
   - If that fails, it tries the headers recorded by the browser, with cookies (this can still be automated, but it requires getting the cookies from the browser first and then reusing them for the subsequent HTTP requests).
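In code, this fallback chain might look roughly like the sketch below. The `recorded` object (URL, method, headers, body captured from the browser) is a hypothetical shape, not the Actor's data format:

```js
// Sketch: try to replay a recorded XHR request with progressively more of the
// original browser headers, stopping at the first variant that succeeds.
async function replayXhr(recorded) {
  const variants = [
    { name: 'generic headers', headers: { accept: 'application/json' } },
    { name: 'browser headers without cookies', headers: withoutCookie(recorded.headers) },
    { name: 'browser headers with cookies', headers: recorded.headers },
  ];

  for (const variant of variants) {
    const response = await fetch(recorded.url, {
      method: recorded.method,
      headers: variant.headers,
      // For GET requests, `recorded.body` is expected to be undefined.
      body: recorded.body,
    });
    if (response.ok) return { variant: variant.name, status: response.status };
  }
  return null; // None of the variants worked with plain HTTP.
}

// Assumes lowercase header names, as recorded by most browser tooling.
function withoutCookie(headers) {
  const { cookie, ...rest } = headers;
  return rest;
}
```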
What scraping methods to choose after analysis
Some websites will require combining multiple sources of data. Some sources are faster and cheaper to use, and some come in nicer formats. Generally, it is best to try them in this order:
1. XHR requests with HTTP - extremely fast and cheap, and usually in a nice format like JSON. Might require combining multiple requests to get all the data. Might require replicating complex headers and a request body.
2. `<script>` tags from the initial HTML response - often contain all the data in a nice JSON format. Requires parsing the JSON out of the script text.
3. HTML elements from the initial HTML response - requires using multiple CSS selectors to get all the data.
4. JavaScript variables from the rendered HTML - usually contain all the data in nice JavaScript objects.
5. HTML elements from the rendered HTML - requires using multiple CSS selectors to get all the data.
6. Intercepting XHR requests with a browser - requires waiting and sometimes interaction with the page. Might require combining multiple requests to get all the data.
7. HTML elements rendered after all XHR responses were processed - requires long waiting and sometimes interaction with the page. Requires using multiple CSS selectors to get all the data.
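For example, option 4 (JavaScript variables from the rendered HTML) can be sketched with Playwright. The variable name `__NEXT_DATA__` is only one common example and may not exist on a given site; the analyzer's report tells you which variables actually contain your keywords:

```js
// Sketch: read a global JavaScript variable from the rendered page.
import { chromium } from 'playwright';

async function readWindowVariable(url, variableName = '__NEXT_DATA__') {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle' });

  const value = await page.evaluate((name) => {
    // Runs in the page context, so `window` is available here.
    // JSON round-trip keeps only serializable data.
    return window[name] ? JSON.parse(JSON.stringify(window[name])) : null;
  }, variableName);

  await browser.close();
  return value;
}
```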