Bloxy Tokens Info

Developed by Zhen
Maintained by Community

Get all token names and contract addresses from Bloxy.

Rating: 0.0 (0 reviews)
Pricing: Pay per usage
Total users: 5
Monthly users: 1
Runs succeeded: >99%
Last modified: 3 years ago
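The only input field main.js reads is maxPage, which caps how many listing pages are crawled (it feeds maxRequestsPerCrawl and defaults to 50). Below is a minimal example input, followed by one illustrative dataset record showing the fields the scraper extracts; the values are hypothetical, not real Bloxy data.

Example input:

{
    "maxPage": 5
}

Example dataset record (hypothetical values):

{
    "title": "SomeToken STK",
    "contract": "0x0000000000000000000000000000000000000000",
    "type": "ERC20",
    "senders_receivers": "1 234",
    "transfers7days": "5 678",
    "volume7days": "9 012 345",
    "topTransferTx": "123 456"
}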

Dockerfile

# This is a template for a Dockerfile used to run actors on the Apify platform.
# The base image name below is set during the actor build, based on user settings.
# IMPORTANT: The base image must set a correct working directory, such as /usr/src/app or /home/user
FROM apify/actor-node-puppeteer-chrome
# Copy just package.json and package-lock.json first, since they should be
# the only files that affect "npm install" in the next step; this speeds up rebuilds.
COPY package*.json ./
# Install NPM packages, skip optional and development dependencies to
# keep the image small. Avoid logging too much and print the dependency
# tree for debugging
RUN npm --quiet set progress=false \
&& npm install --only=prod --no-optional \
&& echo "Installed NPM packages:" \
&& (npm list --all || true) \
&& echo "Node.js version:" \
&& node --version \
&& echo "NPM version:" \
&& npm --version
# Copy the source code to the container.
# Do this in the last step, to keep builds fast when only the source code changes.
COPY --chown=myuser:myuser . ./
# NOTE: The CMD is already defined by the base image.
# Uncomment this for local node inspector debugging:
# CMD [ "node", "--inspect=0.0.0.0:9229", "main.js" ]

package.json

{
    "name": "apify-project",
    "version": "0.0.1",
    "description": "",
    "author": "It's not you it's me",
    "license": "ISC",
    "dependencies": {
        "apify": "^2.3.2"
    },
    "scripts": {
        "start": "node main.js"
    }
}

main.js

const Apify = require('apify');

Apify.main(async () => {
    // Apify.openRequestQueue() creates a preconfigured RequestQueue instance.
    // We add our first request to it - the initial page the crawler will visit.
    const requestQueue = await Apify.openRequestQueue();
    await requestQueue.addRequest({ url: 'https://bloxy.info/list_tokens/ERC20?page=1' });

    // Input may be null when the actor is started without any input,
    // so fall back to an empty object before reading maxPage below.
    const input = (await Apify.getInput()) || {};

    // Create an instance of the PuppeteerCrawler class - a crawler
    // that automatically loads the URLs in headless Chrome / Puppeteer.
    const crawler = new Apify.PuppeteerCrawler({
        requestQueue,
        launchContext: {
            // Here you can set options that are passed to the Puppeteer launch() function.
            launchOptions: {
                headless: true,
            },
        },

        // Stop crawling after the configured number of pages (default 50).
        maxRequestsPerCrawl: input.maxPage || 50,
        handlePageTimeoutSecs: 30,

        handlePageFunction: async ({ request, page }) => {
            console.log(`Processing ${request.url}...`);

            // A function to be evaluated by Puppeteer within the browser context.
            // For each row of the token table we extract the token name, its
            // contract address (parsed from the link URL) and the remaining columns.
            const data = await page.$$eval('tbody tr', ($rows) => {
                const scrapedData = [];
                $rows.forEach(($row) => {
                    const link = $row.querySelector('td:nth-child(1) a');
                    if (!link) return; // Skip rows without a token link.
                    const contract = link.href.split('token_holders/')[1];
                    scrapedData.push({
                        title: $row.querySelector('td:nth-child(1)').textContent.trim(),
                        contract,
                        type: $row.querySelector('td:nth-child(2)').textContent.trim(),
                        senders_receivers: $row.querySelector('td:nth-child(3)').textContent.trim(),
                        transfers7days: $row.querySelector('td:nth-child(4)').textContent.trim(),
                        volume7days: $row.querySelector('td:nth-child(5)').textContent.trim(),
                        topTransferTx: $row.querySelector('td:nth-child(6)').textContent.trim(),
                    });
                });
                return scrapedData;
            });
            console.log(data);

            // Store the results to the default dataset.
            await Apify.pushData(data);

            // Find a link to the next page and enqueue it if it exists.
            const infos = await Apify.utils.enqueueLinks({
                page,
                requestQueue,
                selector: '.next_page a',
            });

            if (infos.length === 0) console.log(`${request.url} is the last page!`);
        },

        // This function is called if the page processing failed more than maxRequestRetries + 1 times.
        handleFailedRequestFunction: async ({ request }) => {
            console.log(`Request ${request.url} failed too many times.`);
        },
    });

    // Run the crawler and wait for it to finish.
    await crawler.run();

    console.log('Crawler finished.');
});
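
After a run finishes, the scraped records sit in the actor's default dataset. The following is a minimal sketch of how to read them back using the same Apify SDK v2 API as above; the ERC20 filter is only an illustration of post-processing, not part of the actor itself:

const Apify = require('apify');

Apify.main(async () => {
    // Open the default dataset that handlePageFunction filled via Apify.pushData().
    const dataset = await Apify.openDataset();
    const { items } = await dataset.getData();
    console.log(`Read ${items.length} token records.`);

    // Illustrative post-processing: keep only rows whose type column is ERC20.
    const erc20 = items.filter((item) => item.type === 'ERC20');
    console.log(erc20.slice(0, 5));
});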