Download names from Amazon with Python
Use Amazon Bestsellers Scraper to get names from Amazon with Python. Want to grab names from Amazon? Amazon Bestsellers Scraper makes it quick and easy. Just tell it what to download and you’ll get your Amazon names available offline, whenever you want them.
1. Get an Apify account
You can’t get data out of the platform unless you’re signed in. So to get started, create an Apify account. It only takes a minute, and it’s free of charge.
Sign up for free
2. Initialize the API using your token
After you’ve registered, it’s time to add your secret authentication token. You can find your API token on the Integrations page in Apify Console.
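If you’d rather keep the token out of your source code, one option is to read it from an environment variable. Below is a minimal sketch; the variable name APIFY_TOKEN is just the name used in this example, not something the scraper requires.

import os

from apify_client import ApifyClient

# Read the API token from an environment variable instead of hard-coding it.
# The variable name APIFY_TOKEN is only a convention used in this sketch.
token = os.environ.get("APIFY_TOKEN")
if not token:
    raise RuntimeError("Set the APIFY_TOKEN environment variable to your Apify API token.")

client = ApifyClient(token)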
Get your token in Console
3. Define your input and copy it as JSON
To get the data from Amazon, you first need Amazon Bestsellers Scraper to extract it. So let’s define a simple input and transfer it to your code. You can copy your input as JSON from the Amazon Bestsellers Scraper’s Input tab in Console.
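The exact fields come from the Actor’s Input tab, so treat the snippet below as an illustration: it mirrors the values used in the sample code in step 4 (a category URL, a limit on items per start URL, and a crawl depth).

{
    "categoryUrls": ["https://www.amazon.com/Best-Sellers-Electronics/zgbs/electronics/"],
    "maxItemsPerStartUrl": 100,
    "depthOfCrawl": 1
}

This JSON maps one-to-one to the run_input dictionary passed to the Actor in the Python code below.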
4. Integrate Apify into your codebase
Finally, call the Amazon Bestsellers Scraper from your Python project, either with the Apify Client for Python or via the API endpoints. You’ll be able to export scraped Amazon data in no time by running the sample code below ↓.
5. Monitor your Amazon Bestsellers Scraper runs
Head over to our dashboard and see how Amazon Bestsellers Scraper runs are executed in real time. Here you can also download the run logs and keep an eye on the API’s performance.
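If you’d rather check on runs from code than from the dashboard, here is a minimal sketch using the Python client. The '<RUN_ID>' placeholder stands for the ID of one of your runs, which Console shows for each run and which client.actor(...).call() also returns in its result.

from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# '<RUN_ID>' is a placeholder - use the ID of one of your
# Amazon Bestsellers Scraper runs.
run = client.run("<RUN_ID>").get()
print("Run status:", run["status"])

# Download the run log as plain text.
print(client.run("<RUN_ID>").log().get())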
Go to dashboard
Get your Python project up and running
As an add-on to step 4, start your Python project by running this code snippet in your preferred environment.
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "categoryUrls": ["https://www.amazon.com/Best-Sellers-Electronics/zgbs/electronics/"],
    "maxItemsPerStartUrl": 100,
    "depthOfCrawl": 1,
}

# Run the Actor and wait for it to finish
run = client.actor("junglee/amazon-bestsellers").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start
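If you want the results as a local file rather than printed to the console, one option is to write the dataset items to CSV with Python’s standard library. This is just a sketch: the '<DATASET_ID>' placeholder corresponds to run["defaultDatasetId"] from the snippet above, and the output file name is arbitrary.

import csv

from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# '<DATASET_ID>' is a placeholder - in practice you'd use run["defaultDatasetId"]
# from the snippet above.
items = list(client.dataset("<DATASET_ID>").iterate_items())

if items:
    # Use the union of keys across items as CSV columns; missing fields stay empty.
    fieldnames = sorted({key for item in items for key in item})
    with open("amazon_bestsellers.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(items)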
Enjoy $5 of free platform usage every month to explore and kickstart your projects.
Get started on Apify instantly without the hassle of entering your credit card information.
Join our Discord community to ask questions, share ideas, and connect with developers.
Amazon Scraper
junglee/free-amazon-product-scraper
Gets you product data from Amazon. Unofficial API. Scrapes and downloads product information without using the Amazon API, including reviews, prices, descriptions, and ASIN.
3.9k
50
Amazon Reviews Scraper
junglee/amazon-reviews-scraper
Amazon scraper to extract reviews from Amazon products. Scrape and download detailed reviews without using the Amazon API, including rating score, review description, reactions and images. Download your data as HTML table, JSON, CSV, Excel, XML.
3.3k
57
Amazon Explorer
jupri/amazon-explorer
Scrape product data from Amazon.com
228
3
Ready to start downloading Amazon names?
You just need a free Apify account
Kick Game Scraper
mshopik/kick-game-scraper
Scrape Kick Game and extract data on footwear from kickgame.co.uk. Our Kick Game API lets you crawl product information and pricing. The saved data can be downloaded as HTML, JSON, CSV, Excel, and XML.
5
1
Website Content Crawler
apify/website-content-crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
28.6k
722
Apify’s wide range of tools use a technique called web scraping to extract public data from websites. These scrapers access the website the same way as you would with a browser, find the image, video, or text you want, and download it for you. They’re a fast and efficient way to get data at scale.
Web scraping is a handy method for collecting information from various websites. It's like having a digital assistant that visits web pages on your behalf, pulling out the details you need such as prices, descriptions, addresses, and contact information. But it's more than just text; this tool can also download images and videos, making it a comprehensive way to gather content from the online world. It takes care of all the complex, technical parts, so you don't have to.
Web scraping is a method where you choose websites to collect specific content, including text, images, and videos. You begin by identifying the web pages that host the visual media you're interested in. Next, you use a web scraping tool tailored to locate the parts of the page containing the images or videos you want to download. Once the tool is set up and run, it navigates to the chosen web pages, identifies the images and videos, and downloads them for you. It's a streamlined way to gather pictures and videos from online sources without having to manually download each item.
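To make that concrete, here is a rough sketch of the idea in Python, using the requests and BeautifulSoup libraries as stand-ins for a scraping tool (neither is required by anything above, and the page URL is purely hypothetical): fetch a page, find its image tags, and save each image locally.

import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

# Hypothetical page used purely for illustration.
page_url = "https://example.com/gallery"

response = requests.get(page_url, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

os.makedirs("downloads", exist_ok=True)
for index, img in enumerate(soup.find_all("img")):
    src = img.get("src")
    if not src:
        continue
    image_url = urljoin(page_url, src)  # resolve relative URLs against the page
    image = requests.get(image_url, timeout=30)
    image.raise_for_status()
    # The .jpg extension is assumed for simplicity; a real scraper would inspect the URL or headers.
    with open(os.path.join("downloads", f"image_{index}.jpg"), "wb") as file:
        file.write(image.content)

On the Apify platform, a ready-made Actor handles these mechanics for you, so you only define the input and collect the results.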
Yes, web scraping is legal for gathering public information from websites. But be careful with personal or confidential data, as well as intellectual property, because laws and regulations might protect them. It's good practice to check the website's rules or terms of service to know what's allowed. If you're not sure, getting legal advice can help ensure you're using web scraping correctly and within the law.
Actors are serverless cloud programs that run on the Apify platform and do computing jobs. They’re called Actors because, like human actors, they perform actions based on a script. They can perform anything from simple actions (such as filling out a web form or sending an email) to complex operations (such as crawling an entire website or removing duplicates from a large dataset). Actor runs can be as short or as long as necessary. They could last seconds, hours, or even run infinitely.