Download comments from LinkedIn with Python
Use Linkedin Company Ads to get comments from LinkedIn with Python. Want to grab comments from LinkedIn? Linkedin Company Ads makes it quick and easy. Just tell it what to download and you’ll get your LinkedIn comments available offline, whenever you want them.
1. Get an Apify account
You can’t get data from the platform unless you’re authorized on it, so to get started, create an Apify account. It only takes a minute and it's free of charge.
Sign up for free
2. Initialize the API using your token
After you’ve registered, it’s time to add your secret authentication token. You can find your API token on the Integrations page in Apify Console.
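For example, here is one minimal way to initialize the client once you have your token. Reading it from an environment variable is just one option, and the variable name below is only a suggestion:

import os

from apify_client import ApifyClient

# Keep the token out of your source code by reading it from an environment
# variable. APIFY_API_TOKEN is a suggested name; use whatever fits your setup.
client = ApifyClient(os.environ["APIFY_API_TOKEN"])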
Get your token in Console
3. Define input and copy it as JSON
To get the data from LinkedIn, you first need Linkedin Company Ads to extract it. So let’s define a simple input and transfer it to your code. You can copy your input as JSON from the Linkedin Company Ads Input tab in Console.
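The JSON you copy from the Input tab could look roughly like the sample below (the fields mirror the code sample further down; your own input may differ). One simple way to load it into a Python dict:

import json

# Example input JSON as copied from the Actor's Input tab. The fields are the
# same as in the sample code below; adjust them to your own use case.
input_json = """
{
  "companies": [
    "https://www.linkedin.com/company/financial-times/",
    "https://www.linkedin.com/company/bloomberg-news/"
  ],
  "ad_limit": 15,
  "ad_description": "yes"
}
"""

run_input = json.loads(input_json)
print(run_input["companies"])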
4. Integrate Apify into your codebase
Finally, call Linkedin Company Ads from your Python project, using either the Apify Client or the API endpoints. You’ll be able to export scraped LinkedIn data in no time by running the sample code below ↓.
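If you’d rather not use the client library, you can also call the Actor over Apify’s HTTP API. The sketch below uses the run-sync-get-dataset-items endpoint with the requests library; treat it as a rough illustration and check the API docs for the exact parameters:

import requests

API_TOKEN = "<YOUR_API_TOKEN>"

# Actor endpoints use "~" instead of "/" in the Actor name.
url = (
    "https://api.apify.com/v2/acts/saswave~linkedin-company-ads"
    "/run-sync-get-dataset-items"
)

run_input = {
    "companies": ["https://www.linkedin.com/company/financial-times/"],
    "ad_limit": 15,
    "ad_description": "yes",
}

# Runs the Actor synchronously and returns its dataset items as JSON.
response = requests.post(url, params={"token": API_TOKEN}, json=run_input, timeout=600)
response.raise_for_status()
for item in response.json():
    print(item)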
5. Monitor your Linkedin Company Ads runs
Head over to our dashboard and see how Linkedin Company Ads runs are executed in real time. Here you can also download the run logs and keep an eye on the API’s performance.
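You can also check on runs straight from Python. A minimal sketch, assuming the client’s last_run() and log() helpers (see the client docs for details):

from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# Look up the most recent run of the Actor and print its status.
last_run = client.actor("saswave/linkedin-company-ads").last_run()
run_info = last_run.get()
print(run_info["status"], run_info["startedAt"])

# Download the run log to inspect what happened.
print(last_run.log().get())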
Go to dashboard
Get your Python project up and running
As a follow-up to step 4, get your Python project started by running this code snippet in your preferred environment.
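Before running it, make sure the Apify API client for Python is installed in your environment; it’s typically distributed as the apify-client package (for example, pip install apify-client).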
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "companies": [
        "https://www.linkedin.com/company/financial-times/",
        "https://www.linkedin.com/company/bloomberg-news/",
    ],
    "ad_limit": 15,
    "ad_description": "yes",
}

# Run the Actor and wait for it to finish
run = client.actor("saswave/linkedin-company-ads").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start
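Once the run finishes, you may want to keep the results locally as well. Here is a small sketch of one way to save the dataset items as JSON and CSV files; the field names depend on what the Actor returns, so the CSV columns below are derived from the data itself:

import csv
import json

from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")
dataset_id = "<YOUR_DATASET_ID>"  # e.g. run["defaultDatasetId"] from the snippet above

items = list(client.dataset(dataset_id).iterate_items())

# Save the scraped items as JSON.
with open("linkedin_data.json", "w", encoding="utf-8") as f:
    json.dump(items, f, ensure_ascii=False, indent=2)

# Save the same items as CSV, using the union of all keys as columns.
if items:
    fieldnames = sorted({key for item in items for key in item})
    with open("linkedin_data.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(items)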
Enjoy $5 of free platform usage every month to explore and kickstart your projects.
Get started on Apify instantly without the hassle of entering your credit card information.
Join our Discord community to ask questions, share ideas, and connect with developers.
🔥 LinkedIn Jobs Scraper
bebity/linkedin-jobs-scraper
ℹ️ Designed for both personal and professional use: simply enter your desired job title and location to receive a tailored list of job opportunities. Try it today!
4.4k
96
Linkedin post scraper
curious_coder/linkedin-post-search-scraper
Scrape LinkedIn posts or updates from LinkedIn post search results. Supports advanced LinkedIn search filters. Extract posts from any LinkedIn member.
2.9k
98
Linkedin Employees Scraper
caprolok/linkedin-employees-scraper
Effortlessly gather LinkedIn URLs and names of employees in bulk. Ideal for HR and recruitment, this tool quickly provides essential contact information, simplifying talent search and networking opportunities.
601
17
Ready to start downloading LinkedIn comments?
You just need a free Apify account
Youtube Video Scraper by Hashtag
streamers/youtube-video-scraper-by-hashtag
Extract information about YouTube videos by specific hashtags. Get video URL, caption, timestamp, likes, dislikes, views and comments count, and basic channel info. You can download your data in JSON, CSV, Excel, and more.
63
3
YouTube Scraper
streamers/youtube-scraper
YouTube crawler and video scraper. Alternative YouTube API with no limits or quotas. Extract and download channel name, likes, number of views, and number of subscribers.
10.3k
241
Website Content Crawler
apify/website-content-crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
28.5k
719
Apify’s wide range of tools use a technique called web scraping to extract public data from websites. These scrapers access the website the same way as you would with a browser, find the image, video, or text you want, and download it for you. They’re a fast and efficient way to get data at scale.
Web scraping is a handy method for collecting information from various websites. It's like having a digital assistant that visits web pages on your behalf, pulling out the details you need such as prices, descriptions, addresses, and contact information. But it's more than just text; this tool can also download images and videos, making it a comprehensive way to gather content from the online world. It takes care of all the complex, technical parts, so you don't have to.
Web scraping is a method where you choose websites to collect specific content, including text, images, and videos. You begin by identifying the web pages that host the visual media you're interested in. Next, you use a web scraping tool tailored to locate the parts of the page containing the images or videos you want to download. Once the tool is set up and run, it navigates to the chosen web pages, identifies the images and videos, and downloads them for you. It's a streamlined way to gather pictures and videos from online sources without having to manually download each item.
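As a toy illustration of that flow (not Apify’s own tooling), here is roughly what locating and downloading images from a page looks like in Python with the requests and BeautifulSoup libraries; the page URL below is made up:

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

page_url = "https://example.com/gallery"  # hypothetical page with images

# Fetch the page and parse its HTML.
html = requests.get(page_url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Find every <img> tag and download the file it points to.
# (A real scraper would also check the file type instead of assuming .jpg.)
for index, img in enumerate(soup.find_all("img")):
    src = img.get("src")
    if not src:
        continue
    image_url = urljoin(page_url, src)
    image_data = requests.get(image_url, timeout=30).content
    with open(f"image_{index}.jpg", "wb") as f:
        f.write(image_data)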
Yes, web scraping is legal for gathering public information from websites. But be careful with personal or confidential data, as well as intellectual property, because laws and regulations might protect them. It's good practice to check the website's rules or terms of service to know what's allowed. If you're not sure, getting legal advice can help ensure you're using web scraping correctly and within the law.
Actors are serverless cloud programs that run on the Apify platform and do computing jobs. They’re called Actors because, like human actors, they perform actions based on a script. They can perform anything from simple actions (such as filling out a web form or sending an email) to complex operations (such as crawling an entire website or removing duplicates from a large dataset). Actor runs can be as short or as long as necessary. They could last seconds, hours, or even run infinitely.
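Because runs can be short or long, the Python client lets you either wait for an Actor to finish or just start it and check back later. A brief sketch, assuming the client’s start() and wait_for_finish() methods:

from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# call() starts a run and blocks until it finishes;
# start() only kicks the run off and returns immediately.
run = client.actor("saswave/linkedin-company-ads").start(
    run_input={"companies": ["https://www.linkedin.com/company/financial-times/"], "ad_limit": 15}
)

# Later, wait for that same run to finish (or keep polling its status).
finished = client.run(run["id"]).wait_for_finish()
print(finished["status"])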