Website Content to Markdown Scraper for LLM Training
Pay $4.99 for 1,000 results
Transform web content into clean, LLM-ready Markdown! Scrape multiple pages, extract main content, and convert to Markdown format. Perfect for AI researchers, data scientists, and LLM developers. Fast, efficient, and customizable. Supercharge your AI training data today!
# Website Content to Markdown Scraper for LLM Training
This powerful Apify Actor transforms web content into clean, readable Markdown format, perfect for training Large Language Models (LLMs). It's an essential tool for AI researchers, data scientists, and developers working on natural language processing tasks.
## Features
- Scrape content from multiple web pages
- Convert HTML to clean Markdown format
- Generate high-quality training data for LLMs
- Intelligent main content extraction
- Customizable crawling depth
- Option to stay within the same domain
- Fast and efficient concurrent scraping
- Stealth mode to avoid detection
## Input
Configure your scraping job with these options:
- `startUrls`: List of URLs to start scraping from
- `maxDepth`: Maximum depth of links to follow (default: 1)
- `sameDomain`: Whether to stay on the same domain while crawling (default: true)
- `maxResults`: Maximum number of pages to scrape (default: 100)
## Output
For each scraped page, you'll get:
- URL of the page
- Page title
- Main content in Markdown format, ideal for LLM training
## Use Cases
- LLM Training: Prepare web content as high-quality training data for language models
- Content Aggregation: Collect articles and blog posts for research or curation
- Web Analysis: Extract text content for sentiment analysis or topic modeling
- Documentation: Convert web-based documentation into Markdown for easy integration
- SEO Analysis: Extract and analyze content from competitor websites
## Getting Started
1. Set your input parameters in the Apify Console or via API
2. Run the Actor and watch as it transforms web content into Markdown
3. Access your results in JSON format, with Markdown content ready for LLM training or further processing
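If you go the API route, a run can be started with a single HTTP call to Apify's synchronous-run endpoint (`POST /v2/acts/{actorId}/run-sync-get-dataset-items`). The sketch below uses only the Python standard library; the Actor ID and token are placeholders you would replace with your own, and the input fields mirror the Input section above:

```python
import json
import urllib.request

# Placeholders -- substitute your own Actor ID and API token.
ACTOR_ID = "username~website-content-to-markdown-scraper"
APIFY_TOKEN = "YOUR_APIFY_TOKEN"


def build_input(start_urls, max_depth=1, same_domain=True, max_results=100):
    """Assemble the Actor input described in the Input section."""
    return {
        "startUrls": start_urls,
        "maxDepth": max_depth,
        "sameDomain": same_domain,
        "maxResults": max_results,
    }


def run_actor(actor_input):
    """Run the Actor synchronously and return its dataset items."""
    url = (
        f"https://api.apify.com/v2/acts/{ACTOR_ID}"
        f"/run-sync-get-dataset-items?token={APIFY_TOKEN}"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(actor_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires a valid token and a deployed Actor):
# items = run_actor(build_input(["https://apify.com"], max_depth=2))
```

Synchronous runs are convenient for small jobs; for larger crawls you would start an asynchronous run and fetch the dataset once it finishes.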
## Support
If you encounter any issues or have questions, please reach out through Apify's support channels.
Transform web content into clean, LLM-ready Markdown with just a few clicks!
## Input Example

An example input configuration in JSON:

```json
{
  "startUrls": ["https://apify.com"],
  "maxDepth": 2,
  "sameDomain": true,
  "maxResults": 100
}
```
## Output Sample

The results are wrapped into a dataset, which you can always find in the Storage tab. You can download your data in several formats: JSON, JSONL, Excel spreadsheet, HTML table, CSV, or XML. Here's a JSON excerpt of the data you'd get with the input parameters above:
```json
[
  {
    "url": "https://apify.com",
    "title": "Apify: Full-stack web scraping and data extraction platform",
    "markdown": "powering the world's top data-driven teams\n\n#### Simplify scraping with Crawlee\n\nGive your crawlers an unfair advantage with Crawlee, our popular library for building reliable scrapers in Node.js. ..."
  },
  {
    "url": "https://apify.com/actors",
    "title": "Actors - fast and easy scraping in the cloud · Apify",
    "markdown": "Actors are serverless cloud programs that run on the Apify platform and do computing jobs. They are called Actors because, like human actors, they perform actions based on a script. ..."
  },
  ...
]
```

(The `markdown` fields are truncated here for brevity; each one contains the full main content of the scraped page.)
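Once downloaded, the dataset items are easy to post-process. Here is a minimal sketch, using only the Python standard library, that writes each page's Markdown to its own file for LLM training prep. The file-naming scheme is an illustrative assumption, not part of the Actor:

```python
import pathlib


def save_markdown(items, out_dir="markdown_corpus"):
    """Write each scraped page's Markdown to its own .md file.

    `items` is a list of dicts shaped like the dataset excerpt above
    (keys: url, title, markdown). The slug-from-URL naming below is an
    illustrative choice, not something the Actor prescribes.
    """
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    paths = []
    for item in items:
        # Turn the URL into a safe file name, e.g. "apify.com_actors.md".
        slug = item["url"].split("://", 1)[-1].strip("/").replace("/", "_") or "index"
        path = out / f"{slug}.md"
        path.write_text(f"# {item['title']}\n\n{item['markdown']}", encoding="utf-8")
        paths.append(path)
    return paths
```

From there the corpus can be fed into whatever tokenization or fine-tuning pipeline you use.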