
🎧 Spotify Tracks/Songs Scraper
Pricing
$5.00/month + usage

🎤 Unlock the power of Spotify 🎧 with our Spotify Tracks Scraper! 🚀 Easily extract detailed track info 📊 from Spotify using keywords 🔍 or direct track URLs 🌐. Get rich data like track titles 🎵, artists 👩🎤, release dates 📅, duration ⏱️, popularity 🌟, available markets 🌍, and more!
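For illustration, a run's input might look something like the sketch below; the field names here (keywords, startUrls, maxItems) are assumptions for the example, not the actor's documented input schema:

```typescript
// Hypothetical input object -- field names are illustrative assumptions,
// not the actor's documented input schema.
const input = {
    keywords: ['lo-fi beats'],                          // free-text search queries
    startUrls: ['https://open.spotify.com/track/<track-id>'], // direct track URLs
    maxItems: 100,                                      // cap on returned tracks
};
```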
Rating: 0.0 (0)
Total users: 9
Monthly users: 8
Runs succeeded: >99%
Issue response: 5.1 hours
Last modified: 4 days ago
Probable blocking of actor by Spotify
Closed
Hello :)
Today I tried to use your actor, but it seems I triggered some kind of malfunction. I was attempting to extract data for 22,869 Spotify song URLs, but after downloading 1,668 URLs the process stopped: https://console.apify.com/actors/eCKKhLQE8vMrE0f9M/runs/oaYhpSYBnr4599Nm2#output
Later, I tried again, but after 20 minutes and 37 seconds of the actor's work it stopped without any output: https://console.apify.com/actors/eCKKhLQE8vMrE0f9M/runs/ss5TDIugIvzlpgrx9#output
I even tried one more time from another Apify account, fetching information using just a single test URL, but the actor wasn't working. It looks like Spotify may have blocked the actor.
I apologize for any inconvenience caused and thank you in advance.

Scrape Architect (ScrapeArchitect)
Ohh, sorry. The official Spotify API only accepts roughly 100 requests per 30 seconds, or about 1,000 requests per hour. It's out of my hands; it depends on the official Spotify API.
I have just tried it and it's working. The Spotify API's daily request count has reset for now, so you can try.
First, try a batch of up to 1,000 URLs/keywords in total. If that works, wait 1 minute and then request the next batch of up to 1,000; it will work that way. Don't submit more than 1,000 URLs at once. If it doesn't work, try batches of about 100. If that still fails, it means the API limit has been reached, so try again in a few hours or tomorrow, but don't input more than 1,000 URLs/keywords per run.
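A minimal sketch of this batch-and-wait pattern from the client side, using the apify-client package (the actor ID placeholder and the startUrls input field are assumptions here, not the actor's confirmed schema):

```typescript
// Minimal sketch of the batch-and-wait pattern using apify-client.
// The actor ID and the `startUrls` input field are assumptions.
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function scrapeInBatches(allUrls: string[], batchSize = 1000): Promise<unknown[]> {
    const results: unknown[] = [];
    for (let i = 0; i < allUrls.length; i += batchSize) {
        const batch = allUrls.slice(i, i + batchSize);
        // Start one actor run per batch and wait for it to finish.
        const run = await client.actor('<actor-id>').call({ startUrls: batch });
        // Collect whatever this run managed to scrape.
        const { items } = await client.dataset(run.defaultDatasetId).listItems();
        results.push(...items);
        // Wait about a minute before submitting the next batch.
        if (i + batchSize < allUrls.length) await sleep(60_000);
    }
    return results;
}
```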
Thanks a lot for opening the issue!
alionakushnareva
Hello!
Thank you very much for the quick reply and for your explanation regarding the Spotify API limits (good to know that they exist!). I also want to thank you for creating this scraper! 😊 I am currently using it to build a dataset for a learning project.
After your message, I contacted the Apify engineers. They replied that changes should be made in the actor's code. I'm sharing their feedback below; I hope it might be helpful in bypassing the request limit: "When facing a similar problem, we usually add artificial throttling directly into the actor – the developer should implement it, because manually imitating the delay would be unnecessarily complicated."
"If they are not using any token, they can just rotate proxies, which is done in the Crawlee templates: https://apify.com/templates. If they use some token, that means the rate limit is based on that. In that case, the easiest is to again use the Crawlee template and set maxRequestsPerMinute: https://crawlee.dev/js/docs/guides/scaling-crawlers#maxrequestsperminute"
Thanks again for your great work and for your time!

Scrape Architect (ScrapeArchitect)
Thank you so much for your kind words and helpful feedback! 😊 I'm really glad to hear you're using the scraper for your learning project, and I appreciate you reaching out to Apify engineers for additional guidance. I'll definitely look into adding throttling and maxRequestsPerMinute to improve rate limit handling. Thanks again for your support and suggestions—it really motivates me to keep improving the actor! 🙌
alionakushnareva
Hi again!
Thanks so much for adding warnings to the actor, I really appreciate it 😊 And sorry, but I've got a bit more feedback after testing it some more 😅
I ran into the rate limit twice over the past few days and got temporarily blocked. When I submit 1,000 URLs, it processes 30 of them, then skips 100 seconds, then another 30, and so on. I tried sending smaller batches of URLs, but the fewer I sent, the fewer results I got back.
In total, I've only managed to scrape around 2,000 out of 22,000 links, getting back batches of 20–300 URLs at a time. It's been slow and a bit frustrating, since the 1,000 requests/hour limit counts all requests, not just successful ones. So even with partial results, I'd hit the limit and get blocked.
My current workflow looks like: send 1,000 links → get 200–300 results → wait 1+ hour → repeat 🥲 But over time, the number of results per batch drops: from 300 to 250, then 100, then just 50 or so. All of this has made the process quite time-consuming, and I ended up narrowing the scope of my project to focus only on key tracks.
Just wanted to share this in case it's helpful for future improvements. Thanks again for your awesome work on the actor! 😊
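One client-side workaround for this quota pattern, sketched under the assumption that each scraped item records its source URL in a url field, is to resubmit only the still-missing URLs instead of the full list:

```typescript
// Sketch: spend the hourly quota only on still-missing tracks. Assumes
// each previously scraped item carries its source URL in a `url` field.
function remainingUrls(allUrls: string[], scraped: Array<{ url: string }>): string[] {
    const done = new Set(scraped.map((item) => item.url));
    return allUrls.filter((url) => !done.has(url));
}

// Usage: before each retry, feed only remainingUrls(allUrls, itemsSoFar)
// into the next batch rather than resubmitting all 1,000 links.
```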

Scrape Architect (ScrapeArchitect)
Thanks a lot, I will try my best ☺️