Uber Eats API Direct No HTML Scraper
Pricing: $0.005 / actor start
Uses a Google API to retrieve and calculate "place". This Actor sends headers, payloads and cookies and retrieves massive JSON files from two Uber Eats internal API endpoints: one for overall store information and one for each individual store. Check the run-time log to see a real-time summary. An RSS feed is available for alerts.
Developer: Amanda Dalziel
## Uber Eats API Direct No HTML Scraper: Read Me
I use a Google API to retrieve and calculate the "place" variable used by Uber Eats. This API costs real money, so I use a cache for the retrieved data.
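The cache itself isn't described in detail here; the following is only a minimal sketch of what a file-backed cache for the place lookup could look like. The file name, key format and the `lookup_place` callable are hypothetical, not the Actor's actual code:

```python
import json
from pathlib import Path

CACHE_FILE = Path("place_cache.json")  # hypothetical cache location

def load_cache() -> dict:
    # Load previously retrieved "place" values so the paid API is not called again.
    if CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())
    return {}

def get_place(address: str, lookup_place) -> str:
    # lookup_place is a stand-in for the paid Google API call.
    cache = load_cache()
    if address in cache:
        return cache[address]
    place = lookup_place(address)
    cache[address] = place
    CACHE_FILE.write_text(json.dumps(cache, indent=2))
    return place
```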
This Actor sends headers, payloads and cookies and retrieves massive JSON files from two Uber Eats internal API endpoints.
The first JSON file has overall information about available stores: `url = f"{BASE_URL}/_p/api/getFeedV1?localeCode={cc}&pl={pl}"`
The second set of JSON files must be requested for each individual store (again sent via headers, payloads and cookies): `url = f"{BASE_URL}/_p/api/getStoreV1?localeCode={cc}&pl={pl}"`
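As a rough illustration of how the two endpoints fit together (not the Actor's exact code), here is a sketch using `requests`. The header names, cookie names and request bodies below are assumptions; the real endpoints expect session cookies and POST payloads that this sketch only stubs out:

```python
import requests

BASE_URL = "https://www.ubereats.com"
cc = "en-AU"   # locale code (assumed value)
pl = "..."     # the encoded "place" value derived from the Google API

session = requests.Session()
session.headers.update({
    "x-csrf-token": "x",                  # assumption: header expected by the endpoints
    "content-type": "application/json",
})
session.cookies.update({"uev2.loc": pl})  # assumption: location cookie name and value

# 1) Feed endpoint: overall information about available stores.
feed_url = f"{BASE_URL}/_p/api/getFeedV1?localeCode={cc}&pl={pl}"
feed = session.post(feed_url, json={}).json()

# 2) Store endpoint: requested once per store found in the feed.
store_url = f"{BASE_URL}/_p/api/getStoreV1?localeCode={cc}&pl={pl}"
store = session.post(store_url, json={"storeUuid": "..."}).json()  # assumed payload shape
```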
Check your run-time log to see a real-time summary.
From the information retrieved (and returned in the result per store), I can determine whether a store is
- open
- closed
- turned off!
Sometimes a store can be too busy to handle the Uber Eats tablet and incoming orders as well.
From each store I can retrieve the opening hours and, more importantly, whether its orders are delivered by Uber Eats or by restaurant staff.
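The exact JSON structure isn't documented here, so the following is only a sketch of the kind of classification involved. The key names (`isOpen`, `isPaused`, `hours`, `deliveredBy`) are hypothetical placeholders, not the real getStoreV1 fields:

```python
def summarise_store(store: dict) -> dict:
    """Reduce a raw store JSON blob to the fields tracked between runs.

    All key names below are hypothetical; the real getStoreV1 response
    nests this information differently.
    """
    if store.get("isOpen"):
        status = "open"
    elif store.get("isPaused"):
        status = "turned off"   # too busy: the store has paused incoming orders
    else:
        status = "closed"

    return {
        "status": status,
        "hours": store.get("hours", []),
        "deliveredBy": store.get("deliveredBy", "uber"),  # "uber" or "restaurant staff"
    }
```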
I save all of this with an epoch timestamp, so the next run of the script can refer back to it and determine what has changed.
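A minimal sketch of that save-and-compare step, assuming one JSON snapshot file per run; the file name, record shape and the per-store summaries are assumptions:

```python
import json
import time
from pathlib import Path

SNAPSHOT_FILE = Path("last_run.json")  # hypothetical snapshot location

def detect_changes(current: dict) -> list:
    # Compare this run's per-store summaries against the previous run's.
    previous = json.loads(SNAPSHOT_FILE.read_text()) if SNAPSHOT_FILE.exists() else {}
    changes = []
    for store_id, summary in current.items():
        old = previous.get(store_id, {}).get("summary")
        if old != summary:
            changes.append(f"{store_id}: {old} -> {summary}")
    # Save the new snapshot with an epoch timestamp for the next run.
    snapshot = {sid: {"ts": int(time.time()), "summary": s} for sid, s in current.items()}
    SNAPSHOT_FILE.write_text(json.dumps(snapshot, indent=2))
    return changes
```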
I have this running locally on a cron job for Merimbula NSW.
The script also generates a dataset of alerts based on location, which can be used with the RSS feed feature to retrieve alerts.
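How the alert items are built isn't shown here; a rough sketch with the Apify Python SDK, assuming one dataset item per detected change (the field names and the `changes` input are illustrative):

```python
from apify import Actor

async def push_alerts(changes: list, location: str) -> None:
    # Must run inside an `async with Actor:` block.
    # Each change becomes one dataset item; the RSS feed integration reads from this dataset.
    for change in changes:
        await Actor.push_data({
            "location": location,  # e.g. "Merimbula NSW"
            "alert": change,
        })
```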
This website demonstrates how I am using the scraped data: https://coffeecakecomputers.com.au/ubereats/ai.php
Start a new web scraping project quickly and easily in Python with our empty project template. It provides a basic structure for the Actor with the Apify SDK and allows you to easily add your own functionality.
### Included features
- Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
- Input schema - define and easily validate a schema for your Actor's input
- Request queue - a queue into which you can put the URLs you want to scrape
- Dataset - a store for structured data where each stored object has the same attributes
### How it works
Insert your own code into the `async with Actor:` block. You can use the Apify SDK together with any other Python library.
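For example, a minimal `main.py` along these lines (a sketch of the template's structure using documented SDK calls, not this Actor's actual code; the input handling and data item are illustrative):

```python
from apify import Actor

async def main() -> None:
    # Everything inside this block runs with the Actor's lifecycle managed by the SDK.
    async with Actor:
        actor_input = await Actor.get_input() or {}
        Actor.log.info(f"Actor input: {actor_input}")

        # Your own scraping logic goes here; results are stored in the default dataset.
        await Actor.push_data({"example": "value"})
```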
### Resources
- Python tutorials in Academy
- Video guide on getting data using Apify API
- Integration with Make, GitHub, Zapier, Google Drive, and other apps
- A short guide on how to build web scrapers using code templates
### Getting started
For complete information see this article. In short, you will:
- Build the Actor
- Run the Actor
### Pull the Actor for local development
If you would like to develop locally, you can pull the existing Actor from the Apify Console using the Apify CLI:
1. Install `apify-cli`:

   Using Homebrew:

   `$ brew install apify-cli`

   Using NPM:

   `$ npm -g install apify-cli`

2. Pull the Actor by its unique `<ActorId>`, which is one of the following:

   - unique name of the Actor to pull (e.g. "apify/hello-world")
   - or ID of the Actor to pull (e.g. "E2jjCZBezvAZnX8Rb")

   You can find both by clicking on the Actor title at the top of the page, which will open a modal containing both the Actor unique name and the Actor ID.

   This command will copy the Actor into the current directory on your local machine.

   `$ apify pull <ActorId>`
### Documentation reference
To learn more about Apify and Actors, take a look at the following resources: