
Ubereats Stores Search By Location And Keyword
Experience instant restaurant insights with the Uber Eats Discovery by Location and Keyword Scraper. Enter a search term and a location to get real-time data—no URLs needed. Perfect for quick market research and competitor analysis, all in one efficient, easy-to-use tool.
Pricing
$16.99/month + usage
🌎 UberEats Search Scraper by Location and Keyword [RENTAL]
Discover a smarter way to access restaurant data with Uber Eats Discovery by Location and Keyword Scraper. Skip the hassle of copying URLs—just enter a search term and location to unlock real-time restaurant insights in seconds. Ideal for instant market research or competitive scouting, all in a single streamlined tool. Efficient, effortless, and designed for fast results.
✨ What does this scraper do?
- Searches for stores and restaurants on Uber Eats for one or more locations and a search keyword.
- Extracts relevant information from each store, such as name, address, promotions, city, and country.
- Allows you to limit the maximum number of results per search.
- Returns the data in structured JSON format, ready for analysis or integration (see the parsing sketch below).
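For instance, a downloaded dataset export can be explored in a few lines of Python (a minimal sketch; `stores.json` is a placeholder filename for your exported dataset):

```python
import json

# Load a dataset export from a finished run
# ("stores.json" is a placeholder filename).
with open("stores.json", encoding="utf-8") as f:
    stores = json.load(f)

# Example: print store names with their ratings, best-rated first.
rated = [s for s in stores if s.get("rating") is not None]
for store in sorted(rated, key=lambda s: s["rating"], reverse=True):
    print(f'{store["name"]}: {store["rating"]:.2f} ({store["rating_count"]} ratings)')
```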
🔍 How does it work?
- Input: The actor receives:
  - A list of addresses or locations.
  - A search keyword (e.g., "pizza", "sushi").
  - A maximum number of results per location.
- Process: For each location, the actor:
  - Navigates to Uber Eats and simulates a search as a real user.
  - Extracts data from the stores matching the search term.
  - Repeats the process for all specified locations.
- Output:
  - A JSON file containing the data of the discovered stores, including details like name, address, city, country, and promotions.
Example input
{"locations": ["95th Ave, Ozone Park, NY 11416, USA",],"search_keyword": "pizza","max_search_results": 20 // Each location will have a maximum of 20 results, If there is less than that fewer results will be returned}
Example output
[{"store_uuid": "35e7a890-3f2a-5948-815e-7a2a3cb77d1f","name": "Pizza Near Me","url": "https://www.ubereats.com/store/pizza-near-me/NeeokD8qWUiBXnoqPLd9Hw","estimated_time_to_delivery": "20 min","rating": 4.526315789473685,"rating_count": "14","images": ["https://tb-static.uber.com/prod/image-proc/processed_images/8018381e4969565eb152608e0b9d106e/fb86662148be855d931b37d6c1e5fcbe.webp","https://tb-static.uber.com/prod/image-proc/processed_images/8018381e4969565eb152608e0b9d106e/783282f6131ef2258e5bcd87c46aa87e.webp","https://tb-static.uber.com/prod/image-proc/processed_images/8018381e4969565eb152608e0b9d106e/8a42ee7a692dfa4155879820804a277f.webp","https://tb-static.uber.com/prod/image-proc/processed_images/8018381e4969565eb152608e0b9d106e/fdf52d66534809b650058f41d517d74a.webp","https://tb-static.uber.com/prod/image-proc/processed_images/8018381e4969565eb152608e0b9d106e/9b3aae4cf90f897799a5ed357d60e09d.webp","https://tb-static.uber.com/prod/image-proc/processed_images/8018381e4969565eb152608e0b9d106e/f6deb0afc24fee6f4bd31a35e6bcbd47.webp"],"city": "Newark","country_code": "US","extraction_date": "2025-07-18","extraction_datetime": "2025-07-18T05:44:25Z","input": {"search_location": "85 Marsh St, Newark, NJ 07114, USA"}}]
📚 How to use this actor
- In Apify Console: click "Run" and provide the input in JSON format as shown above.
- From code (Python/Node.js): run the actor using the Apify SDK or make an HTTP request to the Apify API, as in the sketch below.
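For example, with the official apify-client package for Python (a sketch; the API token and actor ID below are placeholders you must replace with your own values):

```python
from apify_client import ApifyClient

# Authenticate with your Apify API token (placeholder value).
client = ApifyClient("<YOUR_APIFY_TOKEN>")

run_input = {
    "locations": ["95th Ave, Ozone Park, NY 11416, USA"],
    "search_keyword": "pizza",
    "max_search_results": 20,
}

# "<username>/<actor-name>" is a placeholder for this actor's ID.
run = client.actor("<username>/<actor-name>").call(run_input=run_input)

# Iterate over the results stored in the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["name"], item.get("city"))
```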
🗒️ Technical notes
- The scraper uses asynchronous HTTP requests for efficiency.
- Retry mechanisms and cookie handling are implemented to simulate real user behavior.
- The code is prepared to handle pagination and large result sets.
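As an illustration of the retry pattern described above (a minimal sketch using httpx, not the actor's actual code):

```python
import asyncio

import httpx


async def fetch_with_retries(url: str, retries: int = 3) -> httpx.Response:
    """Fetch a URL, retrying with exponential backoff on failure."""
    # A shared AsyncClient keeps cookies across requests within the
    # session, similar to a real browser.
    async with httpx.AsyncClient(follow_redirects=True) as client:
        for attempt in range(retries):
            try:
                response = await client.get(url, timeout=30)
                response.raise_for_status()
                return response
            except httpx.HTTPError:
                if attempt == retries - 1:
                    raise
                await asyncio.sleep(2 ** attempt)  # back off 1 s, 2 s, ...


# Neutral example URL; in the actor's case this would be an Uber Eats
# search URL.
asyncio.run(fetch_with_retries("https://example.com"))
```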
⚠️ Limitations
- Scraping may break if Uber Eats changes its website structure or tightens its anti-bot measures.
- Proxies may be necessary for large volumes or to avoid blocking (see the sketch below).
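If the actor's input schema exposes a proxy option, enabling Apify Proxy would look roughly like the sketch below; the `proxyConfiguration` field is an assumption based on the common Apify convention, so verify it against this actor's input schema first.

```python
run_input = {
    "locations": ["95th Ave, Ozone Park, NY 11416, USA"],
    "search_keyword": "pizza",
    "max_search_results": 20,
    # Assumed field following the common Apify convention; confirm it
    # exists in this actor's input schema before relying on it.
    "proxyConfiguration": {"useApifyProxy": True},
}
```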
💬 Support
- If you need help, please contact us via email or any of the contact channels that Apify provides.
Credits
Developed and maintained by the ScraperUnit team 🌟.