Pinecone GPT Chatbot

tri_angle/pinecone-gpt-chatbot
Pinecone GPT Chatbot combines OpenAI's GPT models with Pinecone's vector database to generate insightful responses. Its interactive chatbot interface presents precise and comprehensive answers to user queries. Benefit from semantic understanding, efficient workflows, and enriched knowledge integration!

Pinecone GPT Chatbot is an Apify Actor designed to seamlessly integrate OpenAI's GPT models with the Pinecone database. Featuring an interactive chatbot application, it enables users to effortlessly generate rich and insightful responses to queries.

Why Use Pinecone GPT Chatbot?

  • Seamless Integration: Pinecone GPT Chatbot effortlessly connects OpenAI's GPT models with your Pinecone index, ensuring that the GPT model has access to the latest information.
  • Rich Responses: Pinecone GPT Chatbot navigates through your Pinecone documents, connects the knowledge together and presents insights with precision.
  • Efficient Workflows: Pair Pinecone GPT Chatbot with the WCC Pinecone Integration Actor to optimize your data pipeline. WCC Pinecone Integration gathers data using Website Content Crawler and stores the extracted data in the specified index of the Pinecone vector database. You can then seamlessly query that index with Pinecone GPT Chatbot, enhancing productivity and insights.
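The pairing above can be sketched with the apify-client Python package. This is an illustration, not official wiring: the WCC Pinecone Integration Actor ID and its input are placeholders, since only tri_angle/pinecone-gpt-chatbot is named on this page.

```python
def build_chatbot_input(openai_key: str, pinecone_key: str, index_name: str) -> dict:
    """Assemble the run input documented in the Input section of this README."""
    return {
        "openaiApiKey": openai_key,
        "pineconeApiKey": pinecone_key,
        "pineconeIndexName": index_name,
        "gptModel": "gpt-4-turbo",
        "temperature": 0.5,
        "topKResults": 10,
    }


def run_pipeline(apify_token: str, wcc_input: dict, chatbot_input: dict) -> dict:
    """Crawl and store vectors first, then start the chatbot against that index."""
    from apify_client import ApifyClient  # pip install apify-client

    client = ApifyClient(apify_token)
    # Placeholder Actor ID -- substitute the real WCC Pinecone Integration Actor.
    client.actor("YOUR_WCC_PINECONE_INTEGRATION_ACTOR").call(run_input=wcc_input)
    # The chatbot run keeps serving its web UI until it times out or is aborted.
    return client.actor("tri_angle/pinecone-gpt-chatbot").call(run_input=chatbot_input)
```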

Usage

Input

The Actor accepts the following input fields in JSON format:

| Input field | Description |
| --- | --- |
| 🔑 OpenAI API key | API key for connecting to OpenAI. Available at https://platform.openai.com/api-keys |
| 🔑 Pinecone API key | API key for connecting to an existing Pinecone index. Available at https://app.pinecone.io/ |
| 🔖 Pinecone index name | The name of the Pinecone index where the relevant vectors are stored. |
| 🤖 GPT model | GPT-4 Turbo or GPT-3.5 Turbo. |
| 🌡️ Temperature | The temperature for the GPT model, controlling randomness. Accepts values 0-1. |
| 📊 Top K results | The number of top results to fetch from the Pinecone index and use as context for the GPT model. |

Example Input

{
  "openaiApiKey": "YOUR_OPENAI_API_KEY",
  "pineconeApiKey": "YOUR_PINECONE_API_KEY",
  "pineconeIndexName": "apify_docs",
  "gptModel": "gpt-4-turbo",
  "temperature": 0.5,
  "topKResults": 10
}
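Before starting a run, it can help to sanity-check the input against the constraints in the table above. A minimal sketch — the field names come from this page, while the specific checks are assumptions:

```python
def validate_input(run_input: dict) -> list[str]:
    """Return a list of problems found in the run input (empty list means OK)."""
    problems = []
    # The two API keys and the index name are required for the Actor to connect.
    for key in ("openaiApiKey", "pineconeApiKey", "pineconeIndexName"):
        if not run_input.get(key):
            problems.append(f"missing required field: {key}")
    # Temperature is documented as accepting values 0-1.
    temperature = run_input.get("temperature", 0.5)
    if not (0 <= float(temperature) <= 1):
        problems.append("temperature must be between 0 and 1")
    # Top K must be a positive whole number of results to fetch.
    top_k = run_input.get("topKResults", 10)
    if not (isinstance(top_k, int) and top_k > 0):
        problems.append("topKResults must be a positive integer")
    return problems
```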

How It Works

  1. The Actor deploys an interactive chatbot application to a dedicated server. You can find the link to the application in the Actor's log, e.g.:

    Chatbot application is running on https://mckifnthonjq.runs.apify.net

    You can then interact with the application by providing questions as prompts.

  2. The Actor connects to an existing Pinecone index using the provided Pinecone API key and index name.

  3. It retrieves relevant vectors from the Pinecone index based on your queries.

  4. The Actor integrates the vector data with the selected GPT model (GPT-4 Turbo or GPT-3.5 Turbo).

  5. The GPT model uses the conversation history and context vectors to generate responses to your queries.

  6. Conversation history, along with source documents used for answering questions, is stored in the Actor's dataset.

NOTE: When the Actor run times out or is manually aborted, the chatbot application stops working. Each Actor run starts a new conversation, with no context carried over from previous conversations.
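Steps 2-5 follow the standard retrieval-augmented generation pattern: embed the question, fetch the top-K matches from Pinecone, and ask GPT with the retrieved passages as context. A minimal sketch of that pattern with the openai and pinecone Python packages — this illustrates the idea and is not the Actor's actual source code; the embedding model and the metadata field holding page text are assumptions:

```python
def build_prompt(question: str, contexts: list[str]) -> str:
    """Combine retrieved passages and the user question into a single prompt."""
    joined = "\n\n".join(contexts)
    return f"Answer using only this context:\n\n{joined}\n\nQuestion: {question}"


def answer(question: str, openai_key: str, pinecone_key: str, index_name: str,
           model: str = "gpt-4-turbo", temperature: float = 0.5, top_k: int = 10) -> str:
    from openai import OpenAI        # pip install openai
    from pinecone import Pinecone    # pip install pinecone

    oa = OpenAI(api_key=openai_key)
    # Embed the query (embedding model is an assumption, not documented here).
    emb = oa.embeddings.create(model="text-embedding-3-small", input=question)
    vector = emb.data[0].embedding
    # Retrieve the top-K most similar vectors with their stored metadata.
    index = Pinecone(api_key=pinecone_key).Index(index_name)
    results = index.query(vector=vector, top_k=top_k, include_metadata=True)
    # "text" is an assumed metadata field name for the stored page content.
    contexts = [m.metadata.get("text", "") for m in results.matches]
    # Ask the chat model with the retrieved passages as context.
    resp = oa.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=[{"role": "user", "content": build_prompt(question, contexts)}],
    )
    return resp.choices[0].message.content
```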

Output

The Actor stores the conversation history in a dataset, together with the source documents used to answer your questions, so that you can easily verify the GPT model's answers. Each dataset item contains the following fields:

  • question: Your query.
  • answer: The generated response from the GPT model.
  • sourceDocuments: An array of objects, each containing the relevant page content and the corresponding page URL.
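The fields above make it easy to spot-check an answer against its cited pages. A small sketch that pulls the source URLs out of a dataset item (field names from this section; the sample item is abbreviated):

```python
def source_urls(item: dict) -> list[str]:
    """Extract the cited page URLs from one dataset item."""
    return [doc["metadata"]["url"] for doc in item.get("sourceDocuments", [])]


# Abbreviated sample item shaped like the Actor's dataset output.
item = {
    "question": "Why should I use Apify?",
    "answer": "...",
    "sourceDocuments": [
        {"pageContent": "...", "metadata": {"url": "https://docs.apify.com/platform"}},
        {"pageContent": "...", "metadata": {"url": "https://docs.apify.com/cli/"}},
    ],
}
print(source_urls(item))  # ['https://docs.apify.com/platform', 'https://docs.apify.com/cli/']
```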

Example Output

[
  {
    "question": "Why should I use Apify?",
    "answer": "You should consider using Apify for several reasons, particularly if you are involved in web scraping, automation, or data processing tasks:\n\n1. **Scalability**: Apify's platform is designed to handle large-scale and high-performance tasks. It allows you to easily crawl websites with millions of pages or perform complex computing jobs.\n\n2. **Serverless Cloud Programs (Actors)**: Apify provides serverless computing where you can run cloud programs, called Actors, without having to manage servers. This can significantly reduce the overhead and complexity of deploying and managing infrastructure.\n\n3. **Automation and Integration**: Apify can automate web-based tasks that you would normally do manually in a browser. It also offers built-in integrations with other services and platforms, enhancing your ability to connect and automate across various cloud services or web applications.\n\n4. **Ease of Use**: The platform provides tools like the Apify SDK, CLI, and a web console, making it accessible for developers to create, run, and manage their tasks. For web scraping specifically, the combination of Apify SDK v3 and Crawlee simplifies the development process.\n\n5. **Comprehensive API and Webhooks**: Through its RESTful API and HTTP webhooks, Apify allows programmatic interaction with its platform, enabling you to start actors, retrieve results, and integrate these capabilities into your own applications or workflows.\n\n6. **Community and Support**: Apify has a growing community and provides robust support. This can be invaluable when developing complex projects or when you need specific guidance.\n\n7. **Versatility**: Whether you're scraping data, automating tasks, or integrating disparate systems, Apify provides a versatile platform that can be adapted to a wide range of use cases.\n\nIn summary, if your projects involve web scraping, automation, or require integration with other web services, Apify offers a powerful, scalable, and user-friendly platform that can significantly enhance your productivity and capabilities.",
    "sourceDocuments": [
      {
        "pageContent": "Apify is a cloud platform that helps you build reliable web scrapers, fast, and automate anything you can do manually in a web browser.\nActors are serverless cloud programs running on the Apify platform that can easily crawl websites with millions of pages, but also perform arbitrary computing jobs such as sending emails or data transformations. They can be started manually, using our API or scheduler, and they can be easily integrated with other apps.\nLearn how to run any Actor in Apify Store or create your own. A step-by-step guides through your first steps on the Apify platform.",
        "metadata": {
          "url": "https://docs.apify.com/platform"
        }
      },
      {
        "pageContent": "Learn all about the Apify platform, all of the tools it offers, and how it can improve your overall development experience.\nThe Apify platform was built to serve large-scale and high-performance web scraping and automation needs. It provides easy access to compute instances (actors), convenient request and result storages, proxies, scheduling, webhooks and more - all accessible through the Console web interface, Apify's API, or our JavaScript and Python API clients.\nCategory outline​\nIn this category, you'll learn how to become an Apify platform developer from the ground up. From creating your first account, to developing actors, this is your one-stop-shop for understanding how the platform works, and how to work with it.\nFirst up​\nWe'll start off this category light, by showing you how to create an Apify account and get everything ready for development with the platform. Let's go!",
        "metadata": {
          "url": "https://docs.apify.com/academy/apify-platform"
        }
      },
      {
        "pageContent": "Integrations\nLearn how to integrate the Apify platform with other services, your systems, data pipelines, and other web automation workflows.\nThe whole is greater than the sum of its parts.\n👴 Aristotle\nIntegrations allow you to combine separate applications and take advantage of their combined capabilities. Automation of these online processes increases your productivity. That's why we made Apify in a way that allows you to connect it with practically any cloud service or web app and make it part of your larger projects.\nIf you are building a service and your users could benefit from integrating with Apify or wise-versa then ready the integrate with Apify.\nBuilt-in integrations​\nApify‘s RESTful API allows you to interact with the platform programmatically. HTTP webhooks notify you and your services when important events happen. By using the API, you can start Actors, retrieve their results, or basically do anything you can do on a platform UI\nApify offers easy-to-set-up solutions for common scenarios, like uploading your datasets to Google Drive when the run succeeds or creating an issue on GitHub when it fails.\nAutomation platforms​\nIf you use one of the main automation platforms, Apify support is here for you. The main advantage of these platforms is that you can integrate Apify into very complex workflows with the choice of thousands of supported services.\nThe Apify platform integrates with popular ETL and data pipeline services, enabling you to integrate Apify Actors directly into your data integration processes.\nIf you are working on an AI/LLM-related project, we recommend you look into the Langchain integration.\nOther Actors​\nExplore Apify Store for Actors that may help you with integrations, for example, MongoDB Import or MySQL Insert.",
        "metadata": {
          "url": "https://docs.apify.com/platform/integrations"
        }
      },
      {
        "pageContent": "Apify actors are cloud programs that can perform arbitrary web scraping, automation, or data processing job. They accept input, perform their job and generate output.\nWhile you can develop actors in an online IDE in Apify Console, for larger projects, it is more convenient to develop actors locally on your computer using Apify SDK and only push the actors to the Apify platform during deployment. This is where the Apify CLI comes in to allow you to quickly develop locally and then deploy to the cloud with a single command.\n# Create your first actor\napify create my-actor\n\n# Go into the project directory\ncd my-actor\n\n# Run it locally\napify run\n\n# Log into your Apify account and deploy it to Apify Platform\napify login\napify push",
        "metadata": {
          "url": "https://docs.apify.com/cli/"
        }
      },
      {
        "pageContent": "Apify SDK v3 is out 🚀\nWhat's new? Read below 👇\nFour years ago, Apify released its open-source Node.js library for web scraping and automation, Apify SDK. It became popular among the community, but there was a problem. Despite being open-source, the library's name caused users to think its features were restricted to the Apify platform, which was never the case.\nWith this in mind, we decided to split Apify SDK into two libraries, Crawlee and Apify SDK v3. Crawlee will retain all the crawling and scraping-related tools and will always strive to be the best web scraping library for its community. At the same time, Apify SDK will continue to exist, but keep only the Apify-specific features related to building actors on the Apify platform.\nHow it works now\nOutside of the Apify platform\nIf you want to use the crawling functionality of Apify SDK v2 outside of the Apify platform, head to Crawlee documentation to get started. The interface is almost exactly the same as the original SDK, but we've made a lot of improvements under the hood to improve the developer experience.\nOn the Apify platform\nIn Apify SDK v2, both the crawling and actor building logic were mixed together. This made it easy to build crawlers on the Apify platform, but confusing to build anything else. Apify SDK v3 includes only the Apify platform specific functionality. To build crawlers on the Apify platform, you need to combine it with Crawlee. Or you can use it standalone for other projects.\nBuild a crawler like you're used to\nThe following example shows how to build an SDK-v2-like crawler on the Apify platform. To use PlaywrightCrawler you need to install 3 libraries. Apify SDK v3, Crawlee and Playwright. In v2, you only needed to install Apify SDK v2 and Playwright.\nnpm install apify crawlee playwright\nDon't forget about module imports",
        "metadata": {
          "url": "https://docs.apify.com/sdk/js/"
        }
      }
    ]
  }
]
Developer
Maintained by Apify
Created in May 2024