
Pinecone Integration
This integration transfers data from Apify Actors to a Pinecone vector database and is a good starting point for question-answering, search, or retrieval-augmented generation (RAG) use cases.
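For illustration, a minimal Actor input combining the fields documented below might look like this sketch (the index name and API key are placeholders, only a subset of the available options is shown, and any additional connection settings required by your Pinecone setup are not covered here):

{
  "pineconeIndexName": "my-index",
  "embeddingsProvider": "OpenAI",
  "embeddingsConfig": { "model": "text-embedding-3-small" },
  "embeddingsApiKey": "<OPENAI_API_KEY>",
  "datasetFields": ["text"],
  "metadataDatasetFields": { "url": "url" },
  "dataUpdatesStrategy": "deltaUpdates",
  "dataUpdatesPrimaryDatasetFields": ["url"]
}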
Pinecone index name
pineconeIndexName
string (required)
Name of the Pinecone index where the data will be stored
Pinecone index namespace
pineconeIndexNamespace
string (optional)
Name of the Pinecone index namespace (partition the records in an index)
Embeddings provider (as defined in the LangChain API)
embeddingsProvider
enum (required)
Choose the embeddings provider to use for generating embeddings.
Value options:
"OpenAI", "Cohere"
Default value of this property is "OpenAI"
Configuration for embeddings provider
embeddingsConfig
object (optional)
Configure the parameters for the LangChain embedding class. Key points to consider:
- Typically, you only need to specify the model name. For example, for OpenAI, set the model name as {"model": "text-embedding-3-small"}.
- Make sure that the vector size of your embeddings matches the size of the embeddings already stored in the database.
- For examples of embedding models and details about other parameters, refer to the LangChain documentation.
Embeddings API key (whenever applicable, depends on provider)
embeddingsApiKey
string (required)
Value of the API key for the embeddings provider (if required). For example, for OpenAI it is OPENAI_API_KEY, and for Cohere it is COHERE_API_KEY.
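As a sketch, the three embeddings-related fields are typically set together. The Cohere model name below is an assumption; check the provider's documentation for current model names and make sure the resulting vector size matches your Pinecone index:

{
  "embeddingsProvider": "Cohere",
  "embeddingsConfig": { "model": "embed-english-v3.0" },
  "embeddingsApiKey": "<COHERE_API_KEY>"
}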
Dataset fields to select from the dataset results and store in the database
datasetFields
array (required)
This array specifies the dataset fields to be selected and stored in the vector store. Only the fields listed here will be included in the vector store.
For instance, when using the Website Content Crawler, you might choose to include fields such as text, url, and metadata.title in the vector store.
Default value of this property is ["text"]
Dataset fields to select from the dataset and store as metadata in the database
metadataDatasetFields
object (optional)
An object specifying which dataset fields should be selected from the dataset and stored as metadata in the vector store.
For example, when using the Website Content Crawler, you might want to store url in metadata. In this case, set the metadataDatasetFields parameter as follows: {"url": "url"}.
Custom object to be stored as metadata in the vector store database
metadataObject
object (optional)
This object allows you to store custom metadata for every item in the vector store.
For example, if you want to store the domain as metadata, use the metadataObject like this: {"domain": "apify.com"}.
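Taken together, the selection and metadata fields for a Website Content Crawler run might look like the following sketch (the field names mirror the examples above; adapt them to your dataset):

{
  "datasetFields": ["text", "metadata.title"],
  "metadataDatasetFields": { "url": "url" },
  "metadataObject": { "domain": "apify.com" }
}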
Update strategy (add, upsert, deltaUpdates (default))
dataUpdatesStrategy
enum (optional)
Choose the update strategy for the integration. The update strategy determines how the integration updates the data in the database. The available options are:
- Add data (add):
  - Always adds new records to the database.
  - No checks for existing records or updates are performed.
  - Useful when appending data without concern for duplicates.
- Upsert data (upsert):
  - Updates existing records if they match a key or identifier.
  - Inserts new records into the database if they don't already exist.
  - Ideal for ensuring the database contains the most up-to-date data, avoiding duplicates.
- Update changed data based on deltas (deltaUpdates):
  - Performs incremental updates by identifying differences (deltas) between the new dataset and the existing records.
  - Only adds new records and updates those that have changed.
  - Unchanged records are left untouched.
  - Maximizes efficiency by reducing unnecessary updates.
Select the strategy that best fits your use case.
Value options:
"add", "upsert", "deltaUpdates"
Default value of this property is "deltaUpdates"
Dataset fields to uniquely identify dataset items (only relevant when dataUpdatesStrategy is `upsert` or `deltaUpdates`)
dataUpdatesPrimaryDatasetFields
array (optional)
This array contains fields that are used to uniquely identify dataset items, which helps to handle content changes across different runs.
For instance, in a web content crawling scenario, the url field could serve as a unique identifier for each item.
Default value of this property is ["url"]
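As a sketch, a delta-update configuration keyed on the page URL combines the two parameters above like this:

{
  "dataUpdatesStrategy": "deltaUpdates",
  "dataUpdatesPrimaryDatasetFields": ["url"]
}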
Enable incremental updates for objects based on deltas (deprecated)
enableDeltaUpdates
boolean (optional)
When set to true, this setting enables incremental updates for objects in the database by comparing the changes (deltas) between the crawled dataset items and the existing objects, uniquely identified by the datasetKeysToItemId field.
The integration will only add new objects and update those that have changed, reducing unnecessary updates. The datasetFields, metadataDatasetFields, and metadataObject fields are used to determine the changes.
Default value of this property is true
Dataset fields to uniquely identify dataset items (only relevant when `enableDeltaUpdates` is enabled) (deprecated)
deltaUpdatesPrimaryDatasetFields
array (optional)
This array contains fields that are used to uniquely identify dataset items, which helps to handle content changes across different runs.
For instance, in a web content crawling scenario, the url field could serve as a unique identifier for each item.
Default value of this property is ["url"]
Delete expired objects from the database
deleteExpiredObjects
boolean (optional)
When set to true, objects that have not been crawled for a specified period are deleted from the database.
Default value of this property is true
Delete expired objects from the database after a specified number of days
expiredObjectDeletionPeriodDays
integer (optional)
This setting allows the integration to manage the deletion of objects from the database that have not been crawled for a specified period. It is typically used in subsequent runs after the initial crawl.
When the value is greater than 0, the integration checks whether objects have been seen within the last X days (determined by the expiration period). If the objects are expired, they are deleted from the database. The specific value for expiredObjectDeletionPeriodDays depends on your use case and how frequently you crawl data.
For example, if you crawl data daily, you can set expiredObjectDeletionPeriodDays to 7 days. If you crawl data weekly, you can set it to 30 days.
Default value of this property is 30
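For example, a weekly crawl could keep the deletion window comfortably longer than the crawl interval; the values below are only illustrative:

{
  "deleteExpiredObjects": true,
  "expiredObjectDeletionPeriodDays": 30
}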
Enable text chunking
performChunking
boolean (optional)
When set to true, the text will be divided into smaller chunks based on the settings provided below. Proper chunking helps optimize retrieval and ensures accurate and efficient responses.
Default value of this property is true
Maximum chunk size
chunkSize
integer (optional)
Defines the maximum number of characters in each text chunk. Choosing the right size balances between detailed context and system performance. Optimal sizes ensure high relevancy and minimal response time.
Default value of this property is 2000
Chunk overlap
chunkOverlap
integer (optional)
Specifies the number of overlapping characters between consecutive text chunks. Adjusting this helps maintain context across chunks, which is crucial for accuracy in retrieval-augmented generation systems.
Default value of this property is 0
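A chunking setup close to the defaults might look like the following sketch (the non-zero overlap is an illustrative choice, not the default):

{
  "performChunking": true,
  "chunkSize": 2000,
  "chunkOverlap": 200
}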
Use Pinecone ID prefix
usePineconeIdPrefix
boolean (optional)
When set to true, this option will use a Pinecone ID prefix instead of metadata for handling deltaUpdates. It will create a prefix in the database using the following format: item_id#chunk_id. This results in more efficient updates.
Default value of this property is false
Batch size to use when embedding the texts
embeddingBatchSize
integer (optional)
The number of texts to embed in a single batch. This setting can be used to optimize performance when embedding texts. If you receive errors from the embeddings provider, decrease this value.
Default value of this property is 1000
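If you run into provider rate limits or batch-size errors, a smaller batch can help; the values below are illustrative only:

{
  "usePineconeIdPrefix": true,
  "embeddingBatchSize": 500
}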
Pricing
Pricing model
Pay per usage
This Actor is paid per platform usage. The Actor is free to use, and you only pay for the Apify platform usage.