Long-term memory for AI: the Pinecone vector database makes it easy to build high-performance vector search applications. It is developer-friendly, fully managed, and scales easily without infrastructure hassles.
Tool to cancel a bulk import operation in Pinecone. Use when you need to stop an in-progress import before it completes.
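A minimal sketch of how such a cancellation might look with the Pinecone Python SDK (`pip install pinecone`). The API key and index name are placeholders, and the set of terminal import states is an assumption based on the bulk-import status values; verify both against your account and the current SDK docs.

```python
# States from which an import can no longer be cancelled (assumed set).
TERMINAL_STATES = {"Completed", "Failed", "Cancelled"}


def is_cancellable(status: str) -> bool:
    # An import can only be cancelled before it reaches a terminal state.
    return status not in TERMINAL_STATES


def cancel_pending_imports(index) -> list[str]:
    # Cancel every import on `index` that is still running; return their ids.
    cancelled = []
    for op in index.list_imports():
        if is_cancellable(op.status):
            index.cancel_import(id=op.id)
            cancelled.append(op.id)
    return cancelled


def cancel_all_pending(api_key: str, index_name: str) -> list[str]:
    # Network entry point: requires the "pinecone" package and a valid key.
    from pinecone import Pinecone

    index = Pinecone(api_key=api_key).Index(index_name)
    return cancel_pending_imports(index)
```

Keeping the terminal-state check separate from the network call makes the cancellation policy easy to test without touching a live index.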
Tool to chat with a Pinecone assistant and get structured responses with citations. Use when you need to query an assistant that has access to your knowledge base and want to get back answers with document references and citations.
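A hedged sketch of a structured chat call using the assistant plugin (`pip install pinecone pinecone-plugin-assistant`). The assistant name and API key are placeholders, and the assumed citation shape (each citation has a `position` plus `references` pointing at a source `file` with a `name`) should be checked against the current assistant chat response schema.

```python
def extract_citations(response) -> list[dict]:
    # Flatten a chat response's citations into {"position", "file"} records.
    # Assumed shape: citation.position, citation.references[n].file.name.
    records = []
    for citation in getattr(response, "citations", None) or []:
        for ref in citation.references:
            records.append({"position": citation.position, "file": ref.file.name})
    return records


def ask(api_key: str, assistant_name: str, question: str):
    # Network entry point: requires "pinecone" and "pinecone-plugin-assistant".
    from pinecone import Pinecone
    from pinecone_plugins.assistant.models.chat import Message

    assistant = Pinecone(api_key=api_key).assistant.Assistant(
        assistant_name=assistant_name
    )
    response = assistant.chat(messages=[Message(role="user", content=question)])
    # Return the answer text together with its document references.
    return response.message.content, extract_citations(response)
```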
Tool to chat with a Pinecone assistant through an OpenAI-compatible interface. Use when you need to interact with a Pinecone assistant that has access to indexed documents and can answer questions based on retrieved context.
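A sketch of the OpenAI-compatible path, assuming the assistant plugin exposes a `chat_completions` method that accepts OpenAI-style message dicts and returns an OpenAI-style completion (`choices[0].message.content`). Names and the method signature are assumptions to verify against the plugin's documentation.

```python
def first_reply(completion) -> str:
    # OpenAI-style completions carry the reply under choices[0].message.content.
    return completion.choices[0].message.content


def ask_openai_style(api_key: str, assistant_name: str, question: str) -> str:
    # Network entry point: requires "pinecone" and "pinecone-plugin-assistant";
    # the key and assistant name are placeholders.
    from pinecone import Pinecone

    assistant = Pinecone(api_key=api_key).assistant.Assistant(
        assistant_name=assistant_name
    )
    completion = assistant.chat_completions(
        messages=[{"role": "user", "content": question}]
    )
    return first_reply(completion)
```

Because the response mirrors the OpenAI chat-completions shape, existing OpenAI client code can often be pointed at the assistant with minimal changes.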
Tool to configure an existing Pinecone index, including pod type, replicas, deletion protection, and tags. Use when you need to scale an index vertically or horizontally, enable/disable deletion protection, or update tags. The change is asynchronous; check index status for completion.
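A guarded configuration sketch using the SDK's `configure_index`. The vertical-scaling rule encoded below (pod size may grow within the same family, e.g. `p1.x1` to `p1.x2`, but never shrink or change family) reflects Pinecone's documented pod-scaling constraints; the index name, key, and tag values are placeholders.

```python
POD_SIZES = ("x1", "x2", "x4", "x8")


def valid_vertical_scale(current: str, target: str) -> bool:
    # Pod size can only grow within the same pod family.
    cur_family, _, cur_size = current.partition(".")
    new_family, _, new_size = target.partition(".")
    return (
        cur_family == new_family
        and cur_size in POD_SIZES
        and new_size in POD_SIZES
        and POD_SIZES.index(new_size) >= POD_SIZES.index(cur_size)
    )


def scale_index(api_key: str, name: str, pod_type: str, replicas: int) -> None:
    # Asynchronous: configure_index returns before the change completes,
    # so poll describe_index until the status reports ready.
    from pinecone import Pinecone

    pc = Pinecone(api_key=api_key)
    pc.configure_index(
        name,
        pod_type=pod_type,             # vertical scaling
        replicas=replicas,             # horizontal scaling
        deletion_protection="enabled", # guard against accidental deletes
        tags={"env": "prod"},          # placeholder tag
    )
```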
Tool to create a new Pinecone assistant for RAG (Retrieval-Augmented Generation) applications. Use when you need to initialize a new assistant that can receive uploaded files and support chat interactions.
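A creation sketch via the assistant plugin's `create_assistant`. The naming rule below is an assumption mirroring Pinecone index names (lowercase letters, digits, hyphens); the `instructions` and `timeout` parameters, key, and name are likewise to be verified against the plugin docs.

```python
import re

# Assumed naming rule, mirroring index names: lowercase alphanumerics and
# hyphens, starting and ending with an alphanumeric character.
NAME_RE = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")


def valid_assistant_name(name: str) -> bool:
    return bool(NAME_RE.match(name))


def create_assistant(api_key: str, name: str, instructions: str):
    # Network entry point: requires "pinecone" and "pinecone-plugin-assistant".
    from pinecone import Pinecone

    if not valid_assistant_name(name):
        raise ValueError(f"invalid assistant name: {name!r}")
    pc = Pinecone(api_key=api_key)
    return pc.assistant.create_assistant(
        assistant_name=name,
        instructions=instructions,  # system-style guidance for answers
        timeout=30,                 # wait up to 30s for the assistant to be ready
    )
```

Once created, the assistant can have files uploaded to it and then be queried through the chat interfaces described above.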
Tool to create a backup of a Pinecone index for disaster recovery and version control. Use when you need to preserve the current state of an index, including its vectors, metadata, and configuration.
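A backup sketch under the assumption that the SDK exposes a `create_backup(index_name=..., backup_name=..., description=...)` call, as in recent versions; the key, index name, and description are placeholders. A timestamped name keeps successive backups sortable and unambiguous.

```python
from datetime import datetime, timezone


def backup_name(index_name: str, when: datetime) -> str:
    # Sortable, timestamped name, e.g. "my-index-20240101-120000".
    return f"{index_name}-{when:%Y%m%d-%H%M%S}"


def back_up_index(api_key: str, index_name: str) -> None:
    # Network entry point: snapshots the index's vectors, metadata,
    # and configuration (assumed SDK call -- verify the signature).
    from pinecone import Pinecone

    pc = Pinecone(api_key=api_key)
    pc.create_backup(
        index_name=index_name,
        backup_name=backup_name(index_name, datetime.now(timezone.utc)),
        description="disaster-recovery snapshot",  # placeholder description
    )
```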