POST /bulk/imports

Start an asynchronous import of vectors from object storage into an index.

For guidance and examples, see Import data.

Request headers

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| Content-Type | String | Yes | The media type of the request body. Default value: `"application/json"` |
| X-Pinecone-Api-Version | String | Yes | The date-based API version for this request. Default value: `"2025-10"` |

Request body fields

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| uri | String | Yes | The URI of the bucket (or container) and import directory containing the namespaces and Parquet files you want to import. For example, `s3://BUCKET_NAME/IMPORT_DIR` for Amazon S3, `gs://BUCKET_NAME/IMPORT_DIR` for Google Cloud Storage, or `https://STORAGE_ACCOUNT.blob.core.windows.net/CONTAINER_NAME/IMPORT_DIR` for Azure Blob Storage. For more information, see Import records. |
| errorMode | Object | No | Indicates how to respond to errors during the import process. |
| errorMode.onError | String | No | Whether to `abort` or `continue` when an error is encountered during the import. |
| integrationId | String | No | The ID of the storage integration that should be used to access the data. |
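As a minimal sketch, a request with the headers and body fields above could be built and sent like this. The index host and API key are placeholders (not from this reference), and the `Api-Key` authentication header is assumed from Pinecone's general API conventions:

```python
import json
import urllib.request

# Placeholders -- substitute your own index host, API key, and bucket path.
INDEX_HOST = "https://YOUR_INDEX_HOST"
API_KEY = "YOUR_API_KEY"

# Request body: uri is required; errorMode is optional.
# "continue" keeps importing past bad records; "abort" stops on the first error.
payload = {
    "uri": "s3://BUCKET_NAME/IMPORT_DIR",
    "errorMode": {"onError": "continue"},
}

req = urllib.request.Request(
    f"{INDEX_HOST}/bulk/imports",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Pinecone-Api-Version": "2025-10",
        "Api-Key": API_KEY,  # assumed auth header; see Pinecone auth docs
    },
    method="POST",
)

# urllib.request.urlopen(req) would start the asynchronous import;
# the network call is omitted here so the sketch stays self-contained.
```

Because the import runs asynchronously, the response returns immediately with an identifier for the import operation rather than waiting for the data to load.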
