POST /v2/embed

This endpoint returns text embeddings. An embedding is a list of floating-point numbers that captures semantic information about the text it represents.

Embeddings can be used to build text classifiers and to power semantic search. To learn more about embeddings, see the embedding page.

To learn more about how to use the embedding model, see the Semantic Search Guide.

Request headers

Name Type Required Description

Content-Type String Yes

The media type of the request body.

Default value: "application/json"

X-Client-Name String No

The name of the project that is making the request.

Request body fields

Name Type Required Description
inputs[] Array No

An array of inputs for the model to embed. Maximum number of inputs per call is 96. An input can contain a mix of text and image components.

inputs[].content[] Array Yes

An array of objects containing the input data for the model to embed.

input_type String Yes

Specifies the type of input passed to the model. Required for embedding models v3 and higher.

  • "search_document": Used for embeddings stored in a vector database for search use-cases.
  • "search_query": Used for embeddings of search queries run against a vector DB to find relevant documents.
  • "classification": Used for embeddings passed through a text classifier.
  • "clustering": Used for embeddings run through a clustering algorithm.
  • "image": Used for embeddings with image input.

Possible values:

  • "search_document"
  • "classification"
  • "search_query"
  • "image"
  • "clustering"
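As an illustration, a mixed text-and-image input tagged with an `input_type` might be assembled as below. This is a hedged sketch: the exact shape of the `content` objects and the model ID are assumptions, and the data URI is a placeholder, not a real image.

```python
import json

# Sketch of an `inputs` payload: one input mixing a text component and
# an image component, tagged for storage in a search index.
# Field shapes inside `content` are assumptions; verify against the API.
payload = {
    "model": "embed-v4.0",  # assumed model ID; check the available models
    "input_type": "search_document",
    "inputs": [
        {
            "content": [
                {"type": "text", "text": "A photo of a red bicycle"},
                {
                    "type": "image_url",
                    # Placeholder data URI; must be image/jpeg or image/png,
                    # at most 5MB, per the constraints described below.
                    "image_url": {"url": "data:image/png;base64,..."},
                },
            ]
        }
    ],
}

print(json.dumps(payload, indent=2))
```

Note that the whole call is limited to 96 inputs, so batching logic belongs in the caller.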
embedding_types[] Array No

Specifies the types of embeddings you want to get back. Can be one or more of the following types.

  • "float": Use this when you want to get back the default float embeddings. Supported with all Embed models.
  • "int8": Use this when you want to get back signed int8 embeddings. Supported with Embed v3.0 and newer Embed models.
  • "uint8": Use this when you want to get back unsigned int8 embeddings. Supported with Embed v3.0 and newer Embed models.
  • "binary": Use this when you want to get back signed binary embeddings. Supported with Embed v3.0 and newer Embed models.
  • "ubinary": Use this when you want to get back unsigned binary embeddings. Supported with Embed v3.0 and newer Embed models.

Default value: [ "float" ]
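A request can ask for several embedding types at once, for example float embeddings for accuracy plus int8 for a memory-efficient index. A minimal sketch, assuming a v3.0 model ID (int8 requires Embed v3.0 or newer, per the list above):

```python
import json

# Request both float and int8 embeddings for the same texts in one call.
body = {
    "model": "embed-english-v3.0",  # assumed model ID
    "texts": ["hello world"],
    "input_type": "classification",
    "embedding_types": ["float", "int8"],
}

print(json.dumps(body))
```

The response would then carry one set of embeddings per requested type.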

images[] Array No

An array of image data URIs for the model to embed. Maximum number of images per call is 1.

The image must be a valid data URI in either image/jpeg or image/png format, with a maximum size of 5MB.

Image embeddings are supported with Embed v3.0 and newer models.

texts[] Array No

An array of strings for the model to embed. Maximum number of texts per call is 96.

truncate String No

One of NONE|START|END to specify how the API will handle inputs longer than the maximum token length.

Passing START will discard the start of the input. END will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.

If NONE is selected, when the input exceeds the maximum input token length an error will be returned.

Possible values:

  • "START"
  • "END"
  • "NONE"

Default value: "END"

max_tokens Integer No

The maximum number of tokens to embed per input. If the input text is longer than this, it will be truncated according to the truncate parameter.

output_dimension Integer No

The number of dimensions of the output embedding. This is only available for embed-v4 and newer models. Possible values are 256, 512, 1024, and 1536. The default is 1536.

model String Yes

ID of one of the available Embedding models.
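Putting the headers and body fields together, a full request might look like the following sketch. The base URL and the Authorization header are assumptions not stated on this page, and no request is actually sent:

```python
import json
import urllib.request

# Minimal end-to-end sketch of POST /v2/embed using only the standard
# library. BASE_URL and the Bearer auth scheme are assumptions.
BASE_URL = "https://api.cohere.com"  # assumed host
API_KEY = "YOUR_API_KEY"  # placeholder

body = {
    "model": "embed-english-v3.0",  # assumed model ID
    "texts": ["The quick brown fox"],
    "input_type": "search_query",
    "truncate": "END",
}

req = urllib.request.Request(
    f"{BASE_URL}/v2/embed",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
        "X-Client-Name": "my-search-app",  # optional project name
    },
    method="POST",
)

# urllib.request.urlopen(req)  # uncomment to actually send the request
print(req.get_method(), req.full_url)
```

The `X-Client-Name` header is optional; dropping it leaves the request valid.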

How to start integrating

  1. Add an HTTP Task to your workflow definition.
  2. Search for the API you want to integrate with and click its name.
    • This loads the API reference documentation and prepares the HTTP request settings.
  3. Click Test request to run your request against the API and see its response.