POST /responses
Servers
- https://api.openai.com/v1
Request headers
Name | Type | Required | Description |
---|---|---|---|
Content-Type | String | Yes | The media type of the request body. Default value: "application/json" |
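
The sketch below shows a minimal POST /responses call with the required Content-Type header and the two required body fields (model and input) documented in the next section. It is a minimal sketch, assuming Python with the requests library and a bearer API key in the OPENAI_API_KEY environment variable; the Authorization header and the example model ID are assumptions, not part of this reference.

```python
# Minimal sketch of POST /responses, assuming the `requests` library and a
# bearer token in OPENAI_API_KEY. The Authorization header and model ID are
# assumptions; only Content-Type is documented in this section.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/responses",
    headers={
        "Content-Type": "application/json",                        # required header
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}", # assumed auth scheme
    },
    json={
        "model": "gpt-4o",                                          # hypothetical model ID
        "input": [{"role": "user", "content": "Say hello in one sentence."}],
    },
)
resp.raise_for_status()
print(resp.json())
```
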
Request body fields
Name | Type | Required | Description |
---|---|---|---|
stream | Boolean | No | If set to true, the model response data will be streamed to the client as it is generated, using server-sent events (a hedged streaming sketch follows this table). Default value: false |
input | Object | Yes | Text, image, or file inputs to the model, used to generate a response. |
temperature | Number | No | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. Default value: 1 |
previous_response_id | String | No | The unique ID of the previous response to the model. Use this to create multi-turn conversations (see the multi-turn sketch after this table). Learn more about conversation state. |
tools[] | Array | No | An array of tools the model may call while generating a response. You can specify which tool to use by setting the `tool_choice` parameter. The two categories of tools you can provide the model are built-in tools (such as web search and file search) and custom function calls that you define (a function-calling sketch follows this table). |
metadata | Object | No | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. |
parallel_tool_calls | Boolean | No | Whether to allow the model to run tool calls in parallel. Default value: true |
reasoning | Object | No | o-series models only. Configuration options for reasoning models. |
reasoning.generate_summary | String | No | Deprecated: use `reasoning.summary` instead. A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of `auto`, `concise`, or `detailed`. |
reasoning.effort | String | No | o-series models only. Constrains effort on reasoning for reasoning models. Currently supported values are `low`, `medium`, and `high`. Default value: "medium" |
reasoning.summary | String | No | A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. One of `auto`, `concise`, or `detailed`. |
truncation | String | No | The truncation strategy to use for the model response. Possible values: `auto`, `disabled`. Default value: "disabled" |
tool_choice | Object | No | How the model should select which tool (or tools) to use when generating a response. See the `tools` parameter for how to specify which tools the model can call. |
top_p | Number | No | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. Default value: 1 |
model | Object | Yes | The model used to generate the response. |
max_output_tokens | Integer | No | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. |
text | Object | No | Configuration options for a text response from the model. Can be plain text or structured JSON data (see the structured-output sketch after this table). |
text.format | Object | No | An object specifying the format that the model must output. Configuring `{ "type": "json_schema" }` enables Structured Outputs, which ensures the model will match your supplied JSON schema. The default format is `{ "type": "text" }` with no additional options. Not recommended for gpt-4o and newer models: setting to `{ "type": "json_object" }` enables the older JSON mode, which ensures the message the model generates is valid JSON. |
include[] | Array | No | Specify additional output data to include in the model response. |
user | String | No | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. |
service_tier | String | No | Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service. When this parameter is set, the response body will include the `service_tier` utilized. Possible values include `auto` and `default`. Default value: "auto" |
store | Boolean | No | Whether to store the generated model response for later retrieval via API. Default value: true |
instructions | String | No | Inserts a system (or developer) message as the first item in the model's context. When used along with `previous_response_id`, the instructions from a previous response will not be carried over to the next response. |
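
When `stream` is set to true, the response arrives as server-sent events. Below is a hedged sketch of a minimal consumer, again assuming the requests library and an assumed Authorization header; the exact event payload format is not documented in this section, so the loop simply prints raw SSE lines.

```python
# Hedged streaming sketch: "stream": true in the body plus requests' stream=True,
# reading the server-sent event stream line by line. Event payload shapes are
# not documented here, so parsing is intentionally minimal.
import os
import requests

with requests.post(
    "https://api.openai.com/v1/responses",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}", # assumed auth scheme
    },
    json={
        "model": "gpt-4o",                                          # hypothetical model ID
        "input": [{"role": "user", "content": "Write a short haiku."}],
        "stream": True,                                             # enable server-sent events
    },
    stream=True,                                                    # keep the connection open
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:                                                    # SSE frames are blank-line separated
            print(line)                                             # e.g. "event: ..." / "data: {...}"
```
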
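The `previous_response_id` field chains requests into a multi-turn conversation. The sketch below works under the same assumptions as above; reading an "id" field from the first response body is itself an assumption, since the response schema is not documented in this section.

```python
# Hedged multi-turn sketch: pass the first response's ID back as
# previous_response_id. Reading first["id"] assumes the response body carries
# an "id" field, which is not documented in this section.
import os
import requests

API_URL = "https://api.openai.com/v1/responses"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",      # assumed auth scheme
}

def create_response(payload: dict) -> dict:
    """POST a request body to /responses and return the parsed JSON."""
    resp = requests.post(API_URL, headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()

first = create_response({
    "model": "gpt-4o",                                              # hypothetical model ID
    "input": [{"role": "user", "content": "Pick a random city."}],
})

second = create_response({
    "model": "gpt-4o",
    "input": [{"role": "user", "content": "What country is it in?"}],
    "previous_response_id": first["id"],                            # assumed response field
})
print(second)
```
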
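For `tools`, `tool_choice`, and `parallel_tool_calls`, the function-calling sketch below registers a single custom function. The tool definition schema is not documented in this section; the flattened `{"type": "function", ...}` shape and the "auto" tool_choice value are assumptions that may need adjusting.

```python
# Hedged function-calling sketch. The tool definition shape and the "auto"
# tool_choice value are assumptions; only the field names tools, tool_choice,
# and parallel_tool_calls come from the reference above.
import os
import requests

payload = {
    "model": "gpt-4o",                                              # hypothetical model ID
    "input": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",                                     # custom function tool (assumed shape)
            "name": "get_weather",                                  # hypothetical function
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    "tool_choice": "auto",                                          # let the model decide (assumed value)
    "parallel_tool_calls": True,                                    # documented default
}

resp = requests.post(
    "https://api.openai.com/v1/responses",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",  # assumed auth scheme
    },
    json=payload,
)
resp.raise_for_status()
print(resp.json())
```
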
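Finally, a structured-output sketch for `text.format`. The reference above only states that `{ "type": "json_schema" }` enables Structured Outputs; the surrounding "name", "schema", and "strict" keys are assumptions.

```python
# Hedged structured-output sketch for text.format. Only the "type":
# "json_schema" key is stated in the reference above; the "name", "schema",
# and "strict" keys are assumptions and may need adjusting.
import os
import requests

payload = {
    "model": "gpt-4o",                                              # hypothetical model ID
    "input": [{"role": "user", "content": "Extract the city from: 'I live in Oslo.'"}],
    "text": {
        "format": {
            "type": "json_schema",                                  # enables Structured Outputs
            "name": "extracted_city",                               # assumed key
            "schema": {                                             # assumed key: a standard JSON Schema
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
            "strict": True,                                         # assumed key
        }
    },
}

resp = requests.post(
    "https://api.openai.com/v1/responses",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",  # assumed auth scheme
    },
    json=payload,
)
resp.raise_for_status()
print(resp.json())
```
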
How to start integrating
- Add an HTTP Task to your workflow definition.
- Search for the API you want to integrate with and click its name.
- This loads the API reference documentation and prepares the HTTP request settings.
- Click Test request to run your request against the API and see the API's response.