AI Chat Task
The AI Chat task is a system task that integrates AI-powered chat completion capabilities. It uses pre-trained language models, such as GPT (Generative Pre-trained Transformer), to generate natural, human-like responses to user inputs.
There are several tabs available where you can configure an AI Chat task:
- General tab
- Input Parameters tab
- Configuration tab
- Retry tab
- Routing tab
The configuration of the General, Input Parameters, and Routing tabs is similar to that of a user task. We discuss only the Configuration tab here.
Configuration tab
From the Configuration tab, set up the following fields:
| Field | Required | Description |
|---|---|---|
| Credentials | Yes | The saved credentials to use for the chat model. |
| Chat model | Yes | The chat model configuration, provided as a JSON object. |
Chat model field top-level fields
| Field | Type | Required | Description |
|---|---|---|---|
| `userMessages` | JSON Array | Yes | User-provided prompts or additional context information. Provide either user messages, system messages, or both. |
| `systemMessages` | JSON Array | Yes | Developer-provided instructions that the model should follow, regardless of messages sent by the user. Provide either user messages, system messages, or both. |
| `provider` | String | Yes | The large language model (LLM) provider to use, for example OpenAI or AzureOpenAI. |
| `model` | String | Yes | The model to use from the LLM provider. |
|  | String | No | The endpoint of the Azure OpenAI service. Required if the provider is AzureOpenAI. |
| `temperature` | Number | No | Scales the randomness of token selection. Lower values are more deterministic; higher values are more varied. |
|  | Number | No | Samples from the smallest set of tokens whose cumulative probability is at least p, ignoring the long tail. We generally recommend altering this or temperature, but not both. |
|  | Integer | No | Samples only from the top K most probable tokens at each step, discarding the rest. |
| `responseJSONSchema` | JSON Object | No | Ensures the model always generates responses that adhere to the supplied JSON Schema for the supported provider. SimWorkflow adopts the 2020-12 version of the specification. |
|  | JSON Array | No | A list of MCP (Model Context Protocol) servers the LLM can use. See the MCP Server JSON object top-level fields below for details. |
MCP Server JSON object top-level fields
| Field | Type | Required | Description |
|---|---|---|---|
|  | Credentials Id | No | The saved credentials to use for the MCP server. |
|  | String | Yes | The MCP server transport type. Supported types: StreamableHTTP, SSE. |
|  | String | Yes | The base URL of the MCP server. |
|  | String | No | The endpoint to use for the MCP server. A default value is used if none is specified. |
|  | Integer | No | The timeout for an MCP server request, in seconds. Default is 20 seconds. |
|  | JSON Array | No | A specific subset of tools to expose from the MCP server to your agent, rather than exposing all of them. |
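As a rough sketch only, an entry in the MCP servers list combines a transport type and base URL with an optional subset of tools. The key names used below (`type`, `url`, `tools`) are assumptions chosen for illustration and may not match the product's actual field names:

```json
{
  "type": "StreamableHTTP",
  "url": "https://mcp.example.com",
  "tools": ["searchCatalog", "getProductDetails"]
}
```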
Note:
- The following fields in a chat model can reference task inputs using JSONPath expressions; the expression must be placed inside a `${ }` placeholder (see the example after this note):
  - `userMessages`
  - `systemMessages`
  - `model`
- To escape a JSONPath expression, prefix it with an extra `$` character, e.g., `$${}`. See JSONPath for more information about JSONPath.
- Alternatively, JSONata expressions can be used instead of JSONPath by prefixing the expression with `jsonata:` within `${ }` placeholders. See JSONata for more information about JSONata.
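For instance, assuming the task defines input parameters named productUrl and modelName (illustrative names only), the placeholders could be used as follows, with a JSONPath expression in the user message and a JSONata expression for the model:

```json
{
  "userMessages": [
    "Summarize the product page at ${$.productUrl}"
  ],
  "provider": "OpenAI",
  "model": "${jsonata:modelName}"
}
```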
Here is an example AI Chat model:
```json
{
"userMessages": [
"Extract price as number and availability as boolean from https://www.costco.com.au/Appliances/Kitchen-Appliances/Air-Fryers-Deep-Fryers/Cuisinart-Express-Air-Fry-Oven-TOA-65XA/p/88195 and return a JSON payload"
],
"provider": "OpenAI",
"model": "gpt-5-nano",
"responseJSONSchema": {
"type": "object",
"properties": {
"inStock": {
"type": "boolean"
},
"price": {
"type": "number"
}
},
"required": [
"inStock",
"price"
],
"additionalProperties": false
}
}
```
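Because a responseJSONSchema is supplied, the generated result is constrained to that schema, so a successful run returns an object shaped like the following (the values are illustrative):

```json
{
  "inStock": true,
  "price": 199.99
}
```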
The Test AI chat button lets you test the AI Chat model configuration with the task's input parameters JSON data.
Response for AI chat model
The response for the AI chat model is a JSON Object with the following fields:

| Field | Type | Description |
|---|---|---|
|  | JSON Object | The AI chat model result. |
|  | JSON Object | The AI chat model usage JSON Object. |
Usage JSON object top-level fields
| Field | Type | Description |
|---|---|---|
|  | Number | The number of tokens used in the prompt of the AI request. |
|  | Number | The number of tokens returned in the generation of the AI's response. |
|  | Number | The total number of tokens from both the prompt of the AI request and the generation of the AI's response. |
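For example, a request that uses 120 prompt tokens and generates 45 response tokens reports a total of 165 tokens.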