AI Chat Task

The AI Chat task is a system task that integrates AI-powered chat completion capabilities. It uses pre-trained language models, such as GPT (Generative Pre-trained Transformer), to generate natural, human-like responses to user inputs.

There are several tabs available where you can configure an AI Chat task.

The configuration of the General, Input Parameters, and Routing tabs is similar to that of a user task, so only the Configuration tab is discussed here.

Configuration tab

From the Configuration tab, set up the following fields:

| Field | Required | Description |
|---|---|---|
| Credentials | Yes | The saved credentials to use for the chat model. |
| Chat model | Yes | A JSON object that configures the chat model. |

Chat model JSON object top-level fields

| Field | Type | Required | Description |
|---|---|---|---|
| userMessages | JSON Array | Yes* | User-provided prompts or additional context information. |
| systemMessages | JSON Array | Yes* | Developer-provided instructions that the model should follow, regardless of messages sent by the user. |
| provider | String | Yes | The large language model (LLM) provider to use. Supported providers: AmazonBedrockConverse, AnthropicClaude, AzureOpenAI, Cerebras, DeepSeek, GoogleGemini, Groq, MistralAI, NVIDIA, Ollama, OpenAI, Perplexity. |
| model | String | Yes | The model to use from the LLM provider. |
| endpoint | String | No | Required if the provider is AzureOpenAI. The endpoint from the Azure OpenAI Keys and Endpoint section under Resource Management. |
| temperature | Number | No | Scales the randomness of token selection. Lower values are more deterministic; higher values are more varied. |
| topP | Number | No | Samples from the smallest set of tokens whose cumulative probability is at least p, ignoring the long tail. We generally recommend altering this or temperature, but not both. |
| topK | Integer | No | Samples only from the top K most probable tokens at each step, discarding the rest. |
| responseJSONSchema | JSON Object | No | Ensures the model always generates responses that adhere to the supplied JSON Schema, for the supported providers. SimWorkflow adopts the 2020-12 version of JSON Schema. |
| mcpServers | JSON Array | No | A list of MCP (Model Context Protocol) servers the LLM can use. See the top-level fields of the MCP Server JSON object for details. |

* Provide user messages, system messages, or both.
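For example, a minimal chat model configuration might combine a system message with a user message and a low temperature for more deterministic output (the provider and model shown here are illustrative):

{
  "systemMessages": [
    "You are a concise assistant. Answer in one sentence."
  ],
  "userMessages": [
    "Summarize the benefits of structured logging."
  ],
  "provider": "OpenAI",
  "model": "gpt-5-nano",
  "temperature": 0.2
}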

MCP Server JSON object top-level fields

| Field | Type | Required | Description |
|---|---|---|---|
| credentials | Credentials Id | No | The saved credentials to use for the MCP server. |
| transportType | String | Yes | The MCP server transport type. Supported types: StreamableHTTP, SSE. |
| baseUrl | String | Yes | The base URL of the MCP server. |
| endpoint | String | No | The endpoint to use for the MCP server. Default is /mcp for the StreamableHTTP transport type and /sse for the SSE transport type. |
| requestTimeoutSeconds | Integer | No | The timeout for an MCP server request, in seconds. Default is 20 seconds. |
| toolFilter | JSON Array | No | Selects a specific subset of tools from the MCP server, rather than exposing all of them to your agent. |
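For example, an mcpServers entry for a StreamableHTTP server that exposes only two tools might look like the following sketch (the URL and tool names are illustrative, and it assumes toolFilter takes tool names):

"mcpServers": [
  {
    "transportType": "StreamableHTTP",
    "baseUrl": "https://mcp.example.com",
    "requestTimeoutSeconds": 30,
    "toolFilter": [
      "searchProducts",
      "getProductDetails"
    ]
  }
]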


Here is an example AI Chat model:

{
  "userMessages": [
    "Extract price as number and availability as boolean from https://www.costco.com.au/Appliances/Kitchen-Appliances/Air-Fryers-Deep-Fryers/Cuisinart-Express-Air-Fry-Oven-TOA-65XA/p/88195 and return a JSON payload"
  ],
  "provider": "OpenAI",
  "model": "gpt-5-nano",
  "responseJSONSchema": {
    "type": "object",
    "properties": {
      "inStock": {
        "type": "boolean"
      },
      "price": {
        "type": "number"
      }
    },
    "required": [
      "inStock",
      "price"
    ],
    "additionalProperties": false
  }
}

The Test AI chat button allows you to test the AI Chat model configuration with the task's input parameters JSON data.

Response for AI chat model

The response for the AI chat model is a JSON object with the following fields:

| Field | Type | Description |
|---|---|---|
| result | JSON Object | The AI chat model result. |
| usage | JSON Object | The AI chat model usage JSON object. |

Usage JSON object top-level fields

| Field | Type | Description |
|---|---|---|
| promptTokens | Number | The number of tokens used in the prompt of the AI request. |
| completionTokens | Number | The number of tokens returned in the generation of the AI's response. |
| totalTokens | Number | The total number of tokens from both the prompt of the AI request and the generation of the AI's response. |
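Putting these fields together, a response for the earlier example configuration might look like the following (the result values and token counts are illustrative):

{
  "result": {
    "inStock": true,
    "price": 299.99
  },
  "usage": {
    "promptTokens": 1250,
    "completionTokens": 42,
    "totalTokens": 1292
  }
}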