POST /buckets

Use this method to create a new bucket for a particular table or dataset. Use version 2 of the REST API to associate the bucket with a table, and version 1 of the REST API to associate the bucket with a dataset. You must supply a JSON-formatted body with your request.

When you create a bucket for a particular table or dataset, you must describe the structure of the data files you'll upload to the bucket. The file structure:

  • Specifies a particular table to associate with the bucket.
  • Describes the file schema in the form of parsing options and field information.

The schema you specify must match the schema of the files you upload to the bucket.
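For example, a request body for a delimited (CSV) file could be sketched as follows. All names, the dataset ID, and the field definitions are illustrative placeholders, and the `parseFormat` date-pattern syntax is an assumption; substitute values from your own tenant:

```python
import json

# Illustrative POST /buckets body; IDs, names, and field definitions
# are placeholders, not values from a real tenant.
bucket_request = {
    "name": "my_csv_bucket",
    "description": "Bucket for a delimited employee file",
    "targetDataset": {"id": "datasetId123"},  # hypothetical dataset ID
    "schema": {
        "parseOptions": {
            "fieldsDelimitedBy": ",",
            "fieldsEnclosedBy": "\"",
            "headerLinesToIgnore": 1,
        },
        "fields": [
            {"name": "employee_id", "ordinal": 1,
             "type": {"id": "Schema_Field_Type=Text"}},
            {"name": "hire_date", "ordinal": 2,
             "type": {"id": "Schema_Field_Type=Date"},
             "parseFormat": "yyyy-MM-dd"},  # assumed date pattern
            {"name": "salary", "ordinal": 3,
             "type": {"id": "Schema_Field_Type=Numeric"},
             "precision": 12, "scale": 2},
        ],
    },
}

print(json.dumps(bucket_request, indent=2))
```

The individual fields are described in the request body tables below.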


Request headers

Name Type Required Description
Content-Type String Yes The media type of the request body.

Default value: "application/json"

Request body fields

Name Type Required Description
id String No
documentation String No
name String No
description String No
targetDataset Object No
targetDataset.id String No
targetDataset.descriptor String No
schema Object No
schema.parseOptions Object No
schema.parseOptions.fieldsEnclosedBy String No
schema.parseOptions.ignoreTrailingExtraFields Boolean No
schema.parseOptions.recordsDelimitedBy String No
schema.parseOptions.fieldsEnclosingCharacterEscapedBy String No
schema.parseOptions.ignoreTrailingWhitespacesInQuotes String No
schema.parseOptions.commentCharacter String No
schema.parseOptions.charset String No
schema.parseOptions.headerLinesToIgnore Integer No
schema.parseOptions.ignoreLeadingWhitespacesInQuotes String No
schema.parseOptions.fieldsDelimitedBy String No
schema.parseOptions.ignoreTrailingMissingFields Boolean No
schema.parseOptions.type String No
schema.parseOptions.ignoreTrailingWhitespaces Boolean No
schema.parseOptions.ignoreLeadingWhitespaces Boolean No
schema.fields[] Array No
schema.fields[].id String No
schema.fields[].name String No
schema.fields[].parseFormat String No

Required for Date fields. The Date format that the values in this field must match to be recognized as a Date field.

schema.fields[].context Object No
schema.fields[].context.id String No
schema.fields[].context.descriptor String No
schema.fields[].description String No

The description of the field. The description must contain fewer than 1,000 characters.

schema.fields[].ordinal Integer No

The order of the field in the file (index). The ordinal values:

  • Must start at 1.
  • Must be contiguous; no numbers can be skipped.
  • Have a maximum value of 1,000.
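The ordinal rules above can be checked locally before you submit the schema. A minimal sketch (the helper name is illustrative):

```python
def ordinals_valid(fields):
    """Return True if field ordinals start at 1, are contiguous with no
    skipped numbers, and don't exceed the documented maximum of 1,000."""
    ordinals = sorted(f["ordinal"] for f in fields)
    return (
        bool(ordinals)
        and ordinals == list(range(1, len(ordinals) + 1))
        and ordinals[-1] <= 1000
    )

print(ordinals_valid([{"ordinal": 2}, {"ordinal": 1}]))  # True: 1, 2 contiguous
print(ordinals_valid([{"ordinal": 1}, {"ordinal": 3}]))  # False: skips 2
```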
schema.fields[].type Object No
schema.fields[].type.id String No

Valid values:

  • "Schema_Field_Type=Date"
  • "Schema_Field_Type=Currency"
  • "Schema_Field_Type=Multi_Instance"
  • "Schema_Field_Type=Text"
  • "Schema_Field_Type=Numeric"
  • "Schema_Field_Type=Integer"
  • "Schema_Field_Type=Boolean"
  • "<id>"
  • "Schema_Field_Type=Decimal"
  • "Schema_Field_Type=Instance"
  • "Schema_Field_Type=Long"
schema.fields[].type.name String No
  • Boolean - True/False value
  • Numeric - Numeric
  • Text - String
  • Date - Datetime
  • Currency - Currency
  • Instance - ID of an instance
  • Multi_Instance - List of IDs of instances. Use this format: "type": { "id": "Schema_Field_Type=type" }

Valid values:

  • "Date"
  • "Instance"
  • "Numeric"
  • "Text"
  • "Multi_Instance"
  • "Currency"
  • "Boolean"
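Since each type name maps to a type ID in the "Schema_Field_Type=type" format shown above, a small helper can build the type object. This sketch validates against the type names listed here; the ID list above also includes Integer, Decimal, and Long, so extend the set if your tenant accepts those names:

```python
# Documented type names, mapped to the "Schema_Field_Type=<name>" ID format.
VALID_TYPE_NAMES = {"Date", "Instance", "Numeric", "Text",
                    "Multi_Instance", "Currency", "Boolean"}

def field_type(name):
    """Build a schema.fields[].type object from a documented type name."""
    if name not in VALID_TYPE_NAMES:
        raise ValueError(f"unknown field type: {name}")
    return {"id": f"Schema_Field_Type={name}"}

print(field_type("Multi_Instance"))  # {'id': 'Schema_Field_Type=Multi_Instance'}
```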
schema.fields[].precision Integer No

Required for Numeric fields. The maximum number of digits in a Numeric field. This value includes all digits to the left and right of the decimal point. The maximum value is 38.

schema.fields[].scale Integer No

Required for Numeric fields. The maximum number of digits to the right of the decimal point in a Numeric field. This value must be less than the precision value.
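The precision and scale constraints for Numeric fields can also be verified locally. A quick sketch of the documented bounds (the helper name is illustrative):

```python
def numeric_bounds_valid(precision, scale):
    """Check the documented constraints for a Numeric field:
    precision is at most 38, and scale is strictly less than precision."""
    return 1 <= precision <= 38 and 0 <= scale < precision

print(numeric_bounds_valid(12, 2))    # True
print(numeric_bounds_valid(38, 38))   # False: scale must be less than precision
print(numeric_bounds_valid(40, 2))    # False: precision capped at 38
```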

schema.fields[].businessObject Object No
schema.fields[].businessObject.id String No
schema.fields[].businessObject.descriptor String No
displayName String No
state Object No
state.id String No
state.descriptor String No
  • New - When you first create a bucket, the state is New. You can load files into a bucket in the New state.
  • Queued - Workday changes the state to Queued after you run the complete call on a bucket, but Workday is busy processing other buckets. When a processing slot becomes available, Workday changes the state back to Processing.
  • Processing - Workday changes the state to Processing immediately after you run the complete call on a bucket. If a processing slot is available, then the state remains as Processing, and Workday starts running the background processes to move the data into the table or dataset. When Workday finishes these background processes, it changes the state to either Success, Warning, or Failed, depending on the status.
  • Success - When Workday successfully loads all rows into the table, the state is Success.
  • Warning - When Workday successfully loads rows into the table, but encounters some invalid rows, the state is Warning.
  • Failed - When Workday isn't able to move the data to the table or dataset for any reason, the state is Failed.
  • Canceled - When a bucket is in the Processing state for 1 or more hours, you can force Workday to stop trying to move data into the table or dataset. You stop this process using the Cleanup Bucket State task. After running this task on the bucket, the state is Canceled.
  • Loading -
  • Timed out -
  • Aborted -

Valid values:

  • "New"
  • "Queued"
  • "Processing"
  • "Success"
  • "Loading"
  • "Timed out"
  • "Warning"
  • "Aborted"
  • "Canceled"
  • "Failed"
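Because a completed bucket moves through Queued and Processing before settling into a final state, a client typically polls until the bucket leaves those transient states. A minimal sketch, assuming a GET /buckets/{id} endpoint that returns the same state descriptor documented above (adjust the endpoint, auth, and terminal-state set for your API version):

```python
import json
import time
import urllib.request

# States assumed to end a bucket's lifecycle, per the list above.
TERMINAL_STATES = {"Success", "Warning", "Failed", "Canceled",
                   "Timed out", "Aborted"}

def wait_for_bucket(base_url, bucket_id, token, interval=10):
    """Poll a bucket until it leaves the Queued/Processing states.

    Assumption: GET /buckets/{id} returns a JSON body with the
    state.descriptor field documented here.
    """
    while True:
        req = urllib.request.Request(
            f"{base_url}/buckets/{bucket_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            state = json.load(resp)["state"]["descriptor"]
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)
```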

How to start integrating

  1. Add HTTP Task to your workflow definition.
  2. Search for the API you want to integrate with and click on the name.
    • This loads the API reference documentation and prepares the HTTP request settings.
  3. Click Test request to test run your request to the API and see the API's response.
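Outside of a workflow tool, the same request can be sketched with Python's standard library. The base URL and bearer-token authorization are assumptions; substitute your tenant's actual endpoint and credentials:

```python
import json
import urllib.request

def build_bucket_request(base_url, token, body):
    """Construct the POST /buckets request. The bearer-token header is an
    assumption; use whatever authorization your tenant requires."""
    return urllib.request.Request(
        f"{base_url}/buckets",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",  # required request header
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

def create_bucket(base_url, token, body):
    """Send the request and return the parsed JSON response."""
    req = build_bucket_request(base_url, token, body)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```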