POST /2015-03-31/event-source-mappings/

Creates a mapping between an event source and a Lambda function. Lambda reads items from the event source and invokes the function.

For details about how to configure each type of event source, and about which configuration parameters apply to it, see the service-specific topics in the AWS Lambda Developer Guide.

The following error handling options are available only for stream sources (DynamoDB and Kinesis):

  • BisectBatchOnFunctionError – If the function returns an error, split the batch in two and retry.

  • DestinationConfig – Send discarded records to an Amazon SQS queue or Amazon SNS topic.

  • MaximumRecordAgeInSeconds – Discard records older than the specified age.

  • MaximumRetryAttempts – Discard records after the specified number of retries.

  • ParallelizationFactor – Process multiple batches from each shard concurrently.
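
As a quick orientation, here is a minimal sketch that calls this operation through the AWS SDK for Python (boto3), which exposes it as create_event_source_mapping. The queue ARN and function name are hypothetical placeholders, not values taken from this reference.

    # Minimal sketch using boto3; the queue ARN and function name are
    # hypothetical placeholders.
    import boto3

    lambda_client = boto3.client("lambda")

    response = lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-west-2:123456789012:my-queue",
        FunctionName="MyFunction",  # any of the name formats described below
        BatchSize=10,
    )
    print(response["UUID"])  # unique identifier of the new mapping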

Servers

https://lambda.{region}.amazonaws.com (for example, https://lambda.us-west-2.amazonaws.com)

Request headers

All X-Amz-* headers are standard AWS Signature Version 4 signing headers.

Name Type Required Description
X-Amz-Content-Sha256 String No
X-Amz-Credential String No
Content-Type String Yes The media type of the request body.

Default value: "application/json"

X-Amz-Date String No
X-Amz-Algorithm String No
X-Amz-SignedHeaders String No
X-Amz-Security-Token String No
X-Amz-Signature String No

Request body fields

Name Type Required Description
DocumentDBEventSourceConfig Object No

Specific configuration settings for a DocumentDB event source.

DocumentDBEventSourceConfig.CollectionName String No

The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.

DocumentDBEventSourceConfig.DatabaseName String No

The name of the database to consume within the DocumentDB cluster.

DocumentDBEventSourceConfig.FullDocument String No

Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes. A combined DocumentDB example follows the list of possible values below.

Possible values:

  • "Default"
  • "UpdateLookup"
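
As an illustration of how these fields combine, the sketch below creates a mapping for a DocumentDB change stream through boto3. The cluster ARN, secret ARN, database, and collection names are hypothetical; DocumentDB sources also need credentials supplied through SourceAccessConfigurations and a StartingPosition, both described later in this reference.

    # Hypothetical DocumentDB example; all ARNs and names are placeholders.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:rds:us-west-2:123456789012:cluster:my-docdb-cluster",
        FunctionName="MyFunction",
        StartingPosition="LATEST",
        SourceAccessConfigurations=[
            {"Type": "BASIC_AUTH",
             "URI": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyDocDBSecret"},
        ],
        DocumentDBEventSourceConfig={
            "DatabaseName": "orders",        # database to consume (required)
            "CollectionName": "invoices",    # omit to consume all collections
            "FullDocument": "UpdateLookup",  # include the full document on updates
        },
    )
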
BisectBatchOnFunctionError Boolean No

(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry.

DestinationConfig Object No

A configuration object that specifies the destination of an event after Lambda processes it.

FunctionName String Yes

The name of the Lambda function.

Name formats

  • Function name – MyFunction.

  • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.

  • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.

  • Partial ARN – 123456789012:function:MyFunction.

The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.

BatchSize Integer No

The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

  • Amazon Kinesis – Default 100. Max 10,000.

  • Amazon DynamoDB Streams – Default 100. Max 10,000.

  • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.

  • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.

  • Self-managed Apache Kafka – Default 100. Max 10,000.

  • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.

  • DocumentDB – Default 100. Max 10,000.

MaximumBatchingWindowInSeconds Integer No

The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. You can configure MaximumBatchingWindowInSeconds to any value from 0 seconds to 300 seconds in increments of seconds.

For streams and Amazon SQS event sources, the default batching window is 0 seconds. For Amazon MSK, Self-managed Apache Kafka, Amazon MQ, and DocumentDB event sources, the default batching window is 500 ms. Note that because you can only change MaximumBatchingWindowInSeconds in increments of seconds, you cannot revert to the 500 ms default batching window after you have changed it. To restore the default batching window, you must create a new event source mapping.

Related setting: For streams and Amazon SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
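
To make the related setting concrete, here is a hedged boto3 sketch for a Kinesis source (the stream ARN is a placeholder): because BatchSize is greater than 10, MaximumBatchingWindowInSeconds must be at least 1.

    # Hypothetical Kinesis batching example; the stream ARN is a placeholder.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:kinesis:us-west-2:123456789012:stream/my-stream",
        FunctionName="MyFunction",
        StartingPosition="LATEST",
        BatchSize=500,                     # greater than 10 ...
        MaximumBatchingWindowInSeconds=5,  # ... so the window must be >= 1
    )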

ParallelizationFactor Integer No

(Kinesis and DynamoDB Streams only) The number of batches to process from each shard concurrently. The valid range is 1 to 10; the default is 1.

FunctionResponseTypes[] Array No

(Kinesis, DynamoDB Streams, and Amazon SQS) A list of current response type enums applied to the event source mapping. The only valid value is ReportBatchItemFailures.

MaximumRetryAttempts Integer No

(Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.
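
Taken together, the stream-only error handling options might look like the following boto3 sketch for a Kinesis source; the stream and queue ARNs are placeholders, and the OnFailure destination (part of DestinationConfig, described above) receives metadata about discarded batches.

    # Hypothetical stream error-handling setup; ARNs are placeholders.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:kinesis:us-west-2:123456789012:stream/my-stream",
        FunctionName="MyFunction",
        StartingPosition="TRIM_HORIZON",
        BisectBatchOnFunctionError=True,  # split failing batches in two and retry
        MaximumRetryAttempts=2,           # discard records after two retries
        ParallelizationFactor=4,          # four concurrent batches per shard
        DestinationConfig={
            "OnFailure": {
                "Destination": "arn:aws:sqs:us-west-2:123456789012:failed-batches",
            },
        },
    )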

Queues[] Array No

(MQ) The name of the Amazon MQ broker destination queue to consume.
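
For Amazon MQ, Queues is typically paired with a BASIC_AUTH entry in SourceAccessConfigurations (described below). In this hedged sketch, the broker ARN, queue name, and secret ARN are placeholders.

    # Hypothetical Amazon MQ (ActiveMQ) example; ARNs and names are placeholders.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:mq:us-west-2:123456789012:broker:MyBroker:b-1234",
        FunctionName="MyFunction",
        Queues=["orders-queue"],  # the broker destination queue to consume
        SourceAccessConfigurations=[
            {"Type": "BASIC_AUTH",
             "URI": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyBrokerSecret"},
        ],
    )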

FilterCriteria Object No

An object that contains the filters for an event source.

FilterCriteria.Filters[] Array No

A list of filters.
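
Each filter's Pattern is a JSON string in Lambda's event filtering syntax. As a hedged sketch (the queue ARN and pattern are hypothetical), the mapping below invokes the function only for SQS records whose body reports a temperature above 100.

    # Hypothetical event filtering example; ARN and pattern are placeholders.
    import json

    import boto3

    lambda_client = boto3.client("lambda")

    # Pattern must be a JSON *string*, so serialize the pattern object.
    pattern = {"body": {"temperature": [{"numeric": [">", 100]}]}}

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-west-2:123456789012:sensor-queue",
        FunctionName="MyFunction",
        FilterCriteria={"Filters": [{"Pattern": json.dumps(pattern)}]},
    )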

SelfManagedKafkaEventSourceConfig Object No

Specific configuration settings for a self-managed Apache Kafka event source.

SelfManagedKafkaEventSourceConfig.ConsumerGroupId String No

The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.

SelfManagedEventSource Object No

The self-managed Apache Kafka cluster for your event source.

ScalingConfig Object No

(Amazon SQS only) The scaling configuration for the event source. To remove the configuration, pass an empty value.

ScalingConfig.MaximumConcurrency Integer No

Limits the number of concurrent instances that the Amazon SQS event source can invoke. The valid range is 2 to 1,000.
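
A short sketch of capping concurrency for an SQS source (the queue ARN is a placeholder); passing an empty ScalingConfig in a later update removes the cap.

    # Hypothetical SQS concurrency cap; the queue ARN is a placeholder.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-west-2:123456789012:my-queue",
        FunctionName="MyFunction",
        ScalingConfig={"MaximumConcurrency": 50},  # at most 50 concurrent invocations
    )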

SourceAccessConfigurations[] Array No

An array of authentication protocols or VPC components required to secure your event source.

SourceAccessConfigurations[].URI String No

The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".

SourceAccessConfigurations[].Type String No

The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH". A combined self-managed Apache Kafka example follows the list of possible values below.

  • BASIC_AUTH – (Amazon MQ) The Secrets Manager secret that stores your broker credentials.

  • BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.

  • VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.

  • VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.

  • SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.

  • SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.

  • VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.

  • CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.

  • SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.

Possible values:

  • "VPC_SECURITY_GROUP"
  • "VPC_SUBNET"
  • "SASL_SCRAM_512_AUTH"
  • "BASIC_AUTH"
  • "VIRTUAL_HOST"
  • "CLIENT_CERTIFICATE_TLS_AUTH"
  • "SASL_SCRAM_256_AUTH"
  • "SERVER_ROOT_CA_CERTIFICATE"
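
As promised above, here is a combined sketch for a self-managed Apache Kafka source that pairs SASL/SCRAM-512 authentication with a bootstrap server list; the endpoints, secret ARN, topic, and consumer group ID are hypothetical placeholders.

    # Hypothetical self-managed Apache Kafka example; all values are placeholders.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        FunctionName="MyFunction",
        Topics=["my-topic"],
        StartingPosition="TRIM_HORIZON",
        SelfManagedEventSource={
            "Endpoints": {
                "KAFKA_BOOTSTRAP_SERVERS": [
                    "broker-1.example.com:9092",
                    "broker-2.example.com:9092",
                ],
            },
        },
        SelfManagedKafkaEventSourceConfig={"ConsumerGroupId": "my-consumer-group"},
        SourceAccessConfigurations=[
            {"Type": "SASL_SCRAM_512_AUTH",
             "URI": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyKafkaSecret"},
        ],
    )
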
StartingPosition String No

The position in a stream from which to start reading. Required for Amazon Kinesis, Amazon DynamoDB Streams, and Amazon MSK event sources. AT_TIMESTAMP is supported only for Amazon Kinesis streams and Amazon DocumentDB.

Possible values:

  • "AT_TIMESTAMP"
  • "LATEST"
  • "TRIM_HORIZON"

StartingPositionTimestamp String No

With StartingPosition set to AT_TIMESTAMP, the time from which to start reading. StartingPositionTimestamp cannot be in the future.
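
In boto3 this field is passed as a datetime (the underlying API accepts a Unix timestamp); in the hedged sketch below, the stream ARN and date are placeholders.

    # Hypothetical AT_TIMESTAMP example; the ARN and date are placeholders.
    from datetime import datetime, timezone

    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:kinesis:us-west-2:123456789012:stream/my-stream",
        FunctionName="MyFunction",
        StartingPosition="AT_TIMESTAMP",
        StartingPositionTimestamp=datetime(2024, 1, 1, tzinfo=timezone.utc),
    )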

AmazonManagedKafkaEventSourceConfig Object No

Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.

AmazonManagedKafkaEventSourceConfig.ConsumerGroupId String No

The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
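
An MSK mapping might set the consumer group ID alongside the topic, as in this hedged sketch; the cluster ARN, topic, and group ID are placeholders.

    # Hypothetical Amazon MSK example; all values are placeholders.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:kafka:us-west-2:123456789012:cluster/my-cluster/abcd1234",
        FunctionName="MyFunction",
        Topics=["my-topic"],
        StartingPosition="LATEST",
        AmazonManagedKafkaEventSourceConfig={"ConsumerGroupId": "my-consumer-group"},
    )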

EventSourceArn String No

The Amazon Resource Name (ARN) of the event source.

  • Amazon Kinesis – The ARN of the data stream or a stream consumer.

  • Amazon DynamoDB Streams – The ARN of the stream.

  • Amazon Simple Queue Service – The ARN of the queue.

  • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster.

  • Amazon MQ – The ARN of the broker.

  • Amazon DocumentDB – The ARN of the DocumentDB change stream.

MaximumRecordAgeInSeconds Integer No

(Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is infinite (-1). Valid values: -1 (infinite), or 60 to 604,800 seconds.

TumblingWindowInSeconds Integer No

(Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.

Enabled Boolean No

When true, the event source mapping is active. When false, Lambda pauses polling and invocation.

Default: True

Topics[] Array No

The name of the Kafka topic.

How to start integrating

  1. Add an HTTP Task to your workflow definition.
  2. Search for the API you want to integrate with and click its name.
    • This loads the API reference documentation and prepares the HTTP request settings.
  3. Click Test request to send a test request to the API and see the API's response.