Lambda polls your Apache Kafka topic partitions for new records and invokes your Lambda function synchronously. To update other AWS resources that your cluster uses, your Lambda function—as well as your AWS Identity and Access Management (IAM) users and roles—must have permission to perform these actions.
This page describes how to grant permission to Lambda and other users of your self-managed Kafka cluster.
To create and store logs in a log group in Amazon CloudWatch Logs, your Lambda function must have the following permissions in its execution role:
- logs:CreateLogGroup
- logs:CreateLogStream
- logs:PutLogEvents
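For example, a policy statement granting these logging permissions might look like the following sketch. It uses a wildcard resource for brevity; you can scope the Resource element down to your function's log group ARN instead.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}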
Your Lambda function might need permission to describe your AWS Secrets Manager secret or your AWS Key Management Service (AWS KMS) customer managed key, or to access your virtual private cloud (VPC).
If your Kafka users access your Apache Kafka brokers over the internet, you must specify a Secrets Manager secret. For more information, see Using SASL/SCRAM authentication.
Your Lambda function might need permission to describe your Secrets Manager secret or decrypt your AWS KMS customer managed key. To access these resources, your function's execution role must have the following permissions:
- secretsmanager:GetSecretValue
- kms:Decrypt
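For example, a policy statement granting these permissions might look like the following sketch. The secret and key ARNs are placeholders; replace them with the ARNs of your own Secrets Manager secret and AWS KMS key.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-kafka-secret",
                "arn:aws:kms:us-east-1:123456789012:key/my-key-id"
            ]
        }
    ]
}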
If only users within your VPC access your self-managed Apache Kafka cluster, your Lambda function needs permission to access your Amazon Virtual Private Cloud (Amazon VPC) resources, including your VPC, subnets, security groups, and network interfaces. To access these resources, your function's execution role must have the following permissions:
- ec2:CreateNetworkInterface
- ec2:DescribeNetworkInterfaces
- ec2:DescribeVpcs
- ec2:DeleteNetworkInterface
- ec2:DescribeSubnets
- ec2:DescribeSecurityGroups
To access other AWS services that your self-managed Apache Kafka cluster uses, Lambda uses the permission policies that you define in your function's execution role.
By default, Lambda isn't permitted to perform the required or optional actions for a self-managed Apache Kafka cluster. You must define these actions in an IAM permissions policy, and then attach the policy to your execution role. This example shows how you might create a policy that allows Lambda to access your Amazon VPC resources.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcs",
                "ec2:DeleteNetworkInterface",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups"
            ],
            "Resource": "*"
        }
    ]
}
For information about creating a JSON policy document on the IAM console, see Creating policies on the JSON tab in the IAM User Guide.
By default, IAM users and roles don't have permission to perform event source API operations. To grant access to users in your organization or account, you might need to create an identity-based policy. For more information, see Controlling access to AWS resources using policies in the IAM User Guide.
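For example, an identity-based policy that lets a user or role manage Lambda event source mappings might look like the following sketch. The actions shown are the Lambda event source mapping API actions; adjust the action list and the Resource element to match your own access requirements.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:CreateEventSourceMapping",
                "lambda:UpdateEventSourceMapping",
                "lambda:DeleteEventSourceMapping",
                "lambda:GetEventSourceMapping",
                "lambda:ListEventSourceMappings"
            ],
            "Resource": "*"
        }
    ]
}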
User name and password authentication for a self-managed Apache Kafka cluster uses Simple Authentication and Security Layer/Salted Challenge Response Authentication Mechanism (SASL/SCRAM). SCRAM uses secure hashing algorithms and doesn't transmit plaintext passwords between the client and server. For more information about SASL/SCRAM authentication, see RFC 5802.
To set up user name and password authentication for your self-managed Kafka cluster, create a secret in AWS Secrets Manager. Store the user name and password for SASL/SCRAM authentication, as issued by your Kafka provider, as a JSON key-value pair. For example:
{
    "username": "ab1c23de",
    "password": "qxbbaLRG7JXYN4NpNMVccP4gY9WZyDbp"
}
For more information, see Tutorial: Creating and retrieving a secret in the AWS Secrets Manager User Guide.