Thank you for your interest in contributing! We love community contributions. Read on to learn how to contribute to AutoMQ. We appreciate first-time contributors, and we are happy to assist you in getting started. If you have questions, just reach out to us via the WeChat group or Slack!
Before getting started, please review AutoMQ's Code of Conduct. Everyone interacting in Slack or WeChat is expected to follow the Code of Conduct.
Most of the issues open for contribution are tagged with 'good first issue'. To claim one, simply reply with 'pick up' in the issue, and the AutoMQ maintainers will assign it to you. If you have any questions about a 'good first issue', feel free to ask; we will do our best to clarify any doubts you may have. Start with the issues tagged 'good first issue'.
The usual workflow of code contribution is (see the command-line sketch after this list):
- Fork the AutoMQ repository.
- Clone the repository locally.
- Create a branch for your feature/bug fix in the format `{YOUR_USERNAME}/{FEATURE/BUG}` (e.g. `jdoe/source-stock-api-stream-fix`).
- Make and commit your changes.
- Push your local branch to your fork.
- Submit a Pull Request so that we can review your changes.
- Link an existing Issue that does not have the `needs triage` label to your Pull Request; a pull request without a linked issue will be closed.
- Write a PR title and description that follow the Pull Request Template.
- An AutoMQ maintainer will trigger the CI tests for you and review the code.
- Review and respond to feedback and questions by AutoMQ maintainers.
- Merge the contribution.
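A minimal command-line sketch of the workflow above, assuming a hypothetical GitHub username `jdoe` and that you have already forked the repository on GitHub:

```shell
# Clone your fork (the URL below is a placeholder for your own fork)
git clone https://github.com/jdoe/automq.git
cd automq

# Create a branch following the {YOUR_USERNAME}/{FEATURE/BUG} convention
git checkout -b jdoe/source-stock-api-stream-fix

# Make and commit your changes
git add .
git commit -m "fix: short description of the change"

# Push the branch to your fork, then open a Pull Request on GitHub
git push -u origin jdoe/source-stock-api-stream-fix
```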
Pull Request reviews are done on a regular basis.
:::info
Please make sure you respond to our feedback/questions and sign our CLA. Pull Requests without updates will be closed due to inactivity.
:::
| Requirement | Version |
|---|---|
| Compiling requirements | JDK 17 |
| Compiling requirements | Scala 2.13 |
| Running requirements | JDK 17 |

Tips: You can refer to the document to install Scala 2.13.
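A quick way to check that your local toolchain matches the table above (output format varies by JDK vendor and Scala installation):

```shell
# Should report a 17.x JDK
java -version

# Should report Scala 2.13.x
scala -version
```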
Building AutoMQ is the same as building Apache Kafka. Kafka uses Gradle as its project management tool. Gradle projects are managed by scripts written in Groovy syntax; within the Kafka project, the main project management configuration lives in the build.gradle file in the root directory, which serves a function similar to the root POM in Maven projects. Gradle also supports a separate build.gradle per module, but Kafka does not use this; all modules are managed by the build.gradle file in the root directory.
It is not recommended to install Gradle manually. The gradlew script in the root directory will download Gradle for you automatically, at the version specified by the script.
```shell
./gradlew jar -x test
```
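Beyond building the jar, a few other standard Kafka Gradle tasks are useful during development; since AutoMQ inherits Kafka's build, these should apply, but check the project README if a task is missing:

```shell
# Run the unit tests
./gradlew test

# Remove all build artifacts
./gradlew clean
```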
Refer to this doc to install LocalStack to mock a local S3 service, or use the AWS S3 service directly.
If you are using LocalStack, create a bucket with the following command:
```shell
aws s3api create-bucket --bucket ko3 --endpoint=http://127.0.0.1:4566
```
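To confirm that LocalStack is up and the bucket exists, you can list the buckets on the mock endpoint (this assumes LocalStack's default port 4566):

```shell
# "ko3" should appear in the returned bucket list
aws s3api list-buckets --endpoint-url=http://127.0.0.1:4566
```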
Modify the `config/kraft/server.properties` file. The following settings need to be changed:
```properties
# The endpoint of the S3 service
s3.endpoint=https://s3.amazonaws.com

# The region of the S3 service
# For Aliyun, you have to set the region to aws-global. See https://www.alibabacloud.com/help/zh/oss/developer-reference/use-amazon-s3-sdks-to-access-oss.
s3.region=us-east-1

# The bucket of the S3 service to store data
s3.bucket=ko3
```
Tips: If you're using LocalStack, make sure to set s3.endpoint to http://127.0.0.1:4566 (not localhost), set the region to us-east-1, and use a bucket that matches the one created earlier.
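For example, a configuration matching the LocalStack setup described above would look like this (a sketch; adjust the bucket name if yours differs):

```properties
# LocalStack: use 127.0.0.1, not localhost
s3.endpoint=http://127.0.0.1:4566
s3.region=us-east-1
# Must match the bucket created earlier
s3.bucket=ko3
```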
Generate a Cluster UUID:
```shell
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
```
Format the metadata directory:
```shell
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
```
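To sanity-check the result, the standard Kafka storage tool can print information about the formatted directory (the `info` subcommand ships with Kafka; if your version differs, run `bin/kafka-storage.sh --help`):

```shell
# Inspect the metadata directory that was just formatted
bin/kafka-storage.sh info -c config/kraft/server.properties
```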
Use the following configuration to run the broker from your IDE:

| Item | Value |
|---|---|
| Main | core/src/main/scala/kafka/Kafka.scala |
| ClassPath | -cp kafka.core.main |
| VM Options | -Xmx1G -Xms1G -server -XX:+UseZGC -XX:MaxDirectMemorySize=2G -Dkafka.logs.dir=logs/ -Dlog4j.configuration=file:config/log4j.properties -Dio.netty.leakDetection.level=paranoid |
| CLI Arguments | config/kraft/server.properties |
| Environment | KAFKA_S3_ACCESS_KEY=test;KAFKA_S3_SECRET_KEY=test |
Tips: If you are using LocalStack, any value for the access key and secret key will work. If you are using a real S3 service, set KAFKA_S3_ACCESS_KEY and KAFKA_S3_SECRET_KEY to a real access key and secret key that have read/write permission on the S3 service.
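If you prefer starting the broker from a terminal rather than the IDE configuration above, a sketch using the standard Kafka startup script looks like this (the dummy credentials assume LocalStack; substitute real ones for a real S3 service):

```shell
# Any value works for LocalStack; a real S3 service needs real credentials
export KAFKA_S3_ACCESS_KEY=test
export KAFKA_S3_SECRET_KEY=test

# Start the broker with the KRaft config edited earlier
bin/kafka-server-start.sh config/kraft/server.properties
```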
We welcome Pull Requests that enhance the grammar, structure, or fix typos in our documentation.
Another crucial way to contribute is by reporting bugs and helping other users in the community.
You're welcome to join the Community Slack to help other users, or report bugs on GitHub.
This contributing document is adapted from that of Airbyte.