- Website: getkong.org
- Docs: getkong.org/docs
- Mailing List: Google Groups
- Gitter Chat: mashape/kong
Kong was built for securing, managing and extending APIs and microservices. If you're building for the web, mobile or IoT, you will likely need to implement common functionality on top of your actual software. Kong can help by acting as a gateway for any HTTP resource while providing logging, authentication and other functionality through plugins.
- CLI: Control your Kong cluster from the command line just like Neo in The Matrix.
- REST API: Kong can be operated with its RESTful API for maximum flexibility (see the sketch after this feature list).
- Scalability: Distributed by nature, Kong scales horizontally simply by adding nodes.
- Performance: Kong handles load with ease by scaling and using nginx at the core.
- Plugins: Extendable architecture for adding functionality to Kong and APIs.
- Logging: Log requests and responses to your system over TCP, UDP or to disk.
- Monitoring: Live monitoring provides key load and performance server metrics.
- Authentication: Manage consumer credentials such as query-string and header tokens.
- Rate-limiting: Block and throttle requests based on IP or authentication.
- Transformations: Add, remove or manipulate HTTP params and headers on-the-fly.
- CORS: Enable cross-origin requests to your APIs that would otherwise be blocked.
- Anything: Need custom functionality? Extend Kong with your own Lua plugins!
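As a taste of the RESTful Admin API mentioned above, here is a minimal sketch of registering an upstream service with Kong. It assumes the Admin API listens on its default port 8001 and that the payload uses the `name`, `public_dns` and `target_url` fields of early Kong releases; the host names are placeholders, so check the documentation for the exact fields in your version:

```bash
# Register an upstream API with Kong through the Admin API (port 8001 assumed).
# Field names follow early Kong releases and may differ in your version.
curl -i -X POST http://127.0.0.1:8001/apis/ \
  -d "name=my-upstream" \
  -d "public_dns=my-upstream.example.com" \
  -d "target_url=http://my-upstream.internal:3000"
```

Requests reaching Kong's proxy port with a matching `Host` header are then forwarded to the configured `target_url`.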
Powered by NGINX and Cassandra with a focus on high performance and reliability, Kong runs in production at Mashape where it has handled billions of API requests for over ten thousand APIs.
Full versioned documentation is available at GetKong.org.
We set Kong up on AWS and load tested it to gather some performance metrics. The setup consisted of two m3.medium EC2 instances: one running Kong, the other running Cassandra.
Both servers had their limits increased:

- Added `fs.file-max=80000` to `/etc/sysctl.conf`
- Added the following lines to `/etc/security/limits.conf`:

```
* soft nproc 80000
* hard nproc 80000
* soft nofile 80000
* hard nofile 80000
```
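For reference, a minimal sketch of applying the sysctl setting on a typical Linux host (assuming root access; the `limits.conf` entries only take effect for new login sessions):

```bash
# Append the file-descriptor limit and reload kernel parameters (run as root)
echo "fs.file-max=80000" >> /etc/sysctl.conf
sysctl -p

# Verify the new value took effect
sysctl fs.file-max
```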
For these benchmarks, a third server running an optimized "hello world" web server written in C was used as the target. While this is not exactly "real world" usage, keeping the target from becoming a bottleneck should give a more accurate assessment of Kong itself.
After adding the `target_url` to the Kong instance, we load tested while ramping up from 1 to 2,000 concurrent connections over 120 seconds. Altogether 117,185 requests, averaging roughly 976 req/second (about 84,373,200 req/day), went through Kong and back with only a single timeout.
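The exact ramp-up tooling isn't covered here, but as a rough (non-ramping) approximation you could hold a fixed concurrency against Kong's proxy port with a tool such as `wrk`. The port and host names below are assumptions, not part of the original setup:

```bash
# Rough approximation of the 120-second run: wrk holds 2000 open connections
# for the whole duration instead of ramping from 1 to 2000.
# Port 8000 is assumed to be Kong's proxy port; adjust for your configuration.
wrk -t 8 -c 2000 -d 120s \
  -H "Host: my-upstream.example.com" \
  http://KONG_HOST:8000/
```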
Please see CONTRIBUTING.md if you would like your changes merged into Kong.
- Clone the repository and make it your working directory.
- Run `[sudo] make install`. This will build and install the `kong` luarock globally.
- Run `make dev`. This will install the development dependencies and create your environment configuration files: `kong_TESTS.yml` and `kong_DEVELOPMENT.yml`.
- Run the tests: `make test-all`
- Run Kong with the development configuration file: `kong start -c kong_DEVELOPMENT.yml`
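Once started, you can quickly confirm the node is up. This assumes the default proxy and Admin API ports (8000 and 8001); adjust if your configuration file overrides them:

```bash
# Admin API root (port 8001 assumed): should return node information as JSON
curl -i http://127.0.0.1:8001/

# Proxy port (8000 assumed): a response from Kong (even a 404 for an unknown
# API) confirms the proxy is accepting traffic
curl -i http://127.0.0.1:8000/
```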
When developing, use the `Makefile` to perform the following operations:
| Name       | Description                                                  |
|------------|--------------------------------------------------------------|
| `install`  | Install the Kong luarock globally                            |
| `dev`      | Set up your development environment                          |
| `run`      | Run the DEVELOPMENT environment (`kong_DEVELOPMENT.yml`)     |
| `seed`     | Seed the DEVELOPMENT environment (`kong_DEVELOPMENT.yml`)    |
| `drop`     | Drop the DEVELOPMENT environment (`kong_DEVELOPMENT.yml`)    |
| `lint`     | Lint Lua files in `kong/`                                    |
| `coverage` | Run unit tests + coverage report (only unit-tested modules)  |
| `test`     | Run the unit tests                                           |
| `test-all` | Run all unit and integration tests at once                   |
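In practice, a contribution loop typically chains the targets above, for example:

```bash
make dev        # set up the development environment and config files
make run        # start Kong with kong_DEVELOPMENT.yml
make lint       # lint the Lua sources under kong/
make test       # fast feedback from the unit tests
make test-all   # run unit + integration tests before opening a pull request
```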
Kong is provided under the MIT License.