Why we chose Kong

Thu, Dec 14, 2017 3-minute read
Derrick Hinkle

At XO, we use NGINX extensively and to great effect across our stack. It handles complex routing that lets multiple services act as one, distributes load across clusters, and terminates SSL. NGINX is fast, powerful, and rock solid.

However, when we use it to glue multiple smaller web apps together, a wide variety of our engineers need to update configurations and routes on a regular basis. Historically, that has meant pairing with an engineer who is comfortable writing NGINX's custom config DSL, then redeploying the entire NGINX layer. When we've tried to make config changes easier in the past, we've used a combination of Google Sheets and custom scripting to generate NGINX configs, still followed by a redeploy.

This works, but it's cumbersome: the process is slow and can hold up engineers. We wanted an easier way to configure our proxy layer, preferably on the fly, with speed, power, and flexibility comparable to NGINX.

This led us to choose Kong.

Built on NGINX

NGINX (via the OpenResty/ngx_lua module) exposes a Lua scripting interface for adding features without modifying or recompiling the C source code. Kong takes advantage of this to expose a full-featured RESTful API for configuring NGINX, without invasive modifications to NGINX and without building an entirely new proxy platform from scratch. Instead, Kong is written in Lua and runs on top of NGINX.

This means we get the same rock-solid stability and blazingly fast performance of the NGINX we already run everywhere. Kong is additive, and carries less risk than adopting an entirely new proxy platform. It still lets us use NGINX configs where needed, or configure our proxy dynamically via its RESTful Admin API.
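As a rough sketch of what that dynamic configuration looks like, the calls below use the services/routes endpoints of Kong's Admin API (which listens on port 8001 by default); earlier Kong releases used a single /apis endpoint instead. The service name, upstream URL, and path here are made up for illustration:

```shell
# Register a backend as a Kong "service" (name and URL are illustrative)
curl -i -X POST http://localhost:8001/services \
  --data name=reviews-app \
  --data url=http://reviews.internal:3000

# Attach a route: requests matching /reviews get proxied to that service
curl -i -X POST http://localhost:8001/services/reviews-app/routes \
  --data 'paths[]=/reviews'
```

No NGINX config edit, no redeploy: the change takes effect as soon as the API call returns.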

Kong is also designed and packaged to run nicely in Docker. Getting it up and running is as easy as running any other Docker image, and it can be deployed like everything else in our stack. It can even run on a developer's machine for testing, with no messy manual installs.
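A minimal local setup might look like the sketch below, assuming Postgres as Kong's datastore; container names, credentials, and image tags are illustrative, and newer Kong versions use `kong migrations bootstrap` in place of `kong migrations up`:

```shell
# Datastore for Kong's configuration
docker run -d --name kong-database \
  -e POSTGRES_USER=kong -e POSTGRES_PASSWORD=kong -e POSTGRES_DB=kong \
  postgres:9.6

# One-off container to run the schema migrations
docker run --rm --link kong-database:kong-database \
  -e KONG_DATABASE=postgres -e KONG_PG_HOST=kong-database \
  -e KONG_PG_PASSWORD=kong \
  kong kong migrations up

# The gateway itself: 8000 proxies traffic, 8001 is the Admin API
docker run -d --name kong --link kong-database:kong-database \
  -e KONG_DATABASE=postgres -e KONG_PG_HOST=kong-database \
  -e KONG_PG_PASSWORD=kong \
  -e KONG_ADMIN_LISTEN=0.0.0.0:8001 \
  -p 8000:8000 -p 8001:8001 \
  kong
```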

Choice of Control Panels

There are a variety of control panels available for Kong, from an official option to several open-source ones, including the fully featured konga and the simpler kong-dashboard. Konga is a beautiful control panel with a ton of features, but it depends on its own authentication database and has to be configured ahead of time. In our case, we wanted the control panel to be ephemeral, launched on demand on a developer's machine inside a VPN, so we chose the simple but excellent kong-dashboard, spun up in Docker via a Makefile command.
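The command behind such a Makefile target might look like this sketch; the image name and `start --kong-url` flag follow kong-dashboard's published v3 Docker usage (older versions differ), and the Admin API address is an assumption for illustration:

```shell
# Spin up kong-dashboard pointed at Kong's Admin API, then browse to
# http://localhost:8080. The container is removed on exit (--rm), so the
# dashboard stays ephemeral, as we wanted.
docker run --rm -p 8080:8080 pgbi/kong-dashboard start \
  --kong-url http://kong.internal:8001
```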

So far, Kong has been a pleasure to set up and work with. The documentation is well laid out, and the addition of a control panel makes configuration a breeze even for engineers who don't normally work with proxies. It has proven just as stable and performant as NGINX and has let us move faster than before.