About two and a half years ago, we at Kong first announced our Kubernetes Ingress Controller. We were stepping up to invest in the Kubernetes community by building a full-featured API gateway that operated in a Kubernetes-native way. Since then, we – as well as the rest of the broader Kubernetes ecosystem – have hit a number of additional milestones. Our Ingress Controller has run in tens of thousands of Kubernetes clusters, and we’ve continued to expand its functionality and stability.
Today, we’re excited to announce a new research project we’ve been kicking around at Kong: Kong Embedded! If you’ve used the Kong Gateway before or heard us talk about it, one of the things we’re very proud of is that Kong Gateway has a very small resource footprint. It’s a small download, runs blazingly fast even on constrained hardware, and uses very little memory.
We are proud today to announce the future of service connectivity – Kong Konnect! Kong Konnect is the only full-stack connectivity platform designed from the ground up for the cloud native era, delivered as a service. It accelerates the journey to microservices, secures and governs APIs and services, and allows developers to rapidly design, publish and consume APIs and services. Konnect was built with the unique needs of developers, architects and operators in mind.
In 2018, Kong was first positioned in the Gartner Magic Quadrant for Full Life Cycle API Management as a Visionary. This in itself was a very impressive feat, given that Kong had started as an open source project only three years earlier. I believe Kong’s progress on the Magic Quadrant in this short span of time speaks to how we have aligned Kong’s solutions to our customers’ most challenging problems.
Kong for Kubernetes is a Kubernetes Ingress Controller and a full-fledged edge router that can route traffic to any destination of your choice. In addition to Ingress management, it provides enhanced security and management capabilities. With Kong, you can use Kubernetes not just for running your workloads but also for securing and monitoring connectivity between your workloads – all managed via Kubernetes manifests.
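As a sketch of what manifest-driven management looks like, here is a minimal Ingress resource that routes traffic through Kong's ingress class. The `echo` Service name and `/echo` path are hypothetical placeholders; only the `ingressClassName: kong` setting is specific to the Kong Ingress Controller.

```yaml
# Minimal Ingress handled by the Kong Ingress Controller.
# "echo" is a hypothetical backend Service used for illustration.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  ingressClassName: kong   # tell Kubernetes that Kong should manage this Ingress
  rules:
  - http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echo     # hypothetical Service receiving the routed traffic
            port:
              number: 80
```

Applying this manifest with `kubectl apply -f` is all it takes for Kong to start routing matching requests, which is what "managed via Kubernetes manifests" means in practice.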
When we first created Kuma – which means “bear” in Japanese – we dreamed of creating a service mesh that could run across every cluster, every cloud and every application. These are requirements that large organizations must meet to support their application teams across a wide variety of architectures and platforms: VMs, Kubernetes, AWS, GCP and so on.