
Kong

Kong Raises $175M to Power the API World

No AI without APIs — as we say here at Kong. I’m excited to officially share our oversubscribed Kong Series E financing at a $2 billion valuation! We raised $175 million in both primary and secondary capital, led again by Tiger Global and co-led by new investor Balderton, with strong new participation from long-only investor Teachers’ Venture Growth (the late-stage growth investment arm of the Ontario Teachers’ Pension Plan).

Faster Config Updates in Hybrid Mode with Incremental Config Sync

In Kong Gateway 2.0, we released Hybrid Mode, also known as Control Plane/Data Plane separation. With it, our customers could efficiently and securely deploy clusters of Kong Gateway data planes across on-prem environments and private and public clouds in any combination they wanted, and control the entire cluster from a single point: the Control Plane. Hybrid Mode quickly became popular with customers running large and varied deployments, thanks to the added flexibility and ease of management.
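To give a flavor of the split, here’s a minimal, illustrative kong.conf sketch of the two roles; the certificate paths and Control Plane address are placeholders you’d replace with your own:

    # Control Plane node
    role = control_plane
    cluster_cert = /etc/kong/cluster.crt
    cluster_cert_key = /etc/kong/cluster.key

    # Data Plane node (no database; it pulls its config from the Control Plane)
    role = data_plane
    database = off
    cluster_cert = /etc/kong/cluster.crt
    cluster_cert_key = /etc/kong/cluster.key
    cluster_control_plane = cp.example.com:8005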

Kong Gateway Operator 1.4: Konnect Edition

It’s here! The Kong Gateway Operator release we teased at API Summit is now available for you all to use. KGO 1.4 allows you to configure Kong Konnect using CRDs, attach your DataPlane resources to Konnect with minimal configuration, and even manage the deployment of custom plugins using Kubernetes CRDs. Keep reading to find out more.
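For a rough sense of what managing a data plane through CRDs looks like, here’s an illustrative DataPlane resource; the exact fields for attaching it to Konnect are covered in the post, and the API version and image tag below are assumptions rather than copied from it:

    apiVersion: gateway-operator.konghq.com/v1beta1
    kind: DataPlane
    metadata:
      name: dataplane-example
    spec:
      deployment:
        podTemplateSpec:
          spec:
            containers:
            - name: proxy
              image: kong/kong-gateway:3.8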

Transforming AI API Access: Writer's Kong Konnect Integration

At API Summit 2024, a recurring theme was that there's no AI without APIs. This couldn’t be more true for Writer, the full-stack generative AI platform for enterprises — and a notable user of the Kong Konnect unified API platform. Writer recently launched AI Studio, billed as “the fastest way to build generative AI apps.” AI Studio uses Writer API, Writer Framework, and Writer No Code to give people enterprise-grade generative AI capabilities they can embed in their own apps and services.

Goldman Sachs: Leveraging AI and APIs to Serve Business and Clients

For over a century, Goldman Sachs has been one of the most recognizable names in multinational investment banking and financial services. When they speak, the market listens. So what does Goldman Sachs have to say about APIs and AI when it comes to serving its clients and fueling business success? Kong had the chance to put these questions to Rohan Deshpande, Managing Director at the company, at API Summit 2024.

Swiftly Prototype and Test Your APIs With Konnect Serverless Gateways

At API Summit, Kong announced the upcoming beta release of a brand new gateway deployment method in Kong Konnect: Serverless Gateways. Designed for speed, ease of use, and affordability, it’s the quickest way to deploy a Kong Gateway. Today, we're pumped to announce that all Konnect Plus users can deploy a fully managed Serverless Gateway!

How to Implement API Product Tiering with Kong Konnect

In this blog post, we'll look at how to implement API product tiering, a powerful way to customize access, control usage, and monetize APIs. This model can help you scale efficiently while maintaining a smooth user experience. We'll build it with Kong Konnect, an API lifecycle management platform designed from the ground up for the cloud-native era and delivered as a service. Read on to learn more!
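As a taste of the underlying idea (the post itself covers the Konnect-native approach), one simple way to express tiers on a self-managed gateway is to attach different rate limits to different consumers via the Admin API; the consumer names and limits below are made up for illustration:

    # Two consumers representing a free and a premium tier
    curl -X POST http://localhost:8001/consumers --data username=free-user
    curl -X POST http://localhost:8001/consumers --data username=premium-user

    # Give each tier a different rate limit
    curl -X POST http://localhost:8001/consumers/free-user/plugins \
      --data name=rate-limiting --data config.minute=10
    curl -X POST http://localhost:8001/consumers/premium-user/plugins \
      --data name=rate-limiting --data config.minute=1000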

How to Manage Your API Policies with OPA (Open Policy Agent)

APIs are essential to modern applications, but managing access and security policies can be complex. Traditional access control mechanisms can fall short when flexible, scalable, and fine-grained control over who can access specific resources is needed. This is where OPA (Open Policy Agent) steps in. OPA provides a unified framework for consistently defining and enforcing policies across microservices, APIs, Kubernetes clusters, and beyond. Consistent policy management is essential for enterprises.
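To give a sense of what a policy looks like, here’s a minimal Rego sketch that only allows GET requests under /public/; the shape of the input document (method, path) is an assumption about how the calling service or gateway populates it:

    package httpapi.authz

    import rego.v1

    # Deny by default; only the rule below can grant access.
    default allow := false

    allow if {
        input.method == "GET"
        startswith(input.path, "/public/")
    }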

RAG Application with Kong AI Gateway, AWS Bedrock, Redis and LangChain

Over the last couple of years, Retrieval-Augmented Generation (RAG) architectures have become a rising trend in AI-based applications. Generally speaking, RAG addresses some of the limitations of traditional generative AI models, such as factual inaccuracy and hallucinations, allowing companies to create more contextually relevant AI applications.
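To make the pattern concrete, here’s a minimal retrieve-then-generate sketch in Python using LangChain, Bedrock, and Redis; the model IDs, Redis URL, and sample documents are placeholders (AWS credentials must already be configured), and routing this traffic through Kong AI Gateway is left to the full post:

    # Minimal RAG sketch: embed documents into Redis, retrieve the most
    # relevant ones for a question, and ground a Bedrock model's answer in them.
    from langchain_aws import BedrockEmbeddings, ChatBedrock
    from langchain_community.vectorstores import Redis

    embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0")

    # Index a couple of example documents (placeholders) into a Redis vector store.
    vectorstore = Redis.from_texts(
        texts=[
            "Kong AI Gateway routes and governs LLM traffic.",
            "RAG grounds model answers in retrieved documents.",
        ],
        embedding=embeddings,
        redis_url="redis://localhost:6379",
        index_name="rag-demo",
    )

    question = "What does RAG add to a plain generative model?"
    docs = vectorstore.similarity_search(question, k=2)
    context = "\n".join(doc.page_content for doc in docs)

    llm = ChatBedrock(model_id="anthropic.claude-3-haiku-20240307-v1:0")
    answer = llm.invoke(
        f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    )
    print(answer.content)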