Kong is easy to get up and running: start an instance, configure a service, configure a route pointing to that service, and off it goes routing requests, applying any plugins you enable along the way. But Kong can do a lot more than connect clients to services via routes.
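That service-plus-route flow can be captured in Kong's declarative configuration format; the sketch below is a minimal, hypothetical `kong.yml` in which the service name, upstream URL, path, and rate-limiting plugin are all placeholder examples, not taken from the original post.

```yaml
# Hypothetical declarative config (kong.yml): one service, one route,
# and one plugin enabled along the way. Names and URLs are examples.
_format_version: "2.1"
services:
  - name: example-service
    url: http://example.internal:8080   # upstream the service points at
    routes:
      - name: example-route
        paths:
          - /example                    # requests to /example are proxied upstream
    plugins:
      - name: rate-limiting             # applied to traffic through this service
        config:
          minute: 60
```

Loading a file like this (for example via Kong's DB-less mode) configures the same entities the step-by-step setup above creates one at a time.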
Kubernetes is fundamentally changing container orchestration; is your stack ready to support it at scale? Watch the talk recording to learn how Kong’s Kubernetes Ingress Controller can power your APIs and microservices on top of the Kubernetes platform. Hear Kong engineers walk through the process of setting up the Ingress Controller and review its various features.
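To make the idea concrete, here is a minimal, hypothetical Ingress manifest that hands routing over to Kong by setting the `kong` ingress class; the host, path, and backend service are placeholders, not details from the talk.

```yaml
# Hypothetical Ingress routing traffic through Kong's Ingress Controller.
# Host and backend service names are placeholder examples.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: kong        # tells the cluster Kong should satisfy this Ingress
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```

With a manifest like this applied, Kong proxies requests for `api.example.com/api` to the named Kubernetes service.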
Last June, we released the first phase of multi-cloud deployment options for Qlik Sense Enterprise – the first in a series of steps toward delivering unparalleled flexibility and choice in how you deploy analytics across public and private clouds and on premises.
In a hundred years’ time, when the world’s tech writers look back on our primitive technology and chart the rise of the smartphone, they’ll pinpoint three years as being crucial to the technology. The first will be 1994, which saw the release of the IBM Simon, a prototype for the smartphones we recognize today. The second will be 2007, when the first iPhone went on sale. The third will be 2019.
In my last blog post, I described how to achieve continuous integration, delivery, and deployment of Talend Jobs into Docker containers with Maven and Jenkins. That's a good start for reliably building your containerized jobs, but the journey doesn't end there. The next step is scheduling, orchestrating, and monitoring them.
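One common way to schedule a containerized job (the post itself may use a different orchestrator) is a Kubernetes CronJob; the sketch below assumes the job image built in the previous post has been pushed to a registry, and the image name and schedule are hypothetical.

```yaml
# Hypothetical CronJob running a containerized Talend Job nightly at 02:00.
# The registry path and schedule are placeholder examples.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: talend-job-nightly
spec:
  schedule: "0 2 * * *"           # standard cron syntax: daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never    # let the Job controller handle failures
          containers:
            - name: talend-job
              image: registry.example.com/talend/my-job:latest
```

The same pattern extends to monitoring: each run is a Kubernetes Job whose status and logs can be inspected with standard tooling.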
From personalizing your initial prompt to asking the right questions, here are some proven ways to improve the response rate of your live chat efforts.
In a previous post, we explained how the team at Kong thinks of the term “service mesh.” In this post, we’ll start digging into the workings of Kong deployed as a mesh. We’ll talk about a hypothetical example of the smallest possible deployment of a mesh, with two services talking to each other via two Kong instances – one local to each service.