Sustaining free compute in a hostile environment

One year ago, Heroku sunset its free tier. Today, we want to reaffirm our commitment to our own free tier, dive into why offering free compute is complicated (we are looking at you, crypto miners), explain how we intend to sustain it, and share why we are so committed to providing it. Long story short: we aim to keep a free tier thanks to how we control our costs.

Building a global deployment platform is hard, here is why

If you have ever tried to go global, you have probably faced a reality check: a whole new set of issues appears once you operate a workload across multiple locations around the globe. It looks like a great idea in theory, but in practice, all of this complexity multiplies the number of failure scenarios to consider!

API Gateway and Service Mesh: Bridging the Gap Between API Management and Zero-Trust Architecture

Discover how API management and service mesh can go hand in hand toward secure platforms. Over the last ten years, Kongers have witnessed hundreds of companies adopting a full lifecycle API management platform and have been working with the people behind the scenes, the “API tribes.” We’ve also learned from the field that API tribes most often have to deal with heterogeneous platforms, infrastructures, and clouds.

The Global Deployment Engine: How We Deploy Across Continents

We previously explored how we built our own Serverless Engine and a multi-region networking layer based on Nomad, Firecracker, and Kuma. But what about the architecture of the engine that orchestrates these components across the world? This is an interesting topic, and we thought it would be useful to share some of its internals. Put on your scuba equipment: this is a deep dive into our architecture and the story of how we built our own global deployment engine.

Top 5 Best Practices for Building Event-Driven Architectures Using Confluent and AWS Lambda

Confluent and AWS Lambda can be combined to build real-time, scalable, fault-tolerant event-driven architectures, ensuring that your application logic executes reliably in response to specific business events. Confluent provides a streaming SaaS solution based on Apache Kafka® and built on Kora: The Cloud Native Apache Kafka Engine, allowing you to focus on building event-driven applications without operating the underlying infrastructure.
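
To make the pattern concrete, here is a minimal sketch of a Lambda handler consuming Kafka records delivered by an event source mapping; the topic, payload shape, and process_order helper are illustrative assumptions, not details from the post.

```python
# handler.py: Lambda function invoked by a Kafka event source mapping
# (works with MSK, self-managed, or Confluent Cloud brokers).
import base64
import json


def process_order(order: dict) -> None:
    # Hypothetical business logic reacting to a business event.
    print(f"received order {order.get('order_id')}")


def lambda_handler(event, context):
    # The event source mapping batches records and groups them by
    # "topic-partition" under event["records"].
    for topic_partition, records in event["records"].items():
        for record in records:
            # Record values arrive base64-encoded.
            payload = json.loads(base64.b64decode(record["value"]))
            process_order(payload)
```

With this wiring, polling, batching, and retries are handled by the event source mapping rather than by your application code.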

Koyeb Metrics: Built-in Observability to Monitor Your Apps' Performance

At Koyeb, we're working to build the most seamless way to deploy apps to production without worrying about infrastructure or orchestration. But there's still plenty to keep you busy at the application layer with performance tuning and troubleshooting. That's why we're introducing Metrics: an easy way to monitor and troubleshoot application performance.

Accelerate Docker builds with cache

Speed and efficiency are paramount during the build process. If you use a Dockerfile to build your container images from source code, you'll want to know about the build cache. In this blog post, we'll talk about what happens when you create a Docker image using a Dockerfile, how caching works with Docker, and how to optimize your Dockerfiles to maximize the benefits of the build cache with Docker and on Koyeb.
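
As a preview, here is a sketch of the classic cache-friendly layer ordering for a Node.js project; the layout and commands are assumptions for illustration, not Koyeb requirements.

```dockerfile
# Order layers from least to most frequently changed to maximize cache hits.
FROM node:20-slim
WORKDIR /app

# Copy only the dependency manifests first: the install layer below is
# rebuilt only when these files change.
COPY package.json package-lock.json ./
RUN npm ci

# Source code changes most often, so copy it last: edits here invalidate
# only the layers from this point on.
COPY . .
RUN npm run build

CMD ["npm", "start"]
```

Because Docker invalidates a layer, and every layer after it, whenever its inputs change, copying the manifests before the rest of the source keeps the expensive install step cached across most code changes.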

Dockerfile Deployment on High-Performance MicroVMs is GA

Today, we are excited to announce that Dockerfile-based deployments are generally available. You can now deploy any GitHub repository that contains a Dockerfile across all our locations worldwide, and use it to run APIs, full-stack applications, and workers at no extra cost. Building and deploying with Dockerfiles offers more flexibility: you can deploy any kind of application, framework, and runtime, including those with custom system dependencies.
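
As an example of those custom system dependencies, here is a hypothetical Dockerfile for a Python API that needs ffmpeg; every name in it is illustrative.

```dockerfile
# A Python API image with a custom system dependency (ffmpeg).
FROM python:3.12-slim

# System packages: anything you can install in the image is fair game.
RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Serve the API (gunicorn is assumed to be listed in requirements.txt).
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```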

Deploy and scale high-performance background jobs with Koyeb Workers

Today, we are thrilled to announce that workers are generally available on Koyeb! You can now easily deploy high-performance workers to process background jobs in all of our locations. Deploy workers straight from a GitHub repository and rely on our built-in CI/CD engine: connect your repository and we build, deploy, and scale your workers on high-performance servers around the world.
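
A worker is just a long-running process with no public endpoint. Here is a minimal sketch of one, assuming jobs are pushed onto a Redis list; the queue technology and job format are your choice, not something Koyeb prescribes.

```python
# worker.py: a long-running background worker polling a Redis list.
import json
import os

import redis  # third-party client: pip install redis

queue = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))


def handle(job: dict) -> None:
    # Hypothetical job processing logic.
    print(f"processing job {job.get('id')}")


if __name__ == "__main__":
    while True:
        # BLPOP blocks until a job is pushed onto the "jobs" list.
        _, raw = queue.blpop("jobs")
        handle(json.loads(raw))
```

Push a repository containing a process like this to GitHub, connect it, and the build, deploy, and scaling steps are handled for you.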