Speedscale creates load tests from recorded traffic, so generating load is core to what we do. As a brief overview, we record traffic from your service in one environment and replay it in another, optionally multiplying the load severalfold. During a replay, the Speedscale load generator makes requests against the system under test (SUT), which is your service, while responses from external dependencies like third-party APIs or a payment processor are optionally mocked out for consistency. Currently the load generator runs as a single process, usually inside a pod in Kubernetes. So how fast is this thing, and how did we get to where we are today?
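To make the replay model concrete, here is a minimal sketch, not Speedscale's actual implementation, of the core loop a single-process load generator runs: take a set of recorded requests, multiply them, and fire them concurrently at the SUT. The `recordedRequest` type, the SUT address, and the `multiplier` value are all illustrative assumptions.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

// recordedRequest is an illustrative stand-in for one captured call:
// just enough detail to replay it against the system under test (SUT).
type recordedRequest struct {
	Method string
	Path   string
}

func main() {
	sut := "http://localhost:8080" // hypothetical SUT address
	recorded := []recordedRequest{
		{Method: "GET", Path: "/healthz"},
		{Method: "GET", Path: "/api/orders"},
	}
	multiplier := 5 // replay the recorded traffic at 5x its original volume

	client := &http.Client{Timeout: 5 * time.Second}
	var wg sync.WaitGroup
	start := time.Now()

	for i := 0; i < multiplier; i++ {
		for _, r := range recorded {
			wg.Add(1)
			go func(r recordedRequest) {
				defer wg.Done()
				req, err := http.NewRequest(r.Method, sut+r.Path, nil)
				if err != nil {
					return
				}
				resp, err := client.Do(req)
				if err != nil {
					return
				}
				resp.Body.Close()
			}(r)
		}
	}
	wg.Wait()
	fmt.Printf("replayed %d requests in %s\n", multiplier*len(recorded), time.Since(start))
}
```

Even this toy version shows where the hard problems live: connection reuse, timeouts, and how much concurrency a single process can sustain before it becomes the bottleneck instead of the SUT.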
In this Postman load testing tutorial, you'll learn how to run a large-scale load test in Kubernetes using your existing Postman collections. Because HTTP services don't have a graphical user interface, it's common to build collections of requests in Postman during development. These collections are useful for quick functionality checks as you build each endpoint. However, as the service grows, you eventually need to test it more realistically and at much higher volume. This is called a load test or stress test. Speedscale is a Production Data Simulation Platform that includes this load and stress testing capability out of the box.
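As a rough illustration of what a collection-driven test does under the hood, the sketch below reads a Postman collection export (assuming the standard v2.1 layout with a flat `item` list and no folders) and replays each request once; the file name is hypothetical, and a real load test would repeat this loop from many concurrent workers.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// Minimal slice of the Postman collection v2.1 schema: only the fields
// needed to replay each request are modeled here.
type collection struct {
	Item []struct {
		Name    string `json:"name"`
		Request struct {
			Method string `json:"method"`
			URL    struct {
				Raw string `json:"raw"`
			} `json:"url"`
		} `json:"request"`
	} `json:"item"`
}

func main() {
	// Hypothetical export of an existing Postman collection.
	data, err := os.ReadFile("my-service.postman_collection.json")
	if err != nil {
		panic(err)
	}
	var c collection
	if err := json.Unmarshal(data, &c); err != nil {
		panic(err)
	}

	// Fire each request in the collection once and report the status.
	for _, item := range c.Item {
		req, err := http.NewRequest(item.Request.Method, item.Request.URL.Raw, nil)
		if err != nil {
			fmt.Printf("%s: %v\n", item.Name, err)
			continue
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Printf("%s: %v\n", item.Name, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s: %s\n", item.Name, resp.Status)
	}
}
```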
By combining traffic replay capabilities from Speedscale with observability from Datadog, SRE teams can deploy with confidence. It makes sense to centralize your monitoring data into as few silos as possible. With this integration, Speedscale pushes the results of various traffic replay conditions into Datadog so they can be combined with your other observability data. Previewing application performance by simulating production conditions leads to better release decisions. Moreover, a baseline against which to compare production metrics can provide even earlier warning of degradation and scaling problems. Speedscale joined the Datadog Marketplace so customers can shift-left the discovery of performance issues.
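To show the shape of such an integration, here is a minimal sketch of pushing one replay result into Datadog as a custom metric via the public v1 series API. The metric name, tags, and values are made up for illustration; the Speedscale integration handles this plumbing for you.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

// pushLatency submits a single gauge metric to Datadog's v1 series API.
// The metric name and tags are illustrative, not Speedscale's actual schema.
func pushLatency(apiKey string, p99Millis float64) error {
	payload := map[string]any{
		"series": []map[string]any{{
			"metric": "replay.latency.p99",
			"points": [][]any{{time.Now().Unix(), p99Millis}},
			"type":   "gauge",
			"tags":   []string{"env:staging", "scenario:5x-load"},
		}},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return err
	}
	req, err := http.NewRequest("POST", "https://api.datadoghq.com/api/v1/series", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("DD-API-KEY", apiKey)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("Datadog responded:", resp.Status)
	return nil
}

func main() {
	if err := pushLatency(os.Getenv("DD_API_KEY"), 212.0); err != nil {
		panic(err)
	}
}
```

Once replay results land in Datadog as metrics like this, they can sit on the same dashboards and monitors as your production telemetry, which is what makes the pre-release baseline comparison possible.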