
Observability

Managing Cloud Service Logs: Why It's Difficult and How to Simplify It

Logs are one of the three key “pillars” of observability, and cloud environments are no exception. You can’t know what’s happening in your cloud without analyzing cloud service logs, which allow you to audit and monitor workflows within your cloud. That said, cloud logging is a unique beast in certain respects.
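To make the auditing point a little more concrete, here is a minimal sketch of a first pass over cloud service logs. It assumes JSON-structured log lines with made-up field names ("timestamp", "severity", "service", "message"), not any particular provider's schema; it just counts entries per service and surfaces error-level records.

```python
import json
from collections import Counter

# Hypothetical JSON-structured log lines of the kind many cloud services emit.
# The field names used here are assumptions for illustration, not a real schema.
RAW_LOGS = """\
{"timestamp": "2023-05-01T12:00:00Z", "severity": "INFO", "service": "billing", "message": "invoice generated"}
{"timestamp": "2023-05-01T12:00:02Z", "severity": "ERROR", "service": "billing", "message": "payment gateway timeout"}
{"timestamp": "2023-05-01T12:00:05Z", "severity": "WARN", "service": "auth", "message": "token close to expiry"}"""


def audit(lines, severities=("ERROR",)):
    """Count entries per service and surface entries at the given severities."""
    per_service = Counter()
    flagged = []
    for line in lines.splitlines():
        entry = json.loads(line)
        per_service[entry["service"]] += 1
        if entry["severity"] in severities:
            flagged.append(entry)
    return per_service, flagged


if __name__ == "__main__":
    counts, errors = audit(RAW_LOGS)
    print("entries per service:", dict(counts))
    for e in errors:
        print("needs attention:", e["timestamp"], e["service"], e["message"])
```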

Fundamentals of Data Observability Driven Development

Before the data era, data engineers and data scientists had few resources, few technologies, and little data to build with. But they also had little pressure from the business to create new value, and above all, it was easier to find time to write, check, and implement their applications. This had the advantage of better quality control.

CodeZero Launches Surf, a New Developer Tool for Observability in Pre-production Kubernetes Environments

CodeZero announces the general availability of Surf, a developer tool that provides real-time, collaborative, rich querying for pre-production Kubernetes environments. Surf allows developers and teams to demystify Kubernetes artifacts, provides powerful troubleshooting capabilities, and improves collaboration.

Why Legacy Observability Tools Don't Work for Modern Data Stacks

Whether they know it or not, every company has become a data company. Data is no longer just a transactional byproduct, but a transformative enabler of business decision-making. In just a few years, modern data analytics has gone from science project to the backbone of business operations, generating insights, fueling innovation, improving customer satisfaction, and driving revenue growth. But none of that can happen if data applications and pipelines aren’t running well.

Why Data Engineers, Scientists, and Analysts Need Data Observability

Data has become the lifeblood of most organizations. Yet, despite using data almost daily to make critical business decisions, few organizations have complete visibility into the health and usage of their data. Moreover, as data usage has accelerated, so too has the complexity of data systems, increasing the risk of data-related issues and making it even harder to identify and resolve them quickly.

Understanding SLOs' Role in Data Quality Management

In our last article, we introduced the topic of SLAs (Service Level Agreements) and how they are necessary within organizations to help both consumers and producers agree on expectations around data usage and quality. Not only do SLAs provide visibility into what needs to be achieved to ensure data reliability and avoid surprises, but they also create communication flows between consumers and producers that help ensure alignment on expectations.
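To ground the SLA/SLO idea, here is a minimal sketch of a data-freshness SLO check. The 24-hour freshness threshold and 99% attainment target are illustrative assumptions, not values from the article or any specific agreement; the function simply measures how many refresh intervals met the agreed threshold.

```python
from datetime import datetime, timedelta, timezone

# Illustrative assumptions: the table should refresh at least once every 24 hours,
# and 99% of observed refresh intervals must meet that threshold.
FRESHNESS_THRESHOLD = timedelta(hours=24)
SLO_TARGET = 0.99


def freshness_slo_attainment(update_times):
    """Return the fraction of refresh intervals that met the freshness threshold."""
    update_times = sorted(update_times)
    intervals = [later - earlier for earlier, later in zip(update_times, update_times[1:])]
    if not intervals:
        return 1.0
    on_time = sum(1 for gap in intervals if gap <= FRESHNESS_THRESHOLD)
    return on_time / len(intervals)


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    # Four observed refreshes; the 26-hour gap between the first two misses the threshold.
    updates = [now - timedelta(hours=h) for h in (75, 49, 26, 2)]
    attainment = freshness_slo_attainment(updates)
    print(f"SLO attainment: {attainment:.2%} (target {SLO_TARGET:.0%})")
    print("SLO met" if attainment >= SLO_TARGET else "SLO breached: notify producers")
```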

GraphQL vs REST API: Which is better for API Observability?

API providers need to observe their APIs to get meaningful data about whether and how they are consumed in practice. API observability is a form of monitoring that passively logs API traffic to an observability service. Different from traditional API monitoring, with API observability you monitor interactions to improve developer experience, understand how customers use your API, and troubleshoot your API. Observing REST APIs is well understood and supported, but not every API is a REST API.
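Passive traffic logging of the kind described here can be sketched as a small piece of middleware. The WSGI example below is a hypothetical illustration, not any vendor's agent: it records method, path, status, and latency for each request and writes them to a local logger, where a real deployment would forward them to an observability service.

```python
import logging
import time
from wsgiref.simple_server import make_server

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("api.observability")


class TrafficLogger:
    """WSGI middleware that passively records request metadata.

    A real setup would ship these records to an observability backend;
    here they go to a local logger, purely for illustration.
    """

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        started = time.perf_counter()
        captured = {}

        def capturing_start_response(status, headers, exc_info=None):
            captured["status"] = status
            return start_response(status, headers, exc_info)

        response = self.app(environ, capturing_start_response)
        elapsed_ms = (time.perf_counter() - started) * 1000
        log.info("%s %s -> %s (%.1f ms)",
                 environ.get("REQUEST_METHOD"),
                 environ.get("PATH_INFO"),
                 captured.get("status"),
                 elapsed_ms)
        return response


def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]


if __name__ == "__main__":
    # Serves http://localhost:8000/ and logs each request; stop with Ctrl+C.
    with make_server("", 8000, TrafficLogger(hello_app)) as server:
        server.serve_forever()
```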

Why SLAs Are Critical to Ensuring Data Reliability

As far back as the 1920s, Service Level Agreements (SLAs) were used to guarantee a certain level of service between two parties. Back then, it was the on-time delivery of printed AR reports. Today, SLAs define service standards such as uptime and support responsiveness to ensure reliability. The benefit of having an SLA in place is that it establishes trust at the start of new customer relationships and sets expectations.

Beyond Observability for the Modern Data Stack

The term “observability” means many things to many people. A lot of energy has been spent—particularly among vendors offering an observability solution—in trying to define what the term means in one context or another. But instead of getting bogged down in the “what” of observability, I think it’s more valuable to address the “why.” What are we trying to accomplish with observability? What is the end goal?

Data Observability Driven Development | The perfect analogy for beginners

When explaining what Data Observability Driven Development (DODD) is and why it should be a best practice in any data ecosystem, using food traceability as an analogy can be helpful. The purpose of food traceability is to be able to know exactly where food products or ingredients came from and what their state is at each moment in the supply chain. It is a standard practice in many countries, and it applies to almost every type of food product.