
Latest News

What Is a Data Pipeline and Why Your Ecommerce Business Needs One

Whether you’re a one-person show reselling items on an online marketplace or a large ecommerce enterprise with hundreds of employees, your business generates data. The size of your business influences how much data you generate, sure. But any amount of data is worthless if it isn’t readily accessible. Every business, and especially an ecommerce business, needs a data pipeline.

Kafka best practices: Monitoring and optimizing the performance of Kafka applications

Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. Administrators, developers, and data engineers who run Kafka clusters often struggle to see what is actually happening inside their Kafka implementations.
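A common starting point for that kind of visibility is consumer lag: the gap between the newest offset on a partition and the offset a consumer group has committed. The sketch below shows one way to compute it with the kafka-python client; the broker address, topic name, and group id are placeholder assumptions, not details from the article.

```python
# Minimal sketch: measuring consumer lag, one of the most useful Kafka health signals.
# Assumes the kafka-python client; the "orders" topic, "billing" group, and broker
# address are placeholders for your own cluster.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id="billing",
    enable_auto_commit=False,
)

topic = "orders"
partitions = [TopicPartition(topic, p) for p in consumer.partitions_for_topic(topic)]
end_offsets = consumer.end_offsets(partitions)  # latest offset per partition

for tp in partitions:
    committed = consumer.committed(tp) or 0      # last offset the group committed
    lag = end_offsets[tp] - committed            # messages produced but not yet processed
    print(f"partition {tp.partition}: lag={lag}")

consumer.close()
```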

The Unpleasant Phenomenon In Agile Testing

The iterative approach to software development emerged around the 1990s. What started as an approach for small, co-located, self-sustaining teams was widely adopted by development teams everywhere. The agile mindset paved the way for multiple development frameworks, including the well-known “Scrum methodology”. Over time, processes undergo metamorphoses, during which a few unpleasant practices sneak in and cause distractions within teams.

CPU Profiling in N|Solid [3/10] The best APM for Node, layer by layer

Review your applications in detail with CPU Profiles in N|Solid and find opportunities to improve your code. You can use the CPU Profiler tool in N|Solid to see which processes consume the largest share of CPU time. This functionality gives you an accurate view of how your application is running and where it consumes the most resources.

Performance considerations for loading data into BigQuery

It is not unusual for customers to load very large data sets into their enterprise data warehouse. Whether you are doing an initial data ingestion with hundreds of TB of data or incrementally loading from your systems of record, the performance of bulk inserts is key to getting insights from the data quickly. The most common architecture for batch data loads uses Google Cloud Storage (object storage) as the staging area for all bulk loads.
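As a rough illustration of that staging pattern, the sketch below loads Parquet files from a Cloud Storage bucket into a BigQuery table with the google-cloud-bigquery Python client; the bucket URI, dataset, and table names are placeholders, not anything from the article.

```python
# Minimal sketch: batch-loading staged files from Cloud Storage into BigQuery.
# Uses the google-cloud-bigquery Python client; the URI, dataset, and table
# names below are placeholders for your own project.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,  # columnar formats load efficiently
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-staging-bucket/orders/*.parquet",    # wildcard loads many staged files in one job
    "my_project.analytics.orders",
    job_config=job_config,
)
load_job.result()  # blocks until the load job completes

table = client.get_table("my_project.analytics.orders")
print(f"Loaded {table.num_rows} rows")
```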

Getting Started with XPath in Selenium

Selenium is the industry-standard, open-source test automation framework. To automate cross-browser testing of their web applications with Selenium, developers first need a locator to find web elements, including dynamic ones. Selenium supports several locator strategies, including the XML Path Language (XPath). This blog explains how to use XPath as a web element locator in Selenium.
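For a sense of what an XPath locator looks like in practice, here is a minimal sketch using Selenium’s Python bindings; the URL, field name, and button text are hypothetical stand-ins, not taken from the article.

```python
# Minimal sketch: locating elements by XPath with Selenium's Python bindings.
# The URL, field name, and button text are hypothetical stand-ins.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Relative XPath anchored on attributes is more robust than absolute paths.
username = driver.find_element(By.XPATH, "//input[@name='username']")
username.send_keys("demo-user")

# text() and contains() help with elements that lack stable ids or names.
submit = driver.find_element(By.XPATH, "//button[contains(text(), 'Sign in')]")
submit.click()

driver.quit()
```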