
Overcoming API Development Challenges: API Standardization and Governance

In Episode 1 of our "Overcoming API Development Challenges" series, we look at how software development teams can use tooling to standardize their APIs and create enforceable governance practices, highlighting the role a tool like SwaggerHub can play in an organization's API design.

How Customer Success Teams Should Monitor Account Health and API Usage

Leading customer success for a developer-first or API-first business is quite different from doing so for traditional enterprise software. The best API products are designed to be self-serve and hands-off, meaning customers rarely need to sign into a web portal once implementation is done. If you’re a Stripe or Twilio customer, when’s the last time you signed into their web portal? Hopefully not recently; a login often signals a problem.
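To make that concrete, here is a minimal sketch of usage-based health monitoring: it flags accounts whose weekly API call volume drops sharply, a common early churn signal. The data shape, names, and threshold here are all hypothetical, not taken from any particular platform.

```python
# Hypothetical sketch: flag accounts whose API usage is dropping,
# an early warning sign of churn for a self-serve API product.
from typing import Dict, List

def at_risk_accounts(
    weekly_calls: Dict[str, List[int]],  # account_id -> API calls per week, oldest first
    drop_threshold: float = 0.5,         # flag if last week fell below 50% of the week before
) -> List[str]:
    flagged = []
    for account, calls in weekly_calls.items():
        if len(calls) >= 2 and calls[-2] > 0 and calls[-1] / calls[-2] < drop_threshold:
            flagged.append(account)
    return flagged

usage = {
    "acme": [10_000, 9_800, 4_200],   # sharp drop: flag for outreach
    "globex": [1_200, 1_250, 1_300],  # healthy, growing usage
}
print(at_risk_accounts(usage))  # ['acme']
```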

Inventory management with BigQuery and Cloud Run

Many people think of Cloud Run just as a way of hosting websites. Cloud Run is great at that, but there's so much more you can do with it. Here we'll explore how you can use Cloud Run and BigQuery together to create an inventory management system. I'm using a subset of the Iowa Liquor Control Board data set to create a smaller inventory file for my fictional store. In my inventory management scenario, a CSV file dropped into Cloud Storage triggers a bulk load of new inventory.
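As a rough sketch of that flow, the handler below assumes a Cloud Run service wired to Cloud Storage object events (for example via Eventarc); the dataset and table names are placeholders. It reads the event payload and bulk-loads the dropped CSV into a BigQuery table.

```python
# Minimal sketch of a Cloud Run handler that bulk-loads a CSV from
# Cloud Storage into BigQuery. Dataset/table names are placeholders.
from flask import Flask, request
from google.cloud import bigquery

app = Flask(__name__)
client = bigquery.Client()

@app.route("/", methods=["POST"])
def load_inventory():
    # Eventarc delivers the Cloud Storage object metadata as JSON.
    event = request.get_json()
    uri = f"gs://{event['bucket']}/{event['name']}"

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,   # assume a header row
        autodetect=True,       # infer the schema from the file
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    job = client.load_table_from_uri(uri, "store.inventory", job_config=job_config)
    job.result()  # wait for the load job to finish
    return f"Loaded {uri}", 200
```

Because BigQuery does the heavy lifting, the Cloud Run container stays small and stateless.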

Protecting Personal Data: GDPR, CCPA, and the Role of ETL

The growth of data has been exponential. By 2023, an estimated 463 exabytes (EB) will be created every day. To put this into perspective, one exabyte is equivalent to one billion gigabytes. By 2021, 320 billion emails will be sent daily, many of them containing personal information. Data collected around the globe contains the type of information that businesses leverage to make more informed decisions.

Using Xplenty with Parquet for Superior Data Lake Performance

Building a data lake in Amazon S3 and using Amazon Redshift Spectrum to query the data from a Redshift cluster is a common practice. However, when it comes to boosting performance, there are some tricks worth learning. One of them is storing data in Parquet format, which Redshift considers a best practice. Here's how to use Parquet format with Xplenty for the best data lake performance.
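To give a feel for the payoff, the snippet below is a generic Python sketch (not Xplenty itself; the file paths and bucket name are placeholders) that converts a CSV extract to compressed, columnar Parquet before it lands in S3, so Spectrum can read only the columns a query actually touches.

```python
# Generic sketch: convert a CSV extract to Snappy-compressed Parquet
# before it lands in S3. Paths and the bucket name are placeholders.
import pandas as pd

df = pd.read_csv("sales_extract.csv")

# Parquet stores each column separately, so Redshift Spectrum scans only
# the columns a query references instead of reading entire rows.
df.to_parquet(
    "s3://my-data-lake/sales/sales_extract.parquet",  # writing to s3:// needs the s3fs package
    engine="pyarrow",
    compression="snappy",
)
```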