
Latest Videos

Sauce Labs Community Office Hours: Troubleshooting Test Results - Solving Failures with Confidence

Struggling to make sense of test failures? In this Sauce Labs Community Office Hours session, we’ll walk you through practical troubleshooting techniques to help you quickly identify root causes, resolve issues efficiently, and boost the reliability of your automated test suite. Learn how to approach test failures with confidence and keep your automation running smoothly.
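
The session itself isn't reproduced here, but as a practical companion: a common first step in triaging failures is making sure each test reports its outcome back to Sauce Labs, so failed jobs are labeled in the dashboard instead of showing as "complete." Below is a minimal pytest/Selenium sketch of that pattern; the endpoint region and capability values are assumptions you would adjust for your own account.

```python
# Minimal sketch: report pass/fail status to Sauce Labs from pytest,
# so failed jobs are easy to find and triage in the dashboard.
# Assumes SAUCE_USERNAME / SAUCE_ACCESS_KEY are set in the environment.
# The hook below belongs in conftest.py.
import os
import pytest
from selenium import webdriver

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Standard pytest pattern: expose each phase's result on the test item.
    outcome = yield
    setattr(item, "rep_" + call.when, outcome.get_result())

@pytest.fixture
def driver(request):
    options = webdriver.ChromeOptions()
    options.set_capability("sauce:options", {
        "username": os.environ["SAUCE_USERNAME"],
        "accessKey": os.environ["SAUCE_ACCESS_KEY"],
        "name": request.node.name,  # label the Sauce job with the test name
    })
    drv = webdriver.Remote(
        command_executor="https://ondemand.us-west-1.saucelabs.com/wd/hub",
        options=options,
    )
    yield drv
    # Report the outcome via Sauce Labs' custom executor command.
    rep = getattr(request.node, "rep_call", None)
    status = "passed" if rep is not None and rep.passed else "failed"
    drv.execute_script(f"sauce:job-result={status}")
    drv.quit()
```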

Is AI's Evolution Making a Positive Impact?

Are we really in the “future of AI,” or are we just learning how to coexist with it? On this episode of Test Case Scenario, we explore:

- Why critical thinking and clear expectations make AI a tool, not a threat.
- How understanding its strengths—and its limits—keeps it a net positive.
- The gradual evolution of AI’s role in productivity, creativity, and problem-solving.

The key to success? Making AI work for what it does best.

Chasing the Silver Bullet in Tech

Are we stuck in a cycle of quick fixes and passing the buck? Tech problems today feel eerily familiar, just on a faster timeline—two-week sprints instead of nine-month cycles. Yet, management keeps hunting for the elusive silver bullet, often leaving the cleanup for the next in line. Is the short tenure of tech roles fueling this carousel of deferred accountability? On this episode of Test Case Scenario, we explore why real innovation requires more than just quick fixes and flashy solutions. Let’s break the cycle.

Smarter AI Adoption

AI promises efficiency, but are we implementing it the right way? Marcus Merrell shares what’s critical for tracking AI usage and its impact: “Here’s the prompt I used to get this tool, and here are the changes I made to make it work.” This kind of transparency is non-negotiable. Start small with a group of mixed experience levels to uncover both benefits and risks before scaling. If AI adds overhead without solving core issues, is it truly worth the investment?
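
One lightweight way to practice that transparency is to log every AI-assisted change as a structured, auditable record. The sketch below is purely illustrative; the `log_ai_change` helper, its fields, and the file name are assumptions rather than any established schema.

```python
# Illustrative sketch of an AI-usage log: each entry records the prompt,
# the tool, and the human edits needed to make the output work.
# The schema and file name here are assumptions, not a standard.
import json
import datetime
from pathlib import Path

LOG_PATH = Path("ai_usage_log.jsonl")

def log_ai_change(tool: str, prompt: str, changes_made: str, accepted: bool) -> None:
    """Append one auditable record of an AI-assisted change."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,                  # e.g. "Copilot", "ChatGPT"
        "prompt": prompt,              # the prompt used to get the output
        "changes_made": changes_made,  # edits required before it worked
        "accepted": accepted,          # did the change ship?
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: the kind of record Merrell describes.
log_ai_change(
    tool="ChatGPT",
    prompt="Write a retry wrapper for flaky Selenium clicks",
    changes_made="Tightened the exception list; added a max-delay cap",
    accepted=True,
)
```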

Is AI Falling Short of Expectations?

AI tools like Copilot and ChatGPT promised to revolutionize development workflows, but are they delivering or just creating new headaches? The stats speak volumes:

- 92% of developers say AI increases the blast radius of bad code
- 67% are spending more time debugging AI-generated code
- 59% face deployment errors at least half the time when using AI tools

So, are we making strides toward innovation or spinning in circles of hype? Marcus Merrell put it best: “This stuff was supposed to already start paying off by now. So why isn’t it working?”

More AI, More Problems?

AI was supposed to be the game-changer for developer productivity, but reality isn’t living up to the hype. GPT-4 took 50x the resources of GPT-3.5, yet the improvement? Barely noticeable. AI-generated code isn’t saving time—it’s creating more debugging, security headaches, and compliance risks. The real issue? It’s not the AI—it’s how we’re using it. AI isn’t freeing up developers for innovation—it’s adding more noise. So, what’s the fix? Catch the full conversation on the latest Test Case Scenario.

Rethinking AI's Role in Leadership, Governance, and Productivity

AI is reshaping development, but is it meeting expectations? In this episode of Test Case Scenario, Jason Baum and Marcus Merrell explore the evolving role of AI in software development, drawing insights from recent industry reports. They discuss whether AI tools are living up to their promise of reducing burnout and boosting productivity while examining the complexities of debugging, security risks, and governance gaps.

The Secret to Better Collaboration? Speak the Same Language

When teams use different programming languages, code becomes territorial. Your code. My code. Your problem. My problem. But when teams align on a single language, those barriers disappear. Suddenly, collaboration is effortless. Debugging isn’t someone else’s job—it’s everyone’s. For Selenium’s developers, every feature has to work across five languages. AI helps bridge the gap, but the real game-changer? A shared language that makes moving across the codebase seamless.

AI Won't Fix Testing, but It Might Break It

AI is being treated as a shortcut for quality. Is that a dangerous gamble? There are a few industry-wide experiments happening right now:

- Developers are being pushed to own quality, but without dedicated testers, gaps are forming.
- AI is being used as a crutch for testing, but can it actually replace critical thinking?

The real risk? We won’t know how badly this approach fails until it’s too late.

AI Won't Replace Testers: It'll Challenge Them to Think Smarter

AI isn’t a shortcut to perfect testing. It won’t magically fix your processes or write flawless code. But if used right, it will push testers and developers to think more critically. Instead of asking if AI should be part of testing, the real question is how to make it a true collaborator. That means:

- Using AI to highlight gaps, not blindly trusting its output.
- Treating it as a thought partner, not an automation machine.