
Panel Discussion - AI in Automation: Accelerating Scripts and Execution | Testflix 2025

AI in automation is rapidly emerging as a powerful enabler for testers. From automation script generation utilities to simplified API testing and framework development, these capabilities promise to accelerate productivity. But as testers embrace assistants like Copilot and Cursor, big questions emerge: What does this mean for the future of open-source frameworks like Selenium and Playwright? How should testers balance the speed of AI-generated code with the need for reliability and maintainability? How do we measure productivity gains from AI-paired programming? And AI doesn't stop at coding.

Sauce AI for Insights - Product Demo

See Sauce AI for Insights in action and discover how you can eliminate hours of manual work, accelerate release cycles, and ship with confidence. This on-demand demo walks you through our suite of purpose-built AI Agents that transform complex test data and workflows into instant, actionable intelligence.

Playwright MCP: Turn Natural Language into Reliable Tests in Minutes | Vignesh Srinivasa Raghavan

Model Context Protocol (MCP) lets AI agents use real tools safely. In this talk, we’ll see how Playwright MCP bridges agents and a real browser by leveraging the accessibility tree (not screenshots) to navigate pages, locate elements, perform actions, and extract data—then export stable Playwright tests you can commit.
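A test exported this way is ordinary Playwright code built on role-based locators, since those map directly onto the accessibility tree the agent navigated. A minimal sketch of what such an export might look like (the URL, page, and accessible names here are hypothetical, not from the talk):

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical test an agent could export after exploring a login page.
// Locators use ARIA roles and accessible names from the accessibility
// tree, not screenshots or pixel coordinates, so they survive visual
// redesigns that keep the same semantics.
test('user can sign in', async ({ page }) => {
  await page.goto('https://example.com/login'); // hypothetical URL

  // Fill the form fields located by role + accessible name.
  await page.getByRole('textbox', { name: 'Email' }).fill('user@example.com');
  await page.getByRole('textbox', { name: 'Password' }).fill('s3cret');

  // Perform the action and assert on the resulting page state.
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

Because `getByRole` queries the same accessibility tree the agent used to explore the page, the committed test stays stable where CSS- or XPath-based selectors would break.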

Measuring the Impact of AI in QA and Automation | Jaydeep Chakrabarty | Testflix 2025

In this fireside chat with Jaydeep, we’ll dive into how AI is changing the way we measure success in both QA processes and live generative AI bots. On the QA side, we’ll look at cycle time reduction—the “time goalie” metric that shows how quickly we move from discovering a bug to fixing it. We’ll also talk about predictive quality accuracy, which shifts QA from being reactive to proactive by predicting which code changes are most likely to introduce bugs. And of course, we’ll touch on test creation velocity—how much faster teams are able to create meaningful automation with AI’s support.

Evaluating AI Tools: Practical Framework for Testers & Leaders | Ajay Balamurugadas | Testflix 2025

The AI ecosystem is exploding with tools that promise to accelerate delivery, improve quality, and transform the way we work. Yet for many teams, evaluating these tools is overwhelming - flashy demos and marketing claims rarely answer the real questions: Will this work in our context? Can it scale? Is it sustainable?

Vibe Coding: Emergence, Impact & Future of AI-Driven Development | Andrew Knight | Testflix 2025

In this session, Andrew will trace Vibe Coding's journey from emergence to current impact, exploring how it is prompting us to rethink development and testing. He'll examine today's tools, real-world use cases, and the cultural shifts teams need to embrace this AI-driven approach. Andrew will share hot takes on myths versus reality and deliver practical advice for getting started. This video is one of the sessions presented at Testflix, the world's leading virtual software testing conference.