Systems | Development | Analytics | API | Testing

Where AI Goes Wrong - The Blind Spots Testers See | Rahul Parwal | Testflix2025 | #testingcommunity

AI promises speed, but testers are often the first to notice where it quietly breaks down. Beneath the impressive outputs lie hidden issues like hallucinations, false confidence, and blind spots that can easily go unnoticed yet cause real damage if left unchecked. This atomic talk explores the subtle ways AI can fail, why speed without reliability is risky, and how testers play a critical role in supervising and strengthening AI systems. It highlights practical strategies for working alongside AI to make its outputs more trustworthy, reliable, and genuinely useful.

Resilience Testing of a Tester | Ashwini Lalit | Testflix2025 | #testingcommunity

Testers are great at finding flaws in systems. But what happens when the system under test is the tester themselves? In today’s world of constant change, rising stress, and growing uncertainty, resilience has become just as critical as technical skill. From handling pushback and tight timelines to navigating burnout and self-doubt, testers face pressures that often go unseen.

Testing Business Logic with Ease | Alex Schwartz | Testflix2025 | #testingcommunity

Business logic sits at the core of every system, yet it often becomes the hardest part to change. Rules get buried in complex code, feedback arrives too late, and even small updates turn into slow, risky efforts. This session explores why teams struggle with something that should be simple and how to bring clarity back into the process.

Breaking Your Own Bots | Robin Gupta | Testflix 2025 | #testingcommunity

As AI agents take on critical roles in testing and automation, their weaknesses can become hidden risks waiting to surface. This session explores how applying red teaming techniques, borrowed from cybersecurity, can help uncover vulnerabilities in AI agents before they cause real-world failures.

Agentic QA Workflow | Krishnamoorthy Gurramkonda | Testflix 2025 | #testingcommunity

Agentic code generation has dramatically accelerated development, but QA often remains slowed by coordination gaps. Manual planning, delayed handoffs, and fragmented reviews continue to create friction, while AI agents operate in isolation without orchestration or governance. This session explores how to compose these siloed agents into a unified, AI-powered STLC where workflows are dependency-aware, auditable, and automatically triggered as soon as prerequisites are met.

Beyond Numbers, Metrics that matter in AI Age | Brijesh Deb | Testflix 2025 | #testingcommunity

AI has transformed how software is built and tested, yet many teams still rely on traditional metrics like pass rates, coverage, and defect counts. While these numbers look good on dashboards, they often fail to answer the most important question in the AI era. Can we actually trust what the system is doing?

Before Building AI we should First Understand Natural Intelligence | Andrew Brown | Testflix 2025

Before building artificial intelligence, it’s worth asking whether we truly understand natural intelligence. Just as early pioneers of flight studied the principles of aerodynamics and observed how birds fly, this session argues that progress in AI requires a deeper understanding of human intelligence and the knowledge that already exists across related disciplines.

Bias in, Bias Out: Knowing various Biases in Testing AI | Maheshwaran VK | Testflix 2025 |

Just like humans, AI systems are shaped by how they are brought up. In the case of Large Language Models, this upbringing happens through data collection, training, and productization. At each of these stages, bias can quietly enter the system through the data we select, the way models are trained, or the assumptions embedded into the final product. These biases, whether intentional or accidental, influence how models think, respond, and interact with users in the real world.

From Copilot to Co-Tester: Guardrails for AI-Written Tests | Dimpy Adhikary | Testflix 2025 |

Generative AI can produce tests instantly, but speed alone does not guarantee quality or safety. Without proper validation, AI-written tests can become brittle, redundant, or misleading, creating a false sense of coverage. This session looks at the risks of relying on AI-generated tests without the right controls in place.

Leading and Managing in Dysfunctional Organisations | Alan Richardson | Testflix2025

Leadership today is facing a serious gap. Many people step into leadership and management roles without a clear understanding of how to lead, manage, or genuinely support their teams. While the fundamentals of leadership are simple, poor leadership makes it necessary to revisit the basics. Drawing from experience as a consultant, manager, individual contributor, and leader, this session focuses on what truly works and what consistently fails.

Defining Enough: Testing in the GenAI Era | SatParkash Maurya | Testflix 2025 | #testingcommunity

In machine learning, an 85% accurate model is often considered a success because we accept that data is messy, the real world is unpredictable, and chasing perfection is rarely worth the cost. However, in software testing, especially in the GenAI era, the question of “Can we test 100%?” still comes up. With AI systems producing probabilistic outputs where the same input can lead to different results, absolute coverage is unrealistic. Confidence scores already tell us that uncertainty is part of the system, and testing needs to acknowledge that reality.

Thinking Ahead: SDET Career Progression | David Ingraham | Testflix2025 | #testingcommunity

The role of the Software Development Engineer in Test has evolved far beyond just writing tests. Today, SDETs operate at the intersection of quality, development, product, and automation, creating opportunities that extend well beyond traditional testing roles. This session explores the core responsibilities of an SDET and how these skills form a strong foundation for multiple career paths.