
New Emerging Trends in the Quality Engineering Space with AI | Vanya Seth | Testflix 2025

How to Test RAG Pipelines for Reliable AI | Aparana Gupta | Testflix 2025 | #testingcommunity

Retrieval-Augmented Generation can sound convincing while still being wrong. This session focuses on moving beyond surface-level metrics and turning stochastic AI outputs into evidence-backed, verifiable results. It explores how to test the entire RAG pipeline, from ingestion and indexing to retrieval, grounding, and answerability, ensuring every response is traceable to the right source, policy, and user context.
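The grounding check the abstract describes can be sketched in miniature: verify that every sentence in a generated answer is supported by at least one retrieved passage. The lexical-overlap heuristic, function name, and threshold below are illustrative stand-ins, not the talk's actual method; production pipelines typically use entailment models or explicit citation matching instead.

```python
# Minimal grounding check: flag answer sentences that share no
# content with any retrieved passage. Crude lexical overlap is used
# here purely as an illustration of the idea.

def grounded_sentences(answer: str, passages: list[str],
                       min_overlap: int = 3) -> dict[str, bool]:
    """Map each answer sentence to whether it shares at least
    `min_overlap` words with some retrieved passage."""
    passage_words = [set(p.lower().split()) for p in passages]
    results = {}
    for sentence in filter(None, (s.strip() for s in answer.split("."))):
        words = set(sentence.lower().split())
        results[sentence] = any(len(words & pw) >= min_overlap
                                for pw in passage_words)
    return results

passages = ["The refund policy allows returns within 30 days of purchase."]
answer = ("Returns are allowed within 30 days of purchase. "
          "Shipping is always free.")
checks = grounded_sentences(answer, passages)
```

Here the first sentence is marked grounded while the unsupported claim about shipping is flagged, which is the kind of evidence-backed verdict the session argues for.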

Private AI: Gains, Gaps and Gotchas | Samar Ranjan | Testflix2025 | #testingcommunity

Private AI is emerging as a strong alternative for teams that need the power of AI without compromising data privacy or compliance. This session explores how local LLMs can support software development and test automation when cloud-based tools are not an option. Using setups like Ollama with models such as Qwen 2.5 and integrations like the Continue plugin, the talk demonstrates how secure, on-device AI can accelerate tasks like BDD creation, automation scripting, and performance testing.
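As a rough sketch of the on-device setup the abstract mentions, the snippet below targets Ollama's default local HTTP endpoint (`localhost:11434`, `/api/generate`) with a non-streaming request. The model tag `qwen2.5`, the prompt wording, and the helper names are assumptions for illustration; the network call itself requires a running Ollama server.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str, model: str = "qwen2.5") -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server.
    Nothing leaves the machine: the model runs entirely on-device."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate_bdd_scenario(feature: str) -> str:
    """Ask the local model to draft a Gherkin scenario for a feature.
    Requires a running Ollama server with the model pulled."""
    req = build_request(f"Write one Gherkin scenario testing: {feature}")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is local, prompts containing proprietary test data or internal specs never reach a third-party cloud, which is the compliance gain the talk highlights.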

Breaking Boundaries: A Tester's Guide to Freelance and Remote Success | Manish Saini | Testflix2025

Freelancing is more than a side hustle. It can be a launchpad to global careers, higher earning potential, and exposure to diverse teams and practices. This session shows how testers can start with small freelance gigs to build experience and credibility, then scale into larger, long-term engagements by specializing in areas like automation, performance, or QA consulting.

QA and Software Testing Job Landscape in the USA | Júlio de Lima | Testflix2025 | #testingcommunity

This Atomic Talk is based on an analysis of more than 500 QA and software testing job openings across the United States. The session walks through the research process, the data collected, and the key trends revealed through clear graphs and insights from the study. By the end of the talk, attendees will have a strong understanding of the most in-demand testing skills, tools, programming languages, and automation technologies currently shaping the QA job market in the U.S., helping them make more informed career and upskilling decisions.
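The tallying step behind such a study can be approximated with a few lines: count how often each skill keyword appears across posting texts. The keyword list and sample postings below are illustrative, not the talk's actual dataset or methodology.

```python
from collections import Counter

# Illustrative skill keywords; a real study would use a much larger,
# curated list and more careful matching than substring search.
SKILLS = ["selenium", "playwright", "python", "java", "api testing"]

def skill_frequencies(postings: list[str]) -> Counter:
    """Count, per skill, how many postings mention it at least once."""
    counts = Counter()
    for text in postings:
        lowered = text.lower()
        for skill in SKILLS:
            if skill in lowered:
                counts[skill] += 1
    return counts

counts = skill_frequencies([
    "Selenium and Java required",
    "Playwright with Python",
    "Python API testing engineer",
])
```

Sorting the resulting counter (`counts.most_common()`) yields exactly the kind of in-demand-skills ranking the session presents.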

Bias in, Bias Out: Knowing various Biases in Testing AI | Maheshwaran VK | Testflix 2025

Just like humans, AI systems are shaped by how they are brought up. In the case of Large Language Models, this upbringing happens through data collection, training, and productization. At each of these stages, bias can quietly enter the system through the data we select, the way models are trained, or the assumptions embedded into the final product. These biases, whether intentional or accidental, influence how models think, respond, and interact with users in the real world.
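One cheap signal of the data-collection bias described above is representation skew: how unevenly groups are represented in a training sample. The function below is an illustrative sketch, not a method from the talk; real bias audits go far beyond raw group shares.

```python
from collections import Counter

def representation_skew(samples: list[str]) -> dict[str, float]:
    """Share of each group label in a dataset. Large gaps between
    shares are one simple warning sign of sampling bias in the data
    an AI system is trained on."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

shares = representation_skew(["group_a", "group_a", "group_a", "group_b"])
```

A 75/25 split like this one would prompt a tester to ask whether the imbalance reflects reality or an artifact of how the data was gathered.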

Effective Public Speaking | Johanna Rothman | Testflix2025 | #testingcommunity

As AI becomes more capable, many managers assume that knowledge workers can be easily replaced by machines. Yet innovation still comes from people learning, collaborating, and sharing ideas. Rather than worrying about replacement, knowledge workers can actively demonstrate their value by developing strong public speaking skills.

Building Quality in LLM-Powered Applications | Craig Risi | Testflix2025 | #testingcommunity

As organizations rapidly adopt Large Language Models, many discover that building reliable and trustworthy AI systems is far more complex than traditional software development. LLMs are non-deterministic, context-sensitive, and prone to issues like bias, hallucinations, and prompt injection, making quality assurance a deeper challenge than simple testing.
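Because outputs vary run to run, one common response is property-style testing: instead of asserting a single exact output, sample the model several times and require an invariant to hold on every sample. The sketch below uses a toy stand-in for the model; the helper names and the 30-day-refund invariant are illustrative assumptions, not an API from the talk.

```python
import random

def satisfies_invariant(model, prompt: str, invariant, runs: int = 5) -> bool:
    """Sample the model `runs` times and require the invariant
    (a predicate on the response text) to hold on every sample."""
    return all(invariant(model(prompt)) for _ in range(runs))

def toy_model(prompt: str) -> str:
    """Stand-in for an LLM call: wording varies between runs,
    but the factual content should not."""
    return random.choice([
        "Refunds are accepted within 30 days.",
        "You may return items within 30 days for a refund.",
    ])

ok = satisfies_invariant(toy_model, "What is the refund policy?",
                         lambda ans: "30 days" in ans)
```

The same harness extends naturally to other invariants, such as "never reveals the system prompt" for prompt-injection checks.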

Testing Agentic AI | Robert Sabourin | Testflix2025 | #testingcommunity

This talk explores the challenges of testing agentic AI systems—AI that autonomously reacts to events and initiates processes. Drawing on decades of experience, Robert Sabourin emphasizes that testing begins and ends with risk. A three-dimensional model (business impact, technical risk, autonomy) guides evaluation. Testers generate ideas using a broad taxonomy, from capabilities and failure modes to creative and adversarial approaches. Continuous testing and monitoring ensure findings inform business decisions, emphasizing learning over correctness.
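The three dimensions of the model above can be turned into a simple prioritization sketch. The 1-5 rating scale, the multiplicative combination, and the example behaviors are illustrative choices, not Sabourin's actual scoring scheme.

```python
def risk_score(business_impact: int, technical_risk: int, autonomy: int) -> int:
    """Combine the three risk dimensions (each rated 1-5)
    multiplicatively, so an agent behavior that is high on all three
    dominates the testing backlog."""
    for value in (business_impact, technical_risk, autonomy):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return business_impact * technical_risk * autonomy

def prioritize(behaviors: dict[str, tuple[int, int, int]]) -> list[str]:
    """Order candidate agent behaviors by risk score, highest first."""
    return sorted(behaviors, key=lambda b: risk_score(*behaviors[b]),
                  reverse=True)

ranked = prioritize({
    "auto-refund": (5, 4, 5),      # high impact, high autonomy
    "log-summarize": (2, 2, 1),    # low stakes, human in the loop
})
```

Ranking test ideas this way keeps effort focused where autonomy and business impact intersect, which is the talk's core risk-first message.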

Where AI Goes Wrong - The Blind Spots Testers See | Rahul Parwal | Testflix2025 | #testingcommunity

AI promises speed, but testers are often the first to notice where it quietly breaks down. Beneath the impressive outputs lie hidden issues like hallucinations, false confidence, and blind spots that can easily go unnoticed yet cause real damage if left unchecked. This atomic talk explores the subtle ways AI can fail, why speed without reliability is risky, and how testers play a critical role in supervising and strengthening AI systems. It highlights practical strategies for working alongside AI to make its outputs more trustworthy, reliable, and genuinely useful.