
How a 'Safe' D365 Update Paralysed a Tier-1 Hospital

Every business running Dynamics 365 Finance and Operations (D365 F&O) relies on the promise of “Evergreen Updates”: seamless, continuous improvement with zero disruption. For a major hospital that was already stretched thin by rising costs and patient loads, the latest update was supposed to be a lifeline, delivering new financial automation features.

Continuous Quality Signals: Connecting Jira, Zephyr and BugSnag for Risk-Based Testing

Engineering teams want to understand the real health of their applications – not just what was planned or what was tested, but what is actually happening in production. The challenge is that these signals live in different systems, each optimized for a specific part of the delivery lifecycle. Test execution data, issue tracking, and production monitoring each describe a different aspect of system behavior. On their own, they answer narrow questions about validation, delivery, or stability.
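The idea of fusing those narrow signals into one risk view can be sketched in a few lines. This is an illustrative example only: the signal names, normalization caps, and weights below are hypothetical stand-ins, not the actual schema or scoring used by Jira, Zephyr, or BugSnag.

```python
# Hypothetical sketch: blending test, defect, and production signals into a
# per-module risk score for risk-based testing. Weights and caps are
# illustrative assumptions, not any vendor's formula.

def risk_score(test_pass_rate, open_defects, prod_errors_per_day,
               w_tests=0.5, w_defects=0.3, w_prod=0.2):
    """Higher score = higher risk; each signal is normalized to [0, 1]."""
    test_risk = 1.0 - test_pass_rate                    # failing tests raise risk
    defect_risk = min(open_defects / 10.0, 1.0)         # cap at 10 open defects
    prod_risk = min(prod_errors_per_day / 100.0, 1.0)   # cap at 100 errors/day
    return w_tests * test_risk + w_defects * defect_risk + w_prod * prod_risk

# A module with 90% of tests passing, 4 open defects, and 20 errors/day:
print(round(risk_score(0.9, 4, 20), 3))
```

Modules with the highest combined score would be the first candidates for additional test coverage, which is the essence of letting production signals steer validation effort.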

7 RAG Evaluation Tools You Must Know

RAG evaluation measures how effectively a system retrieves relevant context and uses it to generate grounded answers. These evaluations detect hallucinations, measure retrieval precision, and reveal where pipelines degrade after model updates or knowledge-base changes. Engineers rely on these tools to maintain output quality, prevent regressions, validate prompt and architecture choices, and ensure that production answers stay aligned with trusted sources.
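One of the metrics mentioned above, retrieval precision, is straightforward to compute once you have a labeled set of relevant passages. A minimal sketch, assuming example document IDs rather than output from any specific evaluation tool:

```python
# Illustrative sketch of precision@k for RAG retrieval evaluation.
# The document IDs below are made-up example data.

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved passages that are actually relevant."""
    top_k = retrieved[:k]
    hits = sum(1 for doc in top_k if doc in relevant)
    return hits / k

retrieved = ["doc_3", "doc_7", "doc_1", "doc_9"]   # ranked retriever output
relevant = {"doc_1", "doc_3"}                      # gold relevance labels
print(precision_at_k(retrieved, relevant, k=3))    # 2 of top 3 are relevant
```

Tracking this number across model or knowledge-base changes is one way to catch the pipeline degradation the tools in this roundup are designed to surface.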

Operationalizing Agentic AI with Hitachi iQ Studio and NVIDIA Nemotron 3

NVIDIA just announced Nemotron 3, a new family of open models, datasets, and libraries designed to support long-context reasoning and multi-step AI workflows. With the ability to work across enterprise ecosystems, this family of models empowers enterprises to build and deploy reliable multi-agent systems at scale, offering an important set of technologies at a pivotal moment in AI evolution.

What's the strategic role of Studio in the larger platform?

Studio is the foundation of the entire Katalon platform. Every product, from TrueTest to TestCloud, relies on Studio’s automation capabilities. Because all tests are ultimately Studio tests, it serves as the core engine enabling faster testing and powering execution, reporting, and CI/CD integration across the platform. — Alex Martins, VP of Strategy at Katalon

Katalon Product Roundup - November 2025

November brings expanded on-premise flexibility, deeper analytics customization, and smoother cross-platform automation. TestOps adds on-prem Jira and GitHub integrations along with custom chart creation for faster insights. Studio introduces new MCP Server tools that automate test object management end-to-end. TestCloud simplifies mobile app version handling with dynamic applications and now supports secure live testing on private environments.

Requirements Engineering: 9 Best Practices with Examples

If you work in a regulated industry like medical device design, automotive engineering, or aerospace, you understand that requirements are the bedrock of product development. Without clear, effective requirements, your team risks compliance failures, scope creep, and costly rework. Requirements engineering is the disciplined application of proven principles, methods, and tools to describe the behavior of a proposed system.

Ep 53 | Women Leaders in Tech: Cloudera's Mary Wells on Building Confidence and Community in Tech

Cloudera Chief Marketing Officer Mary Wells helped spearhead the launch of Cloudera’s Women Leaders in Technology initiative, which recently celebrated its one-year anniversary. To honor this milestone, Mary joins this special edition of The AI Forecast to reflect on the impact of this program so far. Mary Wells and host Paul Muller embark on an inspiring conversation on leadership, allyship, and navigating one of the most transformative moments in technology.

Reaching Autonomous Software Quality | From the Bear Cave

AI has changed how software is written, but testing and quality haven’t kept up at the same pace. In the first episode of our "From the Bear Cave" discussion series, SmartBear CEO Dan Faulkner and VP of Product, AI, Bridges Smith talk through the Levels of Autonomy in software development, why non-code-based testing must become more autonomous, and what it will take to close the gap between AI coding and ensuring trustworthy software.