
Resume tokens and last-event IDs for LLM streaming: How they work & what they cost to build

When an AI response reaches token 150 and the connection drops, most implementations have one answer: start over. The user re-prompts, you pay for the same tokens twice, and the experience breaks. Resume tokens and last-event IDs are the mechanism that prevents this. They make streams addressable – every message gets an identifier, clients track their position, and reconnections pick up from exactly where they left off. The concept is straightforward.

The Future of Data & AI is Anywhere Cloud! #Cloudera #AI #Tech #Shorts

Experience a true anywhere cloud with the only data and AI platform that delivers a complete cloud experience regardless of your location. With unified security and governance, you can securely access 100% of your data across both on-premises and cloud environments.

WSO2 AI Guardrails: PII Masking, Prompt Injection & Safety

Generative AI offers incredible potential, but it comes with real risks like data leakage and prompt attacks. In this video, we demonstrate how WSO2 AI Guardrails act as an intelligent filter to secure your AI integrations and ensure compliance. We walk through the configuration of four critical guardrails that inspect both incoming requests and outgoing responses, helping you move from risky experiments to safe, reliable production services.

What Is an Agentic Semantic Layer, and Why Does It Matter?

AI can now generate SQL, build dashboards, and answer questions in plain language. But generating queries isn’t the same as understanding a business. The model might not know which revenue definition finance approves, how your fiscal calendar works, or which fields require restricted access. As AI agents become the front door to analytics, the real challenge isn’t query generation; it’s semantic grounding. That’s where the Agentic Semantic Layer becomes essential.

Best AI test automation tools for fast, high-quality releases

The promise of test automation was simple: automate repetitive testing tasks, catch bugs faster, and ship quality software at scale. Yet for most development teams, that promise remains unfulfilled. Traditional test automation frameworks demand specialized coding skills, require constant maintenance when applications change, and create bottlenecks that slow down release cycles rather than accelerate them.

Leveraging the MCP Registry in Kong Konnect for Dynamic Tool Discovery

As enterprises start deploying AI agents into real systems, a new architectural challenge is emerging. Agents need a reliable way to discover tools, services, and capabilities dynamically, instead of relying on hardcoded integrations. This is where the Model Context Protocol (MCP) ecosystem is rapidly evolving. MCP servers expose tools and capabilities that AI agents can use. However, once organizations begin deploying multiple MCP servers across environments, the question becomes clear: how do agents find the right server and the right tools at runtime?
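The discovery step the paragraph refers to is concrete in MCP: the protocol is JSON-RPC 2.0, and an agent asks a server what it can do via the `tools/list` method rather than shipping with hardcoded integrations. A minimal sketch, assuming an in-memory response (the tool name `search_orders` and its schema are invented for illustration; the request/response shape follows the MCP specification):

```python
import json

def make_tools_list_request(request_id: int) -> str:
    """Build the JSON-RPC 2.0 request an MCP client sends to enumerate tools."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id, "method": "tools/list"})

# A response of the shape an MCP server returns; the tool itself is hypothetical.
response = json.loads("""{
  "jsonrpc": "2.0", "id": 1,
  "result": {"tools": [
    {"name": "search_orders",
     "description": "Query the order system",
     "inputSchema": {"type": "object",
                     "properties": {"q": {"type": "string"}}}}
  ]}
}""")

# The agent builds its capability map from the response instead of config files.
tool_names = [tool["name"] for tool in response["result"]["tools"]]
print(tool_names)  # ['search_orders']
```

A registry layer such as the one discussed for Kong Konnect sits above this: instead of the agent knowing each server's address, it asks the registry which servers exist, then runs this discovery call against each.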

Enterprise AI Infrastructure Security Series, Part 3: Configuration Governance with Administrator Vaults

Securing ClearML for the Enterprise, Part 3: Configuration Governance with Administrator Vaults. In this video we walk through ClearML's vault system: how personal vaults and administrator vaults work, and how administrator vaults let you enforce platform-level policies on storage locations, container images, and credentials across your teams and service accounts.

AI Coding Agents Have a UX Problem Nobody Wants to Talk About

The pitch was simple: let AI write your code so you can focus on the hard problems. Three years into the AI coding revolution, developers are indeed focused on hard problems, just not the ones anyone expected. Instead of designing systems and solving business logic, engineers in 2026 spend a startling amount of their day managing the AI itself. Should you use Fast Mode or Deep Thinking? Haiku or Opus? Cursor or Claude Code or Windsurf? Should you write a SKILL.md file or a custom system prompt?

Why Your AI Pilot Won't Make It to Production (And What to Do About It)

Most AI pilots fail to reach production not because the models don’t work, but because enterprises struggle with data governance. While pilot-phase AI projects demonstrate impressive results in controlled environments, they hit governance walls when moving to enterprise-scale deployments. This post examines why AI initiatives stall before production and provides a governance-focused approach for breaking the cycle.