

Jonathan Wall & Demetrios Brinkmann · Dec 19th, 2025
Everyone’s arguing about agents. Jonathan Wall says the real fight is over sandboxes and isolation, and explains why most “agent platforms” are doing it wrong.
# AI Agents
# Sandboxes
# Runloop.AI

Rosemary Nwosu-Ihueze · Dec 19th, 2025
MCP lets your agents connect to Slack, GitHub, your database, and whatever else you throw at them. Great for productivity. Terrible for security. When an agent can call any tool through any protocol, you've got a problem: who's actually making the request? What can they access? And when something breaks, or gets exploited, how do you even trace it back?

This talk covers what breaks when agents go multi-protocol: authentication that doesn't account for agent delegation, permission models designed for humans rather than bots, and audit trails that disappear when Agent A spawns Agent B to call Tool C. I'll walk through real attack scenarios: prompt injection leading to unauthorized API calls, credential leakage across protocol boundaries, and privilege escalation through tool chaining.

Then we'll dig into what actually works: identity verification at protocol boundaries, granular permissions that follow context, not just credentials, and audit systems built for non-human actors. You'll leave knowing how to implement MCP without turning your agent system into an attack surface, and what to build (or demand from vendors) to keep agent-to-tool communication secure.
# Agents in Production
# Prosus Group
# MCP Security


Simba Khadder & Demetrios Brinkmann · Dec 16th, 2025
Feature stores aren’t dead — they were just misunderstood.
Simba Khadder argues that the real bottleneck in agents isn't models but context, and makes the case that Redis is quietly turning into an AI data platform. Context engineering matters more than clever prompt hacks.
# Context Engineering
# Featureform
# Redis

Médéric Hurier · Dec 16th, 2025
The traditional centralized data platform, characterized by rigid data warehouses and complex ETL pipelines, creates technical bottlenecks that severely slow down the delivery of business insights, forcing decision-makers to wait for overburdened data engineering teams.

The open-source prototype Da2a proposes a radical new paradigm: a distributed, agentic ecosystem where specialized, autonomous agents (e.g., Marketing, E-commerce) manage their own domain data and collaborate via an Agent-to-Agent (A2A) protocol to answer complex, cross-domain queries. Instead of focusing on the engineering of data movement and storage, this approach is insight-focused: an orchestrator agent plans and delegates tasks, abstracting the underlying complexity. The result is greater scalability, extensibility, and alignment with high-level business logic, a critical evolution for MLOps engineers looking to build more flexible and responsive data foundations.
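The orchestrator-plans-and-delegates idea can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration of the pattern, not Da2a's actual protocol or API: domain agents answer only for their own data, and an orchestrator merges their partial answers to resolve a cross-domain query.

```python
from typing import Callable


# Each domain agent owns its data and answers only domain questions.
# The canned return values stand in for real domain lookups.
def marketing_agent(query: str) -> dict:
    return {"campaign_spend": 1200}


def ecommerce_agent(query: str) -> dict:
    return {"revenue": 5400}


AGENTS: dict[str, Callable[[str], dict]] = {
    "marketing": marketing_agent,
    "ecommerce": ecommerce_agent,
}


def orchestrator(query: str, plan: list[str]) -> dict:
    # The orchestrator decides which domains a query touches and
    # merges their partial answers, hiding where the data lives.
    result: dict = {}
    for domain in plan:
        result.update(AGENTS[domain](query))
    return result


print(orchestrator("ROI last quarter?", ["marketing", "ecommerce"]))
```

Adding a new domain means registering one more agent, which is where the claimed extensibility comes from: the orchestrator never learns about warehouses or pipelines, only about domains.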
# Generative AI Tools
# Artificial Intelligence
# Machine Learning
# Data Science
# AI Agent


Satish Bhambri & Demetrios Brinkmann · Dec 12th, 2025
Satish Bhambri is a Sr Data Scientist at Walmart Labs, working on large-scale recommendation systems and conversational AI, including RAG-powered GroceryBot agents, vector-search personalization, and transformer-based ad relevance models.
# AgenticRAG
# AI Engineer
# AI Agents

Tom Kaltofen · Dec 11th, 2025
Modern AI agents depend on vast amounts of context (data, features, and intermediate states) to make correct decisions. In practice, this context is often tied to specific datasets or infrastructure, leading to brittle pipelines and unpredictable behaviour when agents move from prototypes to production.

This talk introduces mloda, an open-source Python framework that makes data, feature, and context engineering shareable. By separating what you compute from how you compute it, mloda provides the missing abstraction layer for AI pipelines, allowing teams to build deterministic context layers that agents can rely on.

Attendees will learn how mloda's plugin-based architecture (minimal dependencies, BYOB design) enables clean separation of transformation logic from execution environments. We'll explore how built-in input/output validation and test-driven development help you build strong contexts. The session will demonstrate how mloda can generate production-ready data flows, with real-world examples showing deterministic context layers from laptop prototypes to cloud deployments.
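The "separate what you compute from how you compute it" idea can be illustrated with a toy sketch. To be clear, this is not mloda's real API; it is a generic Python illustration of the pattern the abstract describes, with invented names (`Feature`, `run_locally`, `run_validated`).

```python
from dataclasses import dataclass
from typing import Any, Callable


# "What": a declarative feature definition, independent of execution.
@dataclass(frozen=True)
class Feature:
    name: str
    compute: Callable[[dict], Any]  # pure transformation logic


avg_order = Feature(
    "avg_order_value",
    lambda row: row["total"] / row["orders"],
)


# "How": interchangeable execution environments run the same definition,
# so the transformation logic never changes between laptop and cloud.
def run_locally(feature: Feature, rows: list[dict]) -> list:
    return [feature.compute(r) for r in rows]


def run_validated(feature: Feature, rows: list[dict]) -> list:
    # Input validation lives in the execution layer, not the logic,
    # which is what makes the resulting context deterministic.
    for r in rows:
        assert r["orders"] > 0, "orders must be positive"
    return run_locally(feature, rows)


rows = [{"total": 100.0, "orders": 4}, {"total": 90.0, "orders": 3}]
print(run_validated(avg_order, rows))  # [25.0, 30.0]
```

Because the `Feature` carries no execution details, the same definition can be handed to any backend, which is the essence of the shareability claim.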
# Agents in Production
# Prosus Group
# Context Layers


Zack Reneau-Wedeen & Demetrios Brinkmann · Dec 10th, 2025
Sierra’s Zack Reneau-Wedeen claims we’re building AI all wrong and that “context engineering,” not bigger models, is where the real breakthroughs will come from. In this episode, he and Demetrios Brinkmann unpack why AI behaves more like a moody coworker than traditional software, why testing it with real-world chaos (noise, accents, abuse, even bad mics) matters, and how Sierra’s simulations and model “constellations” aim to fix the industry’s reliability problems. They even argue that decision trees are dead, replaced by goals, guardrails, and speculative-execution tricks that make voice AI actually usable. Plus: how Sierra trains grads to become product-engineering hybrids, and why obsessing over customers might be the only way AI agents stop disappointing everyone.
# AI Systems
# Agent Simulations
# AI Voice Agent

Kopal Garg · Dec 10th, 2025
Everyone obsesses over models, but NVIDIA’s stack makes it obvious: the real power move is owning everything around the model. NeMo trains it, RAPIDS cleans it, TensorRT speeds it up, Triton serves it, Operators manage it — and the hardware seals the deal.
It’s less a toolkit and more a gravity well for your entire GenAI pipeline. Once you’re in, good luck escaping.
# Generative AI
# AI Frameworks
# NVIDIA

Sam Partee · Dec 10th, 2025
Building agentic tools for production requires far more than a simple chatbot interface. The real value comes from agents that can reliably take action at scale, integrate with core systems, and execute tasks through secure, controlled workflows.
Yet most agentic tools never make it to production. Teams run into issues like strict security requirements, infrastructure complexity, latency constraints, high operational costs, and inconsistent behavior. To understand what it takes to ship production-grade agents, let's break down the key requirements one by one.
# Agents in Production
# Prosus Group
# Agentic Tools

Artem Yushkovskiy · Dec 10th, 2025
Stop thinking of `POST /predict` when someone says "serving AI". At Delivery Hero, we've rethought Gen AI infrastructure from the ground up, with async message queues, actor-model microservices, and zero-to-infinity autoscaling: no orchestrators, no waste, no surprising GPU bills.

Here's the paradigm shift: treat every AI step as an independent async actor (we call them "asyas"). Data ingestion? One asya. Prompt construction? Another. Smart model routing? Another. Pre-processing, analysis, backend logic, even agents: dozens of specialized actors coexist on the same GPU cluster and talk to each other, each scaling from zero to whatever capacity you need.

The result? Dramatically lower GPU costs, true composability, and a maintainable system that actually matches how AI workloads behave. We'll show the evolution of our project, from DAGs to distributed stateless async actors, and demonstrate how naturally this architecture serves real-world production needs. The framework is open-source as `Asya`. If time permits, we'll also discuss bridging these async pipelines with synchronous MCP servers when real-time responses are required. Come see why async isn't an optimization; it's a paradigm shift for AI infrastructure.
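The actor-per-step pattern the abstract describes can be sketched with plain `asyncio` queues. This is a minimal illustration of the idea, not the `Asya` framework's actual API: each actor consumes from its inbox, does one step (here, an ingestion step and a prompt-construction step with invented logic), and forwards the result, so steps scale and fail independently instead of living inside one DAG.

```python
import asyncio


async def actor(inbox: asyncio.Queue, outbox, handle):
    """Generic actor: pull a message, process it, forward the result."""
    while True:
        msg = await inbox.get()
        if msg is None:                 # shutdown signal
            if outbox is not None:
                await outbox.put(None)  # propagate shutdown downstream
            break
        result = handle(msg)
        if outbox is not None:
            await outbox.put(result)


async def main():
    q_ingest, q_prompt, q_out = (asyncio.Queue(), asyncio.Queue(),
                                 asyncio.Queue())
    tasks = [
        # Ingestion actor: normalize raw input.
        asyncio.create_task(actor(q_ingest, q_prompt,
                                  lambda m: m.strip().lower())),
        # Prompt-construction actor: wrap normalized text in a prompt.
        asyncio.create_task(actor(q_prompt, q_out,
                                  lambda m: f"summarize: {m}")),
    ]
    await q_ingest.put("  Order #42 Delayed  ")
    await q_ingest.put(None)            # shut the pipeline down
    await asyncio.gather(*tasks)

    results = []
    while not q_out.empty():
        m = q_out.get_nowait()
        if m is not None:
            results.append(m)
    return results


print(asyncio.run(main()))  # ['summarize: order #42 delayed']
```

Because the actors only share queues, each step can be deployed, scaled, and restarted on its own; in a real system the in-process queues would be replaced by a message broker, which is what lets capacity go from zero to whatever a step needs.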
# Agents in Production
# Prosus Group
# AI Drift

