Chiara Caratelli, Alex Salazar & Demetrios Brinkmann · Dec 23rd, 2025
Agents sound smart until millions of users show up. A real talk on tools, UX, and why autonomy is overrated.
# Prompt Engineering
# AI Agents
# AI Engineer
# AI agents in production
# AI agent use case
# system design

Vishakha Gupta · Dec 23rd, 2025
As AI applications move beyond rows and columns into images, video, embeddings, and graphs, traditional query languages like SQL and Cypher start to crack. This post explains why ApertureDB chose to design a JSON-based query language from scratch—one built for multimodal search, data processing, and scale. By aligning with how modern AI systems already communicate (JSON, agents, workflows, and natural language), ApertureDB avoids brittle joins, performance tradeoffs, and DIY pipelines, while still offering SQL and SPARQL wrappers for familiarity. The result is a layered, future-proof way to query, process, and explore multimodal data without forcing old abstractions onto new problems.
# Multimodal/Generative AI
# Usability and Debugging
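To make the "JSON-native" idea concrete, here is a rough sketch of what such a query might look like. The command and field names (`FindEntity`, `FindImage`, `constraints`, `results`, `_ref`) are illustrative assumptions for this example, not a verbatim copy of ApertureDB's schema; the point is that the query is plain JSON that agents and workflows can emit and parse without a custom parser.

```python
import json

# Illustrative shape of a JSON-based multimodal query. Command names
# and fields here are assumptions, not ApertureDB's exact API.
query = [
    {
        "FindEntity": {
            "_ref": 1,                          # handle for chaining commands
            "with_class": "Product",
            "constraints": {"category": ["==", "shoes"]},
        }
    },
    {
        "FindImage": {
            "is_connected_to": {"ref": 1},      # linkage via refs, not SQL joins
            "results": {"list": ["_uniqueid"], "limit": 10},
        }
    },
]

# Plain JSON round-trips through any agent, workflow engine, or LLM
# tool call unchanged.
wire = json.dumps(query)
decoded = json.loads(wire)
print(decoded == query)
```

Because the query language is already the wire format that agents speak, layering SQL or SPARQL wrappers on top becomes a convenience rather than a requirement.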


Jonathan Wall & Demetrios Brinkmann · Dec 19th, 2025
Everyone’s arguing about agents. Jonathan Wall says the real fight is about sandboxes, isolation, and why most “agent platforms” are doing it wrong.
# AI Agents
# Sandboxes
# Runloop.AI

Rosemary Nwosu-Ihueze · Dec 19th, 2025
MCP lets your agents connect to Slack, GitHub, your database, and whatever else you throw at it. Great for productivity. Terrible for security. When an agent can call any tool through any protocol, you've got a problem: who's actually making the request? What can they access? And when something breaks—or gets exploited—how do you even trace it back? This talk covers what breaks when agents go multi-protocol: authentication that doesn't account for agent delegation, permission models designed for humans not bots, and audit trails that disappear when Agent A spawns Agent B to call Tool C. I'll walk through real attack scenarios—prompt injection leading to unauthorized API calls, credential leakage across protocol boundaries, and privilege escalation through tool chaining. Then we'll dig into what actually works: identity verification at protocol boundaries, granular permissions that follow context, not just credentials, and audit systems built for non-human actors. You'll leave knowing how to implement MCP without turning your agent system into an attack surface, and what to build (or demand from vendors) to keep agent-to-tool communication secure.
# Agents in Production
# Prosus Group
# MCP Security
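The "who's actually making the request?" problem in the abstract above can be sketched as a permission check that follows the whole delegation chain, not just the last caller. Everything here is a hypothetical illustration — none of these names come from the MCP specification — but it shows the shape of identity-aware authorization plus an audit trail built for non-human actors.

```python
import datetime

# Hypothetical per-agent tool scopes. In a real system these would come
# from an identity provider, not a module-level dict.
PERMISSIONS = {
    "agent-a": {"slack.read", "github.read"},
    "agent-b": {"github.read"},
}
AUDIT_LOG = []

def authorize(agent_id, tool, delegation_chain=()):
    """Allow a tool call only if every agent in the chain holds the scope.

    This prevents privilege escalation where Agent A spawns Agent B to
    reach a tool A could not call directly (or vice versa).
    """
    chain = (*delegation_chain, agent_id)
    allowed = all(tool in PERMISSIONS.get(a, set()) for a in chain)
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "chain": chain,          # who asked, through whom
        "tool": tool,
        "allowed": allowed,
    })
    return allowed

# Agent A may read Slack directly...
print(authorize("agent-a", "slack.read"))                 # True
# ...but when Agent A delegates to Agent B, the full chain is checked:
print(authorize("agent-b", "slack.read", ("agent-a",)))   # False
```

The key design choice is that the audit record stores the chain, so when something breaks or gets exploited, the trail does not disappear at the agent-to-agent boundary.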


Simba Khadder & Demetrios Brinkmann · Dec 16th, 2025
Feature stores aren’t dead; they were just misunderstood.
Simba Khadder argues that the real bottleneck in agents isn’t models but context, and that Redis is quietly turning into an AI data platform. Context engineering matters more than clever prompt hacks.
# Context Engineering
# Featureform
# Redis

Médéric Hurier · Dec 16th, 2025
The traditional centralized data platform, characterized by rigid data warehouses and complex ETL pipelines, creates technical bottlenecks that severely slow the delivery of business insights, forcing decision-makers to wait on overburdened data engineering teams. The open-source prototype Da2a proposes a radically different paradigm: a distributed, agentic ecosystem in which specialized, autonomous agents (e.g., Marketing, E-commerce) manage their own domain data and collaborate via an Agent-to-Agent (A2A) protocol to answer complex, cross-domain queries. Instead of centering on the engineering of data movement and storage, this approach is insight-focused: an orchestrator agent plans and delegates tasks, abstracting the underlying complexity and enabling greater scalability, extensibility, and alignment with high-level business logic. For MLOps engineers looking to build more flexible and responsive data foundations, this is a critical evolution.
# Generative AI Tools
# Artificial Intelligence
# Machine Learning
# Data Science
# AI Agent
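The orchestration pattern described above can be sketched in a few lines: domain agents answer questions about the data they own, and an orchestrator fans a cross-domain query out to them. The class names and message shapes here are hypothetical illustrations; the actual Da2a prototype communicates over an A2A protocol rather than in-process method calls.

```python
# Illustrative sketch of orchestrator-to-domain-agent delegation.
class DomainAgent:
    def __init__(self, domain, data):
        self.domain = domain
        self.data = data  # each agent owns its data; no one queries it directly

    def handle(self, metric):
        return sum(row[metric] for row in self.data)

class Orchestrator:
    def __init__(self, agents):
        self.agents = {a.domain: a for a in agents}

    def ask(self, plan):
        # plan: list of (domain, metric) tasks, delegated to the owning agent
        return {f"{d}.{m}": self.agents[d].handle(m) for d, m in plan}

marketing = DomainAgent("marketing", [{"spend": 120}, {"spend": 80}])
ecommerce = DomainAgent("ecommerce", [{"revenue": 500}, {"revenue": 300}])
orch = Orchestrator([marketing, ecommerce])

# A cross-domain question becomes a plan delegated to two agents:
print(orch.ask([("marketing", "spend"), ("ecommerce", "revenue")]))
# {'marketing.spend': 200, 'ecommerce.revenue': 800}
```

The insight-focused part is the `plan`: the caller states what it wants to know, and the orchestrator decides which agents to involve, instead of someone hand-building an ETL pipeline across both domains.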


Satish Bhambri & Demetrios Brinkmann · Dec 12th, 2025
Satish Bhambri is a Sr Data Scientist at Walmart Labs, working on large-scale recommendation systems and conversational AI, including RAG-powered GroceryBot agents, vector-search personalization, and transformer-based ad relevance models.
# AgenticRAG
# AI Engineer
# AI Agents

Tom Kaltofen · Dec 11th, 2025
Modern AI agents depend on vast amounts of context (data, features, and intermediate states) to make correct decisions. In practice, this context is often tied to specific datasets or infrastructure, leading to brittle pipelines and unpredictable behaviour when agents move from prototypes to production. This talk introduces mloda, an open-source Python framework that makes data, feature, and context engineering shareable. By separating what you compute from how you compute it, mloda provides the missing abstraction layer for AI pipelines, allowing teams to build deterministic context layers that agents can rely on. Attendees will learn how mloda's plugin-based architecture (minimal dependencies, BYOB design) enables clean separation of transformation logic from execution environments. We'll explore how built-in input/output validation and test-driven development help you build robust contexts. The session will demonstrate how mloda can generate production-ready data flows, with real-world examples showing deterministic context layers from laptop prototypes to cloud deployments.
# Agents in Production
# Prosus Group
# Context Layers
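The "what you compute vs. how you compute it" separation the talk centers on can be sketched as follows. This is not mloda's actual API — the `FeatureSpec`/runner split below is a hypothetical illustration of the abstraction, under the assumption that transformation logic stays pure while execution environments vary.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class FeatureSpec:
    name: str
    transform: Callable  # WHAT: pure logic, no I/O, no environment details

def run_locally(spec: FeatureSpec, rows):
    # HOW, variant 1: plain in-process execution for laptop prototypes.
    return [spec.transform(r) for r in rows]

def run_validated(spec: FeatureSpec, rows):
    # HOW, variant 2: the same spec wrapped with input validation, as a
    # stand-in for a production engine with checks and lineage.
    assert all(isinstance(r, dict) for r in rows), "rows must be dicts"
    return [spec.transform(r) for r in rows]

basket_size = FeatureSpec("basket_size", lambda r: len(r["items"]))
rows = [{"items": ["a", "b"]}, {"items": ["c"]}]

# The spec is identical in both environments, so the context an agent
# sees is deterministic regardless of where the computation ran:
print(run_locally(basket_size, rows) == run_validated(basket_size, rows))
```

Because the spec carries no execution details, the same context layer can be shared across teams and re-run anywhere, which is what makes the agent's inputs deterministic.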


Zack Reneau-Wedeen & Demetrios Brinkmann · Dec 10th, 2025
Sierra’s Zack Reneau-Wedeen claims we’re building AI all wrong and that “context engineering,” not bigger models, is where the real breakthroughs will come from. In this episode, he and Demetrios Brinkmann unpack why AI behaves more like a moody coworker than traditional software, why testing it with real-world chaos (noise, accents, abuse, even bad mics) matters, and how Sierra’s simulations and model “constellations” aim to fix the industry’s reliability problems. They even argue that decision trees are dead, replaced by goals, guardrails, and speculative-execution tricks that make voice AI actually usable. Plus: how Sierra trains grads to become product-engineering hybrids, and why obsessing over customers might be the only way AI agents stop disappointing everyone.
# AI Systems
# Agent Simulations
# AI Voice Agent

Kopal Garg · Dec 10th, 2025
Everyone obsesses over models, but NVIDIA’s stack makes it obvious: the real power move is owning everything around the model. NeMo trains it, RAPIDS cleans it, TensorRT speeds it up, Triton serves it, Operators manage it — and the hardware seals the deal.
It’s less a toolkit and more a gravity well for your entire GenAI pipeline. Once you’re in, good luck escaping.
# Generative AI
# AI Frameworks
# NVIDIA

