MLOps Community

Agents in Production - Prosus x MLOps
31 items

Demetrios Brinkmann · May 13th, 2026
MLOps Community 2.0
MLOps Community is joining the Linux Foundation as the official user group of the Agentic AI Foundation. The community continues, with more support behind the events, newsletter, podcast, and practitioner conversations.
# MLOps Community
# AAIF
# Linux Foundation
Rafael Borger, Daniel Wolbert & Demetrios Brinkmann · May 12th, 2026
Rafael (Head of Innovation, iFood) and Daniel (Data and AI Manager, iFood) pull back the curtain on ILO-Agent — iFood's conversational AI ordering system built for 200 million users across Latin America. Recorded live at AI House Amsterdam, this conversation goes deep on the engineering and product decisions behind building recommendation systems, agentic AI, and why the speed of your AI's response might actually be destroying user trust.
# Conversational AI
# iFood
# AI Agents
# Prosus Group
Nicolás Alejandro Bogliolo & Demetrios Brinkmann · May 11th, 2026
Before MCP was a standard and before LangChain was widely adopted, Nicolás Bogliolo's team had already shipped their own orchestration layer and tool protocol in production. This conversation is a rare look at what it takes to build an agentic system that actually books trips, runs on WhatsApp, and keeps adding capabilities without falling over.
# Agentic AI
# MCP
# AI Agents
It’s a feature of the architecture

Hallucination in LLMs is not a data quality problem. It is not a training problem. It is not a problem you can solve with more [RLHF](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback), better filtering, or a larger context window. **It is a structural property of what these systems are optimized to do.**

I have held this position for months, and the reaction is predictable: researchers working on retrieval augmentation, fine-tuning pipelines, and alignment techniques would prefer a more optimistic framing. I understand why.

What has been missing from this argument is geometry. Intuition about objectives and architecture is necessary but not sufficient. We need to open the model and look at what is actually happening inside when a system produces a confident wrong answer. Not at the logits. Not at the attention patterns. At the internal trajectory of the representation itself, layer by layer, from input to output. That is what the work I am presenting here did.
# AI Hallucination
# Artificial Intelligence
# Deep Learning
# Editor's Pick
# LLM
Anurag Beniwal & Demetrios Brinkmann · May 1st, 2026
Anurag Beniwal (Member of Technical Staff at ElevenLabs) breaks down the real-world challenges of building voice agents—from latency, transcription accuracy, and turn-taking to the tradeoffs between cascaded systems and end-to-end speech models. The conversation explores why production systems rely on “constellations” of models, how to design for non-technical users (especially in customer support), and why voice unlocks richer context—but introduces far more complexity than chat. Ultimately, it’s a deep dive into making voice AI practical, reliable, and usable at scale.
# Voice
# AI Agents
# Customer Support AI
# Amazon
A deep dive into the practical limitations of agent protocols like MCP and A2A for low-level tasks, and why the "Linux philosophy" of using a raw command-line interface provides a more lightweight, composable alternative for local development, paving the way for an Agent OS.
# Artificial Intelligence
# Software Engineering
# LLM
# AI Agent
# Software Development
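The "Linux philosophy" alternative described above can be sketched as giving an agent a single shell-execution tool and letting it compose standard Unix programs with pipes, instead of wiring up one protocol endpoint per capability. This is a hypothetical illustration, not code from the talk; the `run_cli` name and its interface are assumptions.

```python
import subprocess

def run_cli(command: str, timeout: int = 30) -> str:
    """One generic tool: execute a shell command and return its output.

    The agent composes existing command-line programs (sort, grep, head, ...)
    rather than calling bespoke MCP/A2A tool endpoints for each task.
    Hypothetical sketch for local development only -- executing
    model-generated shell commands needs sandboxing in practice.
    """
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout if result.returncode == 0 else result.stderr

# Composability in the Unix spirit: small programs chained with pipes.
print(run_cli("printf 'beta\\nalpha\\n' | sort | head -n 1"))
```

The appeal the summary points at is that the tool surface stays constant (one command runner) while capabilities grow with whatever is installed on the machine.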
Jesse Vincent & Demetrios Brinkmann · Apr 24th, 2026
Jesse Vincent breaks down how modern “agentic” software development is shifting from writing code to managing intelligent systems. He shares how his Superpowers toolkit uses structured workflows, skills, and subagents to turn vague ideas into executable plans—emphasizing that clarity of intent matters more than coding itself. The conversation explores how AI agents can be guided using psychology, why separating roles (planner, implementer, reviewer) leads to better outcomes, and how iteration—not perfection—builds powerful workflows. Ultimately, the future of software isn’t code—it’s specs, judgment, and orchestrating agents to do the work.
# Superpowers
# Claude Code
# Developer Tools
Maggie Konstanty & Demetrios Brinkmann · Apr 21st, 2026
Most teams treat evals like a last-minute checkbox—ship first, panic later—but that’s exactly backwards. The real edge comes from treating evals as a continuous, evolving system from day one, not a static test suite. Because here’s the uncomfortable truth: LLMs don’t fail cleanly or consistently, and neither do your users. If you’re not constantly adapting how you evaluate, you’re basically flying blind—just with more features to hide it.
# AI Evals
# LLM Evaluation
# AI Product Management
The 5xP Framework is a practical strategy that uses five targeted Markdown files (Product, Platform, Process, Profile, and Principle) to seamlessly align AI coding assistants with your project's architecture and business goals. By defining strict context boundaries, this framework drastically reduces prompt bloat and prevents AI hallucinations, moving developers away from unstructured "vibe coding" and closer to reliable, spec-driven development.
# Artificial Intelligence
# Software Engineering
# Productivity
# AI Agent
# Coding
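The 5xP setup above amounts to five small, scoped context files checked into the repo. A minimal sketch of what that layout could look like, assuming the files live under a `docs/5xp/` directory (the path and file contents are illustrative; the article only prescribes the five file names):

```shell
# Create one Markdown file per "P"; each holds only its own concern,
# which is how the framework keeps prompt context bounded.
mkdir -p docs/5xp
for p in product platform process profile principle; do
  printf '# %s\n' "$p" > "docs/5xp/$p.md"
done
ls docs/5xp
```

A coding assistant is then pointed at only the files relevant to the task at hand, rather than at the whole repository.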
Zach Lloyd & Demetrios Brinkmann · Apr 17th, 2026
# AI Agents
# Cloud Development
# Warp Terminal