MLOps Community

The MLOps Community is where machine learning practitioners come together to define and implement MLOps. Our global community is the default hub for MLOps practitioners to meet other MLOps industry professionals, share their real-world experience and challenges, learn skills and best practices, and collaborate on projects and employment opportunities. We are the world's largest community dedicated to addressing the unique technical and operational challenges of production machine learning systems.

Events

4:00 PM - 5:00 PM, May 15 GMT
Coding Agents Lunch & Learn – Session 11
5:30 PM - 6:30 PM, May 27 GMT
Architecting Modern AI Systems: Platforms, Agents, and Integration

Content

Video
Rafael (Head of Innovation, iFood) and Daniel (Data and AI Manager, iFood) pull back the curtain on ILO-Agent — iFood's conversational AI ordering system built for 200 million users across Latin America. Recorded live at AI House Amsterdam, this conversation goes deep on the engineering and product decisions behind building recommendation systems, agentic AI, and why the speed of your AI's response might actually be destroying user trust.
May 12th, 2026 | Views 8
Video
Before MCP was a standard and before LangChain was widely adopted, this episode's guest and his team had already shipped their own orchestration layer and tool protocol in production. This conversation is a rare look at what it takes to build an agentic system that actually books trips, runs on WhatsApp, and keeps adding capabilities without falling over.
May 11th, 2026 | Views 6
Blog
Subtitle: It’s a feature of the architecture

Summary: Hallucination in LLMs is not a data quality problem. It is not a training problem. It is not a problem you can solve with more [RLHF](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback), better filtering, or a larger context window. **It is a structural property of what these systems are optimized to do.**

I have held this position for months, and the reaction is predictable: researchers working on retrieval augmentation, fine-tuning pipelines, and alignment techniques would prefer a more optimistic framing. I understand why.

What has been missing from this argument is geometry. Intuition about objectives and architecture is necessary but not sufficient. We need to open the model and look at what is actually happening inside when a system produces a confident wrong answer. Not at the logits. Not at the attention patterns. At the internal trajectory of the representation itself, layer by layer, from input to output. That is what the work I am presenting here did.
May 5th, 2026 | Views 66