MLOps Community
MLOps Reading Group February: Advancing Open-source World Models
MEETING

# AI Agent
# Vibe Coding
# Control

Paper: Advancing Open-source World Models

We present LingBot-World, an open-sourced world simulator stemming from video generation. Positioned as a top-tier world model, LingBot-World offers the following features. (1) It maintains high fidelity and robust dynamics in a broad spectrum of environments, including realism, scientific contexts, cartoon styles, and beyond. (2) It enables a minute-level horizon while preserving contextual consistency over time, which is also known as "long-term memory". (3) It supports real-time interactivity, achieving a latency of under 1 second when producing 16 frames per second. We provide public access to the code and model in an effort to narrow the divide between open-source and closed-source technologies. We believe our release will empower the community with practical applications across areas like content creation, gaming, and robot learning.
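To make the interactivity claim concrete, here is a minimal sketch (not the authors' API) of what a real-time rollout against a video world model like LingBot-World might look like. The WorldModel class, its generate_chunk method, and the 16-frame chunk size are illustrative assumptions based only on the abstract's claims (16 fps output, sub-second latency, and a persistent history acting as long-term memory).

import time

class WorldModel:
    """Hypothetical interface: generates short video chunks conditioned on
    an action and the full interaction history (the 'long-term memory')."""
    def generate_chunk(self, history, action, num_frames=16):
        # Placeholder: a real model would return num_frames rendered frames.
        return [f"frame_{len(history) + i}" for i in range(num_frames)]

def interactive_rollout(model, actions, fps=16):
    history = []                      # persistent context for temporal consistency
    for action in actions:
        start = time.perf_counter()
        frames = model.generate_chunk(history, action, num_frames=fps)
        latency = time.perf_counter() - start
        history.extend(frames)        # memory accumulates over a minute-level horizon
        # The abstract claims under 1 s to produce 16 frames, i.e. real time at 16 fps.
        print(f"action={action!r} frames={len(frames)} latency={latency:.3f}s")

if __name__ == "__main__":
    interactive_rollout(WorldModel(), actions=["move_forward", "turn_left"])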

The discussion will be led by a panel of experts who will break down the paper’s key findings, challenge assumptions, and connect the insights to the modern developer experience and production-grade AI systems.

Date: Thursday, February 26

Time: 11:00 AM – 12:00 PM (Eastern Time)


Please note that the session will be held on Gradual, the MLOps Community virtual event platform, not on Zoom. Register here on Gradual to receive the invite.


Speakers

Adam Becker
IRL @ MLOps Community
Valdimar Eggertsson
AI Development Team Lead @ Snjallgögn (Smart Data inc.)
Arthur Coleman
CEO @ Online Matters

Agenda

From: 4:00 PM GMT
To: 5:00 PM GMT
Tags:
Reading Group
Beyond Entangled Planning: Task-Decoupled Planning for Long-Horizon Agents

Recent advances in large language models (LLMs) have enabled agents to autonomously execute complex, long-horizon tasks, yet planning remains a primary bottleneck for reliable task execution. Existing methods typically fall into two paradigms: step-wise planning, which is reactive but often short-sighted; and one-shot planning, which generates a complete plan upfront yet is brittle to execution errors. Crucially, both paradigms suffer from entangled contexts, where the agent must reason over a monolithic history spanning multiple sub-tasks. This entanglement increases cognitive load and lets local errors propagate across otherwise independent decisions, making recovery computationally expensive. To address this, we propose Task-Decoupled Planning (TDP), a training-free framework that replaces entangled reasoning with task decoupling. TDP decomposes tasks into a directed acyclic graph (DAG) of sub-goals via a Supervisor. Using a Planner and Executor with scoped contexts, TDP confines reasoning and replanning to the active sub-task. This isolation prevents error propagation and corrects deviations locally without disrupting the workflow. Results on TravelPlanner, ScienceWorld, and HotpotQA show that TDP outperforms strong baselines while reducing token consumption by up to 82%, demonstrating that sub-task decoupling improves both robustness and efficiency for long-horizon agents.
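As a rough illustration of the idea, here is a minimal sketch, not the paper's implementation: it shows a task decomposed into a DAG of sub-goals, with planning and execution context scoped to the active sub-task. All names (SubGoal, supervisor_decompose, plan, execute) and the fixed travel-planning decomposition are assumptions drawn only from the abstract's description of the Supervisor, Planner, and Executor roles.

from dataclasses import dataclass, field

@dataclass
class SubGoal:
    name: str
    deps: list = field(default_factory=list)   # DAG edges: prerequisite sub-goals

def supervisor_decompose(task):
    # Illustrative fixed decomposition; the paper uses an LLM-based Supervisor.
    book = SubGoal("book_flights")
    hotel = SubGoal("reserve_hotel")
    itinerary = SubGoal("draft_itinerary", deps=[book, hotel])
    return [book, hotel, itinerary]

def plan(subgoal, scoped_context):
    # The Planner sees only the sub-goal and its scoped context, not the full history.
    return [f"step for {subgoal.name} given {len(scoped_context)} context items"]

def execute(step):
    return f"result of ({step})"

def run(task):
    dag = supervisor_decompose(task)
    results = {}
    for sg in dag:                                    # assumes topological order
        scoped = [results[d.name] for d in sg.deps]   # only dependency outputs are visible
        for step in plan(sg, scoped):
            results[sg.name] = execute(step)
        # A failure here would trigger replanning of this sub-goal only,
        # leaving completed, independent sub-goals untouched.
    return results

print(run("plan a 3-day trip"))

The design point the sketch tries to capture is that each sub-goal's context contains only its dependencies' results, so an error in one branch cannot contaminate reasoning in another.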

The discussion will be led by a panel of experts who will break down the paper’s key findings, challenge assumptions, and connect the insights to the modern developer experience and production-grade AI systems.


Attendees

Bessie (member)
Arlene (member)
Cody (member)
Colleen (member)
Kathryn (member)
Starting in 8 days
February 26, 4:00 PM GMT
Online
Code of Conduct