
Subham Kundu · Mar 24th, 2026
AI coding platforms work best when you treat the AI as a junior engineer, not a replacement for your thinking. Break problems into small tasks, plan in Markdown before coding, and keep your context window lean - accuracy drops sharply past 50% capacity. Never debug in the same chat where you built the feature; the AI is biased by its own logic. For existing codebases, reference well-written code as examples. For new projects, define strict guardrails early - without them, the AI makes hundreds of arbitrary decisions that compound into a mess. The blog dives deep into all the patterns that work, the anti-patterns that silently kill your codebase, and strategies for both brownfield and greenfield projects - each illustrated with detailed diagrams. You stay the architect; the AI executes.
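The 50%-capacity rule of thumb can be checked mechanically. A minimal sketch, assuming a crude whitespace token estimate (real counts depend on the model's tokenizer) and an illustrative `context_window` size:

```python
def context_usage(prompt: str, context_window: int = 128_000) -> float:
    """Return the fraction of the context window a prompt occupies.

    Uses a crude whitespace token estimate; a real implementation
    would use the model's own tokenizer.
    """
    tokens = len(prompt.split())
    return tokens / context_window


def should_start_fresh(prompt: str, context_window: int = 128_000,
                       threshold: float = 0.5) -> bool:
    """Flag when a conversation passes the 50% mark, per the rule
    of thumb that accuracy drops sharply beyond it."""
    return context_usage(prompt, context_window) >= threshold
```

When the check fires, the advice above applies: start a fresh chat with a distilled Markdown plan rather than pushing the bloated context further.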
# AI Coding
# Software Engineering
# AI Assistants



Donné Stevenson, Pedro Chaves & Demetrios Brinkmann · Mar 20th, 2026
Marketplaces are about to get weird.
With Pedro Chaves and Donné Stevenson: agents picking your house, negotiating deals, even talking to other agents for you.
Less browsing. Less choice. More automation.
Convenience… or giving up control?
# AI Agents
# Marketplace
# Prosus
# OLX


Johann Schleier-Smith & Demetrios Brinkmann · Mar 17th, 2026
A new paradigm is emerging for building applications that process large volumes of data, run for long periods of time, and interact with their environment. It’s called Durable Execution and is replacing traditional data pipelines with a more flexible approach.
Durable Execution makes regular code reliable and scalable.
In the past, reliability and scalability have come from restricted programming models, like SQL or MapReduce, but with Durable Execution this is no longer the case. We can now see data pipelines that include document processing workflows, deep research with LLMs, and other complex, LLM-driven agentic patterns expressed at scale with regular Python programs.
In this session, we describe Durable Execution and explain how it fits in with agents and LLMs to enable a new class of machine learning applications.
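The core idea can be illustrated with a toy checkpointing decorator — a simplification, not the API of any particular durable-execution engine: completed steps persist their results, so a crashed and restarted program resumes where it left off instead of recomputing.

```python
import json
import os
from functools import wraps

CHECKPOINT_FILE = "checkpoints.json"  # illustrative path


def _load() -> dict:
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    return {}


def _save(state: dict) -> None:
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(state, f)


def durable_step(name: str):
    """Persist a step's result; on re-run, return the saved result
    instead of executing the step again."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            state = _load()
            if name in state:          # step already completed
                return state[name]
            result = fn(*args, **kwargs)
            state[name] = result       # checkpoint before moving on
            _save(state)
            return result
        return wrapper
    return decorator


@durable_step("extract")
def extract() -> list:
    return ["doc-1", "doc-2"]


@durable_step("summarize")
def summarize(docs: list) -> str:
    # in a real pipeline this might call an LLM
    return f"summarized {len(docs)} documents"
```

Re-running the program replays the workflow, but checkpointed steps return instantly. A production engine layers retries, distributed workers, and event histories on top of this idea — which is what lets "regular code" become reliable and scalable.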
# AI Agents
# AI Engineer
# AI agents in production
# AI agent usecase
# System Design

Médéric Hurier · Mar 17th, 2026
Chaigent combines Chainlit and Vertex AI to deliver a code-first, serverless AI agent platform that avoids costly per-seat licensing fees. It empowers developers to build highly customizable, enterprise-grade agents using a scalable pay-as-you-go architecture.
# Artificial Intelligence
# AI Agent
# Generative AI Tools
# Google Cloud Platform
# Data Science

Médéric Hurier · Mar 10th, 2026
mAIdAI is a lightweight personal AI assistant built with Google Chat, Cloud Run, and Vertex AI, designed to automate repetitive micro-tasks. By grounding the model with a local markdown context file, it provides highly personalized workflow assistance directly within your chat environment.
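The grounding step described here amounts to prepending the local markdown file to every request. A minimal sketch — the file name and prompt format are illustrative, not mAIdAI's actual implementation:

```python
from pathlib import Path


def build_grounded_prompt(user_message: str,
                          context_path: str = "context.md") -> str:
    """Prepend a local markdown context file to the user's message,
    so the model answers with personal context in scope."""
    context = Path(context_path).read_text(encoding="utf-8")
    return (
        "You are a personal assistant. Use the context below.\n\n"
        f"--- {context_path} ---\n{context}\n--- end context ---\n\n"
        f"User: {user_message}"
    )
```

The resulting string is what gets sent to the model; keeping the context in a plain markdown file means personalization is just an edit away.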
# Generative AI Tools
# Artificial Intelligence
# AI Agent
# Programming
# Automation

Médéric Hurier · Mar 3rd, 2026
This article explores how to use "Agent Skills"—simple Markdown-based context modules—to ensure AI agents strictly adhere to your team's MLOps practices and tooling preferences. By providing explicit organizational rules upfront, developers can eliminate generic boilerplate and align AI-generated code with production-grade standards.
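Mechanically, the pattern reduces to loading the relevant skill files and injecting them into the agent's system prompt before any code is generated. A minimal sketch (the directory layout and skill names are hypothetical):

```python
from pathlib import Path


def load_skills(skills_dir: str, names: list[str]) -> str:
    """Concatenate the requested Markdown skill modules into one
    context block the agent is told to follow."""
    parts = []
    for name in names:
        path = Path(skills_dir) / f"{name}.md"
        parts.append(f"## Skill: {name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)


def system_prompt(skills_dir: str, names: list[str]) -> str:
    """Build a system prompt that puts organizational rules first."""
    return (
        "Follow these organizational rules exactly; they override "
        "any generic defaults.\n\n" + load_skills(skills_dir, names)
    )
```

Because the rules are stated upfront rather than corrected after the fact, generated code starts from the team's tooling preferences instead of generic boilerplate.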
# MLOps
# AI Agent
# Software Engineering
# Generative AI Tools
# Coding



Valdimar Eggertsson, Adam Becker & Arthur Coleman · Feb 27th, 2026
We present LingBot-World, an open-source world simulator stemming from video generation. Positioned as a top-tier world model, LingBot-World offers the following features. (1) It maintains high fidelity and robust dynamics in a broad spectrum of environments, including realism, scientific contexts, cartoon styles, and beyond. (2) It enables a minute-level horizon while preserving contextual consistency over time, which is also known as "long-term memory". (3) It supports real-time interactivity, achieving a latency of under 1 second when producing 16 frames per second. We provide public access to the code and model in an effort to narrow the divide between open-source and closed-source technologies. We believe our release will empower the community with practical applications across areas like content creation, gaming, and robot learning.
# Coding Agents
# Open Source World Models
# LingBot World


Chris Fregly & Demetrios Brinkmann · Feb 24th, 2026
In today’s era of massive generative models, it's important to understand the full scope of AI systems' performance engineering. This talk discusses the new O'Reilly book, AI Systems Performance Engineering, and the accompanying GitHub repo (https://github.com/cfregly/ai-performance-engineering).
This talk provides engineers, researchers, and developers with a set of actionable optimization strategies. You'll learn techniques to co-design and co-optimize hardware, software, and algorithms to build resilient, scalable, and cost-effective AI systems for both training and inference.
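One recurring tool behind such optimization strategies is the throughput micro-benchmark. A minimal, framework-free measurement skeleton — real GPU benchmarking additionally needs warm-up on device, synchronization, and profilers, none of which this sketch attempts:

```python
import time


def benchmark(fn, *args, iters: int = 100, warmup: int = 10) -> dict:
    """Time a callable and report mean latency and throughput.

    Warm-up iterations are discarded so one-time costs (JIT
    compilation, cache fills) don't skew the measurement.
    """
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    elapsed = time.perf_counter() - start
    return {
        "mean_latency_s": elapsed / iters,
        "throughput_per_s": iters / elapsed,
    }
```

The same skeleton applies whether the callable is a Python function, a CUDA kernel launch, or an inference request; what changes is how carefully you control what else runs during the timed loop.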
# NVIDIA GPUs
# CUDA framework
# GitHub repo

Axel Mendoza · Feb 24th, 2026
A hands-on beginner roadmap for learning Kubernetes, designed to walk you through core concepts (like clusters, pods, services, deployments, storage, RBAC, autoscaling, etc.) with simple explanations, CLI examples, and practical exercises. By following it you build real experience and are prepared to use Kubernetes locally or on cloud platforms like GKE or EKS.
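Several of the core concepts listed above (pods, deployments, labels) come together in a single manifest. A minimal illustrative Deployment — the name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # placeholder name
spec:
  replicas: 2                # Kubernetes keeps two pods running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web       # must match the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.27  # placeholder image
          ports:
            - containerPort: 80
```

Apply it with `kubectl apply -f deployment.yaml`, then delete one of the pods: watching the Deployment schedule a replacement is a quick, hands-on demonstration of the self-healing behavior the roadmap covers.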
# DevOps
# Kubernetes
# From Scratch



Ioana Apetrei, Igor Šušić & Adam Becker · Feb 19th, 2026
Experimenting with LLMs is easy. Running them reliably and cost-effectively in production is where things break.
Most AI teams never make it past demos and proofs of concept. A smaller group is pushing real workloads to production—and running into very real challenges around infrastructure efficiency, runaway cloud costs, and reliability at scale.
This session is for engineers and platform teams moving beyond experimentation and building AI systems that actually hold up in production.
# AI Applications
# GPU Orchestration
# Kubernetes Clusters
# CAST AI
