MLOps Community
MLOps Community Podcast
AI's Next Frontier
Aditya Naganath & Demetrios Brinkmann · Dec 10th, 2024
LLMs have ushered in an unmistakable supercycle in the world of technology. The low-hanging use cases have largely been picked off. The next frontier is AI coworkers that sit alongside knowledge workers and do the work with them. At the infrastructure level, one of the most important primitives ever invented, the data center, is being fundamentally rethought in this new wave.
# LLMs
# AI
# Kleiner Perkins
Vincent Moens & Demetrios Brinkmann · Dec 3rd, 2024
PyTorch is widely adopted across the machine learning community for its flexibility and ease of use in applications such as computer vision and natural language processing. However, supporting the reinforcement learning, decision-making, and control communities is equally crucial, as these fields drive innovation in areas like robotics, autonomous systems, and game-playing. This podcast explores the intersection of PyTorch and these fields, covering practical tips and tricks for working with PyTorch, an in-depth look at TorchRL, and discussions on debugging techniques, optimization strategies, and testing frameworks. Listeners will learn how to use PyTorch effectively for control systems and decision-making applications; a minimal TorchRL sketch follows this entry.
# PyTorch
# Control Systems and Decision Making
# Meta
55:26
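For a flavor of the TorchRL abstractions the episode covers, here is a minimal sketch, assuming torchrl and gymnasium are installed; the Pendulum-v1 environment is just an illustrative choice, not one taken from the episode.

```python
# Minimal TorchRL rollout; assumes torchrl and gymnasium are installed.
from torchrl.envs import GymEnv
from torchrl.envs.utils import check_env_specs

env = GymEnv("Pendulum-v1")   # wrap a Gym environment for TorchRL
check_env_specs(env)          # sanity-check observation/action specs

# Collect a short random rollout; the result is a TensorDict of transitions.
rollout = env.rollout(max_steps=10)
print(rollout["observation"].shape, rollout["next", "reward"].shape)
```

The TensorDict return type is what makes TorchRL pipelines composable: the same container flows through collectors, replay buffers, and loss modules.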
Matt Van Itallie & Demetrios Brinkmann · Nov 29th, 2024
Matt Van Itallie, founder and CEO of Sema, discusses how comprehensive codebase evaluations play a crucial role in MLOps and technical due diligence. He highlights the impact of Generative AI on code transparency and explains the Generative AI Bill of Materials (GBOM), which helps identify and manage risks in AI-generated code. This talk offers practical insights for technical and non-technical audiences, showing how proper diligence can enhance value and mitigate risks in machine learning operations.
# Due Diligence
# Transparency
# Sema
57:02
Michael Gschwind & Demetrios Brinkmann · Nov 26th, 2024
Explore PyTorch's role in boosting model performance, on-device AI processing, and collaborations with tech giants like Arm and Apple. Michael shares his journey from gaming-console accelerators to AI, emphasizing the power of community and innovation in driving advancements.
# PyTorch
# Torch Chat
# Meta Platforms
57:44
Luke Marsden & Demetrios Brinkmann · Nov 20th, 2024
In this podcast episode, Luke Marsden explores practical approaches to building Generative AI applications using open-source models and modern tools. Through real-world examples, Luke breaks down the key components of GenAI development, from model selection to knowledge and API integrations, while highlighting the data privacy advantages of open-source solutions.
# AI Specs
# Accessible AI
# HelixML
54:31
Petar Tsankov & Demetrios Brinkmann · Nov 1st, 2024
Dive into AI risk and compliance. Petar Tsankov, a leader in AI safety, talks about turning complex regulations into clear technical requirements and the importance of benchmarks in AI compliance, especially with the EU AI Act. We explore his work with big AI players and the EU on safer, compliant models, covering topics from multimodal AI to managing AI risks. He also shares insights on COMPL-AI, an open-source tool for checking AI models against EU standards, making compliance simpler for AI developers. A must-listen for those tackling AI regulation and safety.
# EU AI Act
# AI regulation and safety
# LatticeFlow
58:01
Limited memory capacity hinders the performance and potential of research and production environments that use Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG). This discussion explores how industry-standard CXL memory can be configured as a secondary, composable memory tier to ease this constraint. We highlight recent work integrating this new class of memory into LLM/RAG/vector-database frameworks and workflows. Disaggregated shared memory is envisioned to offer high-performance, low-latency caches for model/pipeline checkpoints, KV caches during distributed inference, LoRA adapters, and in-process data for heterogeneous CPU/GPU workflows. We expect to showcase these use cases in the coming months. A toy sketch of the tiering idea follows this entry.
# Memory
# Checkpointing
# MemVerge
55:19
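The tiering idea itself is easy to picture in plain Python: a small, fast first tier that spills to a larger, slower one, the way DRAM could spill to a CXL-backed pool. Everything below is a toy sketch with hypothetical names; real CXL integration happens at the OS and allocator level, not in application code.

```python
# Illustrative two-tier cache; class and tier names are hypothetical.
# In a real deployment the slow tier would be CXL-attached memory exposed
# by the OS (e.g., as a far NUMA node), not a Python dict.
from collections import OrderedDict

class TieredKVCache:
    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()   # stands in for local DRAM
        self.slow = {}              # stands in for the CXL-backed tier
        self.fast_capacity = fast_capacity

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)
        if len(self.fast) > self.fast_capacity:
            # Evict the least-recently-used entry to the larger, slower tier.
            old_key, old_val = self.fast.popitem(last=False)
            self.slow[old_key] = old_val

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)
            return self.fast[key]
        if key in self.slow:
            # Promote back to the fast tier on access.
            value = self.slow.pop(key)
            self.put(key, value)
            return value
        return None

cache = TieredKVCache(fast_capacity=2)
for i in range(4):
    cache.put(f"layer{i}", f"kv{i}")
print(cache.get("layer0"))  # served from the slow tier, then promoted
```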
Gideon Mendels & Demetrios Brinkmann · Oct 18th, 2024
When building LLM applications, developers need to take a hybrid approach, drawing on both ML and software engineering best practices. They need to define eval metrics and track their experiments end to end to see what is and is not working. They also need to define comprehensive unit tests for their particular use case so they can confidently check whether their LLM app is ready to be deployed. A sketch of such tests follows this entry.
# LLMs
# Engineering best practices
# Comet ML
1:01:43
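A minimal sketch of the "unit tests for your use case" idea; generate_summary and the pass criteria below are hypothetical stand-ins, not Comet APIs.

```python
# Hypothetical deterministic checks around an LLM call; the function body
# is a placeholder for a real model call (API or local model).
def generate_summary(text: str) -> str:
    return "Revenue grew 12% year over year, driven by cloud."

def test_summary_is_short():
    summary = generate_summary("...long earnings report...")
    assert len(summary.split()) <= 50, "summary exceeds length budget"

def test_summary_keeps_key_fact():
    summary = generate_summary("...long earnings report...")
    assert "12%" in summary, "key figure missing from summary"

if __name__ == "__main__":
    test_summary_is_short()
    test_summary_keeps_key_fact()
    print("all LLM app checks passed")
```

The point is that even though the model is stochastic, the checks themselves stay deterministic and can gate deployment in CI.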
In this MLOps Community podcast, Demetrios chats with Raj Rikhy, Principal Product Manager at Microsoft, about deploying AI agents in production. They discuss starting with simple tools, setting clear success criteria, and deploying agents in controlled environments for better scaling. Raj highlights real-time uses like fraud detection and optimizing inference costs with LLMs while stressing human oversight during early deployment to manage LLM randomness. The episode offers practical advice on deploying AI agents thoughtfully and efficiently, avoiding over-engineering and integrating AI into everyday applications.
# AI agents in production
# LLMs
# AI
49:13
Jelmer Borst, Daniela Solis & Demetrios Brinkmann · Oct 8th, 2024
Like many companies, Picnic started out with a small, central data science team. As the team grows larger and focuses on more complex models, questions arise about skillsets and organizational setup:
- Use an ML platform, or build one ourselves?
- A central team vs. embedded teams?
- Hire data scientists vs. ML engineers vs. MLOps engineers?
- How to foster a team culture of end-to-end ownership?
- How to balance short-term and long-term impact?
# Recruitment
# Growth
# Picnic
57:50
Francisco Ingham & Demetrios Brinkmann · Oct 4th, 2024
Being LLM-native is becoming a key differentiator among companies in vastly different verticals. Everyone wants to use LLMs, and everyone wants to be on top of the current tech, but what does it really mean to be LLM-native? LLM-native involves two ends of a spectrum. On one end is the product or service the company offers, which surely presents many automation opportunities; LLMs can be applied strategically to scale at a lower cost and offer a better experience for users. But being LLM-native involves not only the company's customers; it also involves every stakeholder in the company's operations. How can employees integrate LLMs into their daily workflows? How can we as developers leverage the advancements in the field, not only as builders but as adopters? We tackle these and other key questions for anyone looking to capitalize on the LLM wave, prioritizing real results over hype.
# LLM-native
# RAG
# Pampa Labs
56:14