MLOps Community Podcast
# GenAI
# EU AI Act
# AI

GenAI in Production - Challenges and Trends

The goal of this talk is to provide insights into challenges for Generative AI in production, as well as trends aiming to solve some of these challenges. The challenges and trends Verena sees are:
- Model size, and the move towards mixture-of-experts architectures
- Context window: breakthroughs in context lengths
- From unimodality to multimodality; next step, large action models?
- Regulation in the form of the EU AI Act

Verena uses the differences between Gemini 1.0 and Gemini 1.5 to exemplify some of these trends.
Verena Weber
Demetrios Brinkmann
Verena Weber & Demetrios Brinkmann · Apr 17th, 2024
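The mixture-of-experts trend from the talk above can be illustrated with a minimal routing sketch. This is a generic top-k gating example under our own assumptions (random toy weights, NumPy), not Gemini's actual architecture:

```python
# Minimal mixture-of-experts sketch: a gate scores every expert,
# but only the top-k experts run for a given input.
import numpy as np

rng = np.random.default_rng(0)

n_experts, d_model, top_k = 4, 8, 2
gate_w = rng.standard_normal((d_model, n_experts))               # gating weights
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    """Route input x through the top-k scoring experts only."""
    scores = x @ gate_w                                          # one score per expert
    top = np.argsort(scores)[-top_k:]                            # indices of best experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()    # softmax over top-k
    # Only k of n experts compute, which is why MoE can grow parameter
    # count without growing per-token compute proportionally.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_model))
print(y.shape)  # (8,)
```

The sparsity of the gate is the point: adding experts adds capacity, while each token still pays for only `top_k` expert forward passes.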
Amritha Arun Babu
Abhik Choudhury
Demetrios Brinkmann
Amritha Arun Babu, Abhik Choudhury & Demetrios Brinkmann · Mar 29th, 2024

MLOps - Design Thinking to Build ML Infra for ML and LLM Use Cases

As machine learning (ML) and large language models (LLMs) continue permeating industries, robust ML infrastructure and operations (ML Ops) are crucial to deploying these AI systems successfully. This podcast discusses best practices for building reusable, scalable, and governable ML Ops architectures tailored to ML and LLM use cases.
# MLOps
# ML Infra
# LLM Use Cases
# Klaviyo
Bandish Shah
Davis Blalock
Demetrios Brinkmann
Bandish Shah, Davis Blalock & Demetrios Brinkmann · Mar 22nd, 2024

The Art and Science of Training LLMs

What's hard about language models at scale? Turns out...everything. MosaicML's Davis and Bandish share war stories and lessons learned from pushing the limits of LLM training and helping dozens of customers get LLMs into production. They cover what can go wrong at every level of the stack, how to make sure you're building the right solution, and some contrarian takes on the future of efficient models.
# LLMs
# MosaicML
# Databricks
Petar Tsankov
Demetrios Brinkmann
Petar Tsankov & Demetrios Brinkmann · Mar 12th, 2024

A Decade of AI Safety and Trust

Embark on a decade-long journey of AI safety and trust. This conversation delves into key areas such as the transition towards more adversarial environments, the challenges in model robustness and data relevance, and the necessity of third-party assessments in the face of companies' reluctance to share data. It further covers current shifts in AI trends, emphasizing problems associated with biases, errors, and lack of transparency, particularly in generative AI and third-party models. This episode explores the origins and mission of LatticeFlow AI to provide trustworthy solutions for new AI applications, encompassing their participation in safety competitions and their focus on proving the properties of neural networks. The conversation concludes by touching on the importance of data quality, robustness checks, the application of emerging standards like ISO 5259 and ISO/IEC 42001, and a peek into the future of AI regulation and certifications. Safe to say, it's a must-listen for anyone passionate about trust and safety in AI.
# GenAI
# Trust
# Safety
# LatticeFlow
Anu Arora
Anass Bensrhir
Demetrios Brinkmann
Anu Arora, Anass Bensrhir & Demetrios Brinkmann · Mar 5th, 2024

Managing Data for Effective GenAI Application

Generative AI is poised to bring impact across all industries and business functions. While many companies pilot GenAI, only a few have deployed GenAI use cases; for example, retailers are producing videos to answer common customer questions using ChatGPT. A majority of organizations face challenges in industrializing and scaling, with data being one of the biggest inhibitors. Organizations need to strengthen their data foundations: among leading organizations, 72% cited managing data as one of the top challenges preventing them from scaling impact. Furthermore, leaders noted that more than 31% of their staff's time is spent on non-value-added tasks due to poor data quality and availability issues.
# Generative AI
# Data Foundations
# QuantumBlack
Alex Volkov
Demetrios Brinkmann
Alex Volkov & Demetrios Brinkmann · Mar 1st, 2024

Becoming an AI Evangelist

Follow Alex's journey into the world of AI, from being interested in running his first AI models to founding an AI startup, running a successful weekly AI news podcast & newsletter, and landing a job with Weights and Biases.
# AI Evangelist
# AI Startup
# Weights and Biases
Daniel Svonava
Demetrios Brinkmann
Daniel Svonava & Demetrios Brinkmann · Feb 24th, 2024

Information Retrieval & Relevance: Vector Embeddings for Semantic Search

In today's information-rich world, the ability to retrieve relevant information effectively is essential. This lecture explores the transformative power of vector embeddings, revolutionizing information retrieval by capturing semantic meaning and context. We'll delve into:
- The fundamental concepts of vector embeddings and their role in semantic search
- Techniques for creating meaningful vector representations of text and data
- Algorithmic approaches for efficient vector similarity search and retrieval
- Practical strategies for applying vector embeddings in information retrieval systems
# Semantic Search
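The retrieval pipeline described in the lecture above can be sketched in a few lines. The hashed bag-of-words "embedding" below is a toy stand-in for a real embedding model, and the documents and function names are illustrative assumptions; only the cosine-similarity ranking step reflects the actual technique:

```python
# Toy semantic-search sketch: embed documents, then rank by cosine similarity.
import math
from collections import Counter

def embed(text, dim=64):
    """Toy embedding: hash each token count into a fixed-size vector.
    A real system would use a trained embedding model instead."""
    vec = [0.0] * dim
    for tok, cnt in Counter(text.lower().split()).items():
        vec[hash(tok) % dim] += cnt
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["how to deploy ML models", "vector search for retrieval", "cooking pasta at home"]
index = [(d, embed(d)) for d in docs]          # precompute the vector index

def search(query, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)[:k]

top = [d for d, _ in search("vector retrieval search")]
print(top[0])
```

Production systems replace the brute-force `sorted` scan with approximate nearest-neighbor indexes so similarity search stays fast at millions of vectors.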
Anish Shah
Morgan McGuire
Demetrios Brinkmann
Anish Shah, Morgan McGuire & Demetrios Brinkmann · Feb 21st, 2024

Evaluating and Integrating ML Models

Anish Shah and Morgan McGuire share insights on their journey into ML, the exciting work they're doing at Weights and Biases, and their thoughts on MLOps. They discuss using large language models (LLMs) for translation, pre-written code, and internal support. They discuss the challenges of integrating LLMs into products, the need for real use cases, and maintaining credibility. They also touch on evaluating ML models collaboratively and the importance of continual improvement. They emphasize understanding retrieval and balancing novelty with precision. This episode provides a deep dive into Weights and Biases' work with LLMs and the future of ML evaluation in MLOps. It's a must-listen for anyone interested in LLMs and ML evaluation.
# ML Models
# Evaluation
# Integration
# Weights and Biases
Alexandra Diem
Demetrios Brinkmann
Alexandra Diem & Demetrios Brinkmann · Feb 16th, 2024

Data Governance and AI

This recent session featuring the incredibly talented Alexandra Diem delves into the challenges of generative AI in sensitive data environments, the emergence of specialized chatbots, and data governance. Balancing high-tech projects with those offering significant business value, using agile methods, is also discussed. Alexandra's journey from academia to being a consultant in Norway is truly inspiring. The discussion explores the function of enabling and R&D in tech roles, the shift towards self-serve solutions, and the integration of AI into existing workflows. Stimulating conversations about future-oriented technologies married with sound data science and industry practices make this session a must-listen for anyone interested in machine learning operations!
# Data governance
# AI
# Gjensidige
Aayush Mudgal
Demetrios Brinkmann
Aayush Mudgal & Demetrios Brinkmann · Feb 13th, 2024

Ads Ranking Evolution at Pinterest

Listen to the lessons from the journey of scaling ads ranking at Pinterest through innovative machine learning algorithms and ML platform innovation. Learn how the team transitioned from traditional logistic regressions to deep learning-based transformer models, incorporating sequential signals, multi-task learning, and transfer learning. Aayush shares the hurdles Pinterest overcame, the insights gained along the way, and why ML platform evolution is crucial for algorithmic advancement.
# Ads Ranking
# Machine Learning
# Pinterest
Aparna Dhinakaran
Demetrios Brinkmann
Aparna Dhinakaran & Demetrios Brinkmann · Feb 9th, 2024

LLM Evaluation with Arize AI's Aparna Dhinakaran // MLOps Podcast #210

Dive into the complexities of Language Model (LLM) evaluation, the role of the Phoenix evaluations library, and the importance of highly customized evaluations in software application. The discourse delves into the nuances of fine-tuning in AI, the debate between the use of open-source versus private models, and the urgency of getting models into production for early identification of bottlenecks. Then examine the relevance of retrieved information, output legitimacy, and the operational advantages of Phoenix in supporting LLM evaluations.
# LLM Evaluation
# MLOps
# Arize AI