MLOps Community Podcast
# Anti-AI hype
# AI Operations
# Hermit Tech

AI Operations Without Fundamental Engineering Discipline

Nik joins the podcast to discuss an interesting trend: the anti-AI hype. He digs into what many companies miss when non-technical management rushes to roll out machine learning initiatives without bringing on the technical experts who can set them up for success, and into how bridging the gap between management and engineering makes AI work effectively.
Nikhil Suresh & Demetrios Brinkmann · Jul 23rd, 2024
Eric Landry & Demetrios Brinkmann · Jul 19th, 2024
Eric Landry discusses the integration of AI in healthcare, highlighting use cases like patient engagement through chatbots and managing medical data. He addresses benchmarking and limiting hallucinations in LLMs, emphasizing privacy concerns and data localization. Landry maintains a hands-on approach to developing AI solutions and navigating the complexities of healthcare innovation. Despite necessary constraints, he underscores the potential for AI to proactively engage patients and improve health outcomes.
# Healthcare
# Continual.ai
# Zeteo Health
51:06
Aniket Kumar & Demetrios Brinkmann · Jul 16th, 2024
Dive into the world of Large Language Models (LLMs) like GPT-4: why it is crucial to evaluate these models, how we measure their performance, and the common hurdles we face. Drawing from his research, Aniket shares insights on the importance of prompt engineering and model selection. He also discusses real-world applications in healthcare, economics, and education, and highlights future directions for improving LLMs.
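To make the evaluation theme concrete, here is a minimal, hypothetical sketch (the tiny QA set, the prompt templates, and the `ask_llm` stub are illustrative stand-ins, not material from the episode) that scores two prompt templates with exact-match accuracy:

```python
# Minimal sketch of an exact-match evaluation loop for comparing prompts.
# `ask_llm` is a hypothetical stand-in for a real model client.

EVAL_SET = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "How many sides does a hexagon have?", "answer": "6"},
]

TEMPLATES = {
    "terse": "Answer with a single word or number.\nQ: {question}\nA:",
    "stepwise": "Think step by step, then give only the final answer.\nQ: {question}\nA:",
}

def ask_llm(prompt: str) -> str:
    """Placeholder model: swap in a call to your LLM client of choice."""
    if "France" in prompt:
        return "Paris"
    if "hexagon" in prompt:
        return "six"
    return "unknown"

def exact_match_accuracy(template: str) -> float:
    """Fraction of eval items whose trimmed, lowercased reply matches the reference."""
    hits = 0
    for item in EVAL_SET:
        reply = ask_llm(template.format(question=item["question"]))
        if reply.strip().lower() == item["answer"].strip().lower():
            hits += 1
    return hits / len(EVAL_SET)

for name, template in TEMPLATES.items():
    print(f"{name}: {exact_match_accuracy(template):.2f}")
```

Even this toy harness surfaces a common hurdle: the placeholder model answers "six" where the reference says "6", so exact match under-credits it unless replies are normalized further.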
# LLMs
# Evaluation
# MyEvaluationPal
# Ultium Cells
35:41
Sophia Rowland, David Weik & Demetrios Brinkmann · Jul 12th, 2024
Organizations worldwide invest hundreds of billions into AI, but they do not see a return on those investments until they can leverage their analytical assets and models to make better decisions. At SAS, we focus on optimizing every step of the Data and AI lifecycle to get high-performing models into a form and location where they drive analytically driven decisions. Join experts from SAS as they share learnings and best practices from implementing MLOps and LLMOps at organizations across industries and around the globe, using various types of models and deployments, from IoT computer vision problems to composite flows that feature LLMs.
# AI
# Innovation
# SAS
1:01:37
Matar Haller & Demetrios Brinkmann · Jul 9th, 2024
One of the biggest challenges facing online platforms today is detecting harmful content and malicious behavior. Platform abuse poses brand and legal risks, harms the user experience, and often represents a blurred line between online and offline harm. So how can online platforms tackle abuse in a world where bad actors are continuously changing their tactics and developing new ways to avoid detection?
# Harmful Content
# AI
# ActiveFence
51:28
Catherine Nelson & Demetrios Brinkmann · Jul 5th, 2024
Data scientists have a reputation for writing bad code. This quote from Reddit sums up how many people feel: “It's honestly unbelievable and frustrating how many Data Scientists suck at writing good code.” But as data science projects grow, and because the job now often includes deploying ML models, it's increasingly important for data scientists to learn fundamental software engineering principles such as keeping code modular, making it readable by others, and so on. The exploratory nature of data science projects means you can't be sure at the start where a project will end up, but there's still a lot you can do to standardize the code you write.
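As a small, hypothetical illustration of the kind of standardization discussed here (none of the names or numbers come from the episode), this sketch takes a typical exploratory notebook step and refactors it into a named, documented function that can be reused and unit-tested:

```python
import pandas as pd

# Exploratory version, as it often appears in a notebook cell:
#   df = pd.read_csv("sales.csv")
#   df = df[df["amount"] > 0]
#   df["amount_usd"] = df["amount"] * 1.1

def clean_sales(df: pd.DataFrame, fx_rate: float = 1.1) -> pd.DataFrame:
    """Drop non-positive amounts and add a USD-converted column.

    Putting the logic in one named, documented function keeps it modular,
    readable, and easy to cover with a unit test once the model ships.
    """
    out = df[df["amount"] > 0].copy()
    out["amount_usd"] = out["amount"] * fx_rate
    return out

if __name__ == "__main__":
    sample = pd.DataFrame({"amount": [10.0, -5.0, 3.5]})
    print(clean_sales(sample))
```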
# Data Scientist
# Software Engineering Principles
# Coding
52:54
Matthew McClean & Kamran Khan · Jun 4th, 2024
Unlock unparalleled performance and cost savings with AWS Trainium and Inferentia! These powerful AI accelerators offer MLOps community members enhanced availability, compute elasticity, and energy efficiency. Seamlessly integrate with PyTorch, JAX, and Hugging Face, and enjoy robust support from industry leaders like W&B, Anyscale, and Outerbounds. Perfectly compatible with AWS services like Amazon SageMaker, getting started has never been easier. Elevate your AI game with AWS Trainium and Inferentia!
# Accelerators
# AWS Trainium
# AWS Inferentia
# aws.amazon.com
45:23
Benjamin Wilms · May 31st, 2024
How to build reliable systems under unpredictable conditions with Chaos Engineering.
# Chaos Engineering
# MLOps
# Steadybit
46:58
Tom Smoker & Demetrios Brinkmann · May 28th, 2024
RAG is one of the more popular use cases for generative models, but there can be issues with repeatability and accuracy. This is especially applicable when it comes to using many agents within a pipeline, as the uncertainty propagates. For some multi-agent use cases, knowledge graphs can be used to structurally ground the agents and selectively improve the system to make it reliable end to end.
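A minimal sketch of the grounding idea, assuming an invented triple store and a stubbed generation step (nothing here is WhyHow's actual implementation): the agent may only answer from facts retrieved out of the graph, which keeps a multi-agent pipeline from compounding guesses.

```python
# Toy knowledge graph as (subject, predicate, object) triples -- illustrative only.
TRIPLES = [
    ("ModelA", "trained_on", "DatasetX"),
    ("ModelA", "owned_by", "TeamPlatform"),
    ("DatasetX", "contains_pii", "false"),
]

def facts_about(entity: str) -> list[str]:
    """Return graph facts that mention the entity, rendered as plain sentences."""
    return [f"{s} {p} {o}" for s, p, o in TRIPLES if entity in (s, o)]

def grounded_answer(question: str, entities: list[str]) -> str:
    """Answer only from retrieved facts; refuse when the graph has nothing."""
    facts = [fact for e in entities for fact in facts_about(e)]
    if not facts:
        return "No supporting facts in the graph."
    # A real pipeline would pass `facts` plus the question to an LLM here;
    # restricting the prompt to graph facts is what grounds the generation.
    return f"Q: {question}\nSupported by: " + "; ".join(facts)

print(grounded_answer("Who owns ModelA?", ["ModelA"]))
print(grounded_answer("Who owns ModelZ?", ["ModelZ"]))
```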
# Knowledge Graphs
# Generative AI
# RAG
# Whyhow.ai
1:04:41
Dave Nunez, Jakub Czakon & Demetrios Brinkmann · May 24th, 2024
Over the previous decade, the recipe for making excellent software docs mostly converged on a set of core goals: create high-quality, consistent content; use different content types depending on the task; and make the docs easy to find. For AI-focused software and products, the entire developer education playbook needs to be rewritten.
# Software Docs
# AI
# Abstract Group
1:01:39
Cody Peterson & Demetrios Brinkmann · May 21st, 2024
MLOps is fundamentally a discipline of people working together on a system with data and machine learning models. These systems are already built on open standards we may not notice -- Linux, git, scikit-learn, etc. -- but they are increasingly hitting walls with respect to the size and velocity of data. Pandas, for instance, is the tool of choice for many Python data scientists -- but its scalability is a known issue. Many tools assume data fits in memory, yet most organizations have data that will never fit on a laptop. What approaches can we take?

One emerging approach, taken by the Ibis project (created by pandas creator Wes McKinney), is to let existing "big" data systems do the heavy lifting behind a lightweight Python dataframe interface. Alongside other open source standards like Apache Arrow, this allows data systems to communicate with each other and lets users learn a single dataframe API that works across any of them.

Open standards like Apache Arrow, Ibis, and others in the MLOps tech stack enable composable data systems, where components can be swapped out so engineers can use the right tool for the job. This also helps avoid vendor lock-in and keeps costs low.
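As a rough illustration of the "one dataframe API, many engines" idea (the sample data is made up, and this assumes `ibis-framework` with its default DuckDB backend installed), a minimal Ibis sketch might look like:

```python
import ibis

# In-memory sample table; in practice this could be a table in DuckDB,
# BigQuery, Snowflake, etc. -- the expression below would stay the same.
employees = ibis.memtable(
    {"dept": ["ml", "ml", "data"], "salary": [120, 140, 110]}
)

# Build the query once against the lightweight dataframe API.
expr = (
    employees.group_by("dept")
    .aggregate(avg_salary=employees.salary.mean())
    .order_by("dept")
)

print(expr.execute())   # materializes the result as a pandas DataFrame
# expr.to_pyarrow() returns an Apache Arrow table instead, which is how
# composable pieces of the stack can hand data to each other.
```

The point is that the expression is written once and compiled to whichever backend does the heavy lifting.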
# MLOps
# Silos
# Voltrondata.com
# ibis-project.org
46:20