MLOps Community
MLOps Community Mini Summit
# Gen AI
# Molecule Discovery
# Call Center Applications
# QuantumBlack
# mckinsey.com/quantumblack

Innovative Gen AI Applications: Beyond Text // MLOps Mini Summit #5

Generative AI in Molecule Discovery

Molecules are all around us, in our medicines, clothes, and even our food. Finding new molecules is crucial for better treatments, eco-friendly products, and saving the planet. Different industries have been using Machine Learning and AI to discover molecules, but now there's gen AI, which can enable further breakthroughs. During this talk, we explore some use cases where gen AI can make a big difference in finding new molecules.

Optimizing Gen AI in Call Center Applications

There are many great off-the-shelf gen AI models and tools available; however, using them in production often requires additional engineering effort. In this talk, we explore the challenges faced when building a gen AI use case for a call center application, such as maximizing GPU utilization, speeding up the overall pipeline using parallelization and domain knowledge, and moving from POC to production.
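As a rough, hypothetical sketch of the batching and parallelization ideas mentioned above (not the pipeline from the talk), the snippet below overlaps a transcription stage with batched summarization; `transcribe` and `summarize` are placeholder functions standing in for GPU-backed model calls.

```python
# Hypothetical sketch of the batching / parallelization idea: `transcribe` and
# `summarize` are placeholders standing in for GPU-backed model calls.
from concurrent.futures import ThreadPoolExecutor


def transcribe(audio_path: str) -> str:
    # Placeholder for a speech-to-text model call on one recorded call.
    return f"transcript of {audio_path}"


def summarize(batch: list) -> list:
    # Placeholder for a batched LLM call; batching amortizes GPU overhead.
    return [f"summary: {t[:40]}" for t in batch]


def process_calls(audio_paths: list, batch_size: int = 8) -> list:
    # Stage 1: transcribe calls concurrently (largely I/O- and network-bound).
    with ThreadPoolExecutor(max_workers=8) as pool:
        transcripts = list(pool.map(transcribe, audio_paths))
    # Stage 2: summarize transcripts in GPU-friendly batches.
    summaries = []
    for i in range(0, len(transcripts), batch_size):
        summaries.extend(summarize(transcripts[i:i + batch_size]))
    return summaries


if __name__ == "__main__":
    print(process_calls([f"call_{i}.wav" for i in range(4)]))
```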
Diana C. Montañes Mondragon & Nick Schenone · Apr 17th, 2024
David Espejo, Fabio Grätz, Arno Hollosi & 1 more speaker · Dec 6th, 2023

Scaling MLOps for Computer Vision

Flyte: A Platform for the Agile Development of AI Products

The development of traditional apps differs from how AI products are built, primarily due to distinctions in inputs (program vs data) and process structure (sequential vs iterative). However, ML/Data Science teams could benefit from adopting established Software Engineering patterns, bridging the gap that frequently impedes the transition of ML applications to production. In this talk, David will introduce Flyte, an open-source platform that empowers ML/DS teams to collaborate effectively and expedite the delivery of production-grade AI applications (a minimal Flyte sketch follows these abstracts).

Flyte at Recogni

Recogni develops high-compute, low-power, and low-latency neural network inference processors with unmatched performance among automotive inference processing systems. Fabio will share how ML engineers and researchers at Recogni leverage Flyte as part of their internal developer platform to speed up machine learning experiments for Recogni’s ML-silicon co-design, to develop a state-of-the-art automotive perception stack, and to compress and mathematically convert these models for deployment.

Lessons Learned from Running AI Models at Scale

Blackshark.ai is analyzing satellite imagery of the whole planet with AI. In this talk, we explore the lessons learned from training and executing AI models at scale, touching upon challenges such as managing large datasets, ensuring model reliability, and maintaining system performance. We will also discuss the importance of efficient workflows, robust infrastructure, and the need for continuous monitoring and optimization.
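For readers unfamiliar with Flyte's programming model, here is a minimal sketch using the open-source flytekit SDK; the task and workflow names and the toy logic are illustrative, not taken from the talks.

```python
# Minimal sketch of the Flyte programming model using the open-source
# flytekit SDK. Task and workflow names and the toy logic are illustrative.
from typing import List

from flytekit import task, workflow


@task
def prepare_data(n: int) -> List[int]:
    # Placeholder for real data preparation (download, clean, split, ...).
    return list(range(n))


@task
def train_model(data: List[int]) -> float:
    # Placeholder for a training step; returns a toy "metric".
    return sum(data) / max(len(data), 1)


@workflow
def training_pipeline(n: int = 10) -> float:
    # Flyte builds the dependency graph from how task outputs are wired together.
    return train_model(data=prepare_data(n=n))


if __name__ == "__main__":
    # Runs locally; the same file can be registered to a Flyte cluster.
    print(training_pipeline(n=10))
```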
# Scaling MLOps
# Computer Vision
# Union
Thomas Capelle, Boris Dayma, Jonathan Whitaker & 2 more speakers · Nov 22nd, 2023

MLOps Community: LLMs Mini Summit

Deep Dive on LLM Fine-tuning

In his session, Thomas focuses on understanding the ins and outs of fine-tuning LLMs. We all have a lot of questions during the fine-tuning process. How do you prepare your data? How much data do you need? Do you need to use a high-level API, or can you do this in PyTorch? During this talk, we will try to answer these questions. Thomas will share some tips and tricks from his journey in the LLM fine-tuning landscape: what worked and what did not, and hopefully you will learn from his experience and the mistakes he made (a minimal fine-tuning sketch follows these abstracts).

A Recipe for Training Large Language Models

AI models have become orders of magnitude larger in the last few years. Training such large models presents new challenges and has mainly been practiced in large companies. In this talk, we tackle best practices for training large models, from early prototype to production.

What the Kaggle 'LLM Science Exam' Competition Can Teach Us About LLMs

This competition challenged participants to submit a model capable of answering science-related multiple-choice questions. In doing so, it provided a fruitful environment for exploring most of the key techniques and approaches being applied today by anyone building with LLMs. In this talk, we look at some key lessons that this competition can teach us.

Do you really know what your model has learned?

Leap Labs demonstrates how data-independent model evaluations represent a paradigm shift in the model development process, all through our dashboard’s beautiful Weights & Biases Weave integration.
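As a hedged illustration of the kind of fine-tuning loop those questions point at (not Thomas's actual setup), the sketch below runs a tiny supervised fine-tune of a small causal LM with the Hugging Face Trainer; the model name, toy dataset, and hyperparameters are placeholders.

```python
# Hedged sketch of a small supervised fine-tune with Hugging Face transformers.
# The model name, toy dataset, and hyperparameters are placeholders only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for whichever base model you are tuning
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy instruction-style data; real fine-tunes live or die by data preparation.
examples = [{"text": "Q: What is MLOps?\nA: Operating ML systems in production."}]
dataset = Dataset.from_list(examples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal LM) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```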
# LLM Fine-tuning
# Large Language Models
# Weights and Biases
# Virtual Meetup
Pavol Bielik, David Garnitz & Ben Epstein · Nov 13th, 2023

Model Blind Spot Discovery for Better Models

Did you know that 87% of AI models never move from the lab to production? In fact, this is one of the biggest challenges faced by machine learning teams today. Just because a model excels in a test environment doesn't guarantee its success in the real world. Furthermore, as you deploy AI models to production, they often degrade over time, especially as new data arrives. So, how do you know what is causing your AI models to fail? And how do you fix these issues to improve model performance? Let's delve into these issues and provide insights on how you and your ML teams can systematically identify and fix model and data issues. Empowered by intelligent workflows, our end-to-end AI platform is built by ML engineers to enhance your model's performance across the entire AI lifecycle, all while unleashing the full potential of robust and trustworthy AI at scale.
# Model Blind Spot Discovery
# LatticeFlow
# VectorFlow
Sankalp Gilda, Raahul Dutta, Uri Shamay & 1 more speaker · Sep 28th, 2023

LLM Security

Robust Resampling for Time Series Analysis: Introducing tsbootstrap

Time series data is ubiquitous across science, engineering, and business. Proper statistical analysis of time series requires quantifying uncertainty in models and forecasts. Bootstrapping is a powerful resampling technique for this, but classical bootstrap methods fail for time series because they disrupt temporal dependencies. This talk introduces tsbootstrap, an open-source Python library implementing state-of-the-art bootstrapping tailored for time series. It covers essential techniques like block bootstrapping along with cutting-edge methods like Markov and sieve bootstrapping (a minimal sketch of the block bootstrap idea follows these abstracts). We'll provide an intuitive overview of key time series bootstrapping approaches and demonstrate tsbootstrap functionality for resampling financial data. Users can leverage tsbootstrap for robust confidence intervals, model validation, and distribution-free inference.

LLM Security is a Real Topic: Safeguard Large Language Model Fortresses!

Let's highlight a comprehensive agenda addressing the security and ethical challenges of deploying Large Language Models (LLMs). We'll cover topics ranging from LLM security and the risks of prompt injection, denial of service, and model theft to concerns surrounding insecure plugins and overreliance on LLMs. The agenda also includes strategies for tackling these issues and explores data poisoning mitigation through differential privacy.

GeniA: Your Engineering Gen AI Team Member

In this talk, Uri introduces GeniA, a new open-source project we've developed, and discusses its role as an enabler for the future of platform engineering using gen AI.
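To make the block bootstrap idea concrete, here is an illustrative moving block bootstrap in plain NumPy. It sketches the core technique that libraries such as tsbootstrap package and refine; it is not the tsbootstrap API, and the series, block length, and statistic are arbitrary demo choices.

```python
# Illustrative moving block bootstrap in plain NumPy. This sketches the core
# technique that libraries such as tsbootstrap package and refine; it is not
# the tsbootstrap API, and all parameters here are arbitrary demo values.
import numpy as np


def moving_block_bootstrap(x, block_length, n_boot, seed=0):
    """Resample a 1-D series by concatenating randomly chosen overlapping
    blocks, preserving short-range dependence within each block."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_length))
    samples = np.empty((n_boot, n_blocks * block_length))
    for b in range(n_boot):
        starts = rng.integers(0, n - block_length + 1, size=n_blocks)
        samples[b] = np.concatenate([x[s:s + block_length] for s in starts])
    return samples[:, :n]  # trim back to the original length


# Toy autocorrelated (AR(1)-like) series and a bootstrap CI for its mean.
rng = np.random.default_rng(1)
series = np.zeros(200)
for t in range(1, 200):
    series[t] = 0.7 * series[t - 1] + rng.normal()

boot = moving_block_bootstrap(series, block_length=10, n_boot=500)
low, high = np.percentile(boot.mean(axis=1), [2.5, 97.5])
print(f"95% bootstrap CI for the mean: [{low:.3f}, {high:.3f}]")
```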
# tsbootstrap
# LLM Security
# GeniA
# developyours.com
# elsevier.com