MLOps Community Mini Summit
# Milvus
# Jina AI
# Zilliz
Fresh Data, Smart Retrieval: Milvus & Jina CLIP Explained
Keeping Data Fresh: Mastering Updates in Vector Databases
Have you built a RAG (Retrieval Augmented Generation) system, but now face challenges with updating data? In this talk, we will explore how updates are managed in vector databases, focusing particularly on Milvus. We'll cover the challenges and practical solutions for maintaining data freshness in scenarios that require high throughput and low latency. By the end of this session, you'll understand the mechanics behind data updates in Milvus and how to ensure your database remains both accurate and efficient.
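As background for the talk, the core idea of "upsert" semantics can be shown with a toy in-memory store. This is a minimal illustrative sketch, not Milvus's actual implementation (Milvus manages updates across segments with delete markers and compaction); the class and key names here are invented for the example.

```python
import math

class ToyVectorStore:
    """Minimal in-memory store illustrating upsert semantics:
    an upsert replaces any existing vector with the same primary key,
    so searches always see the freshest data."""

    def __init__(self):
        self._vectors = {}  # primary key -> vector

    def upsert(self, key, vector):
        # Replace-on-conflict: the old entry (if any) is overwritten.
        self._vectors[key] = vector

    def delete(self, key):
        self._vectors.pop(key, None)

    def search(self, query, top_k=3):
        # Brute-force cosine similarity over the current entries only.
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)
        ranked = sorted(self._vectors.items(),
                        key=lambda kv: cos(query, kv[1]), reverse=True)
        return [k for k, _ in ranked[:top_k]]

store = ToyVectorStore()
store.upsert("doc1", [1.0, 0.0])   # stale embedding
store.upsert("doc2", [1.0, 0.1])
store.upsert("doc1", [0.0, 1.0])   # refresh doc1: overwrites the stale vector
print(store.search([0.0, 1.0], top_k=1))  # -> ['doc1']
```

A production system has to provide the same replace-on-conflict guarantee while the index is being queried concurrently, which is where the throughput and latency tradeoffs discussed in the talk come in.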
Jina CLIP: Your CLIP Model Is Also Your Text Retriever
Does your CLIP model underperform on text-only tasks? In this talk, we will introduce our novel multi-task contrastive training method designed to bridge this performance gap. We developed the Jina CLIP model with the goal of connecting long documents, queries, and images in a single embedding space. During this session, you will gain insights into the methodology behind our training process and the implications of our findings for future multimodal and text-based retrieval systems.
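The payoff of a single embedding space is that one index can serve both text-to-text and text-to-image retrieval. The sketch below uses hypothetical hand-written vectors (a real system would obtain them from the model's text and image encoders); the corpus entries and dimensions are invented for illustration.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embeddings: in a jointly trained model, text and image
# encoders map into the same space, so text and image items can be
# ranked against a single query without separate indexes.
corpus = {
    "text:climate report": [0.9, 0.1, 0.0],
    "image:glacier photo": [0.8, 0.2, 0.1],
    "text:cake recipe":    [0.0, 0.1, 0.9],
}

def retrieve(query_vec, k=2):
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]),
                    reverse=True)
    return ranked[:k]

print(retrieve([1.0, 0.0, 0.0]))  # -> ['text:climate report', 'image:glacier photo']
```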
Stephen Batifol, Andreas Koukounas, Saba Sturua & 1 more speaker · Jun 19th, 2024
Mahesh Murag, Jose Navarro, Nikhil Garg & 2 more speakers · May 8th, 2024
Building a Unified Feature Platform for Production AI
Mahesh walks through Tecton's journey to build a unified feature platform that powers large-scale real-time AI applications using only Python. He'll dive into how Tecton has navigated key tradeoffs, like balancing developer experience with scalability and flexibility, while adapting to rapidly evolving data and ML engineering best practices.
Journey of a Made-to-Measure Feature Platform at Cleo
Jose shows how the platform team at Cleo has built a production-ready Feature Platform using Feast.
DIY: How to Build a Feature Platform at Home
Nikhil decomposes a modern feature platform into its key components, describes a few common options for each component, gotchas in assembling them, and some key architectural tradeoffs.
# AI Innovations
# Tecton.ai
# Fennel.ai
# meetcleo.com
Diana C. Montañes Mondragon & Nick Schenone · Apr 17th, 2024
Generative AI in Molecule Discovery
Molecules are all around us, in our medicines, clothes, and even our food. Finding new molecules is crucial for better treatments, eco-friendly products, and saving the planet. Different industries have been using Machine Learning and AI to discover molecules, but now there's gen AI, which can enable further breakthroughs. During this talk, we explore some use cases where gen AI can make a big difference in finding new molecules.
Optimizing Gen AI in Call Center Applications
There are many great off-the-shelf gen AI models and tools available; however, using them in production often requires additional engineering effort. In this talk, we explore the challenges faced when building a gen AI use case for a call center application, such as maximizing GPU utilization, speeding up the overall pipeline using parallelization and domain knowledge, and moving from POC to production.
# Gen AI
# Molecule Discovery
# Call Center Applications
# QuantumBlack
# mckinsey.com/quantumblack
David Espejo, Fabio Grätz, Arno Hollosi & 1 more speaker · Dec 6th, 2023
Flyte: A Platform for the Agile Development of AI Products
The development of traditional apps differs from how AI products are built, primarily due to distinctions in inputs (program vs data) and process structure (sequential vs iterative). However, ML/Data Science teams could benefit from adopting established Software Engineering patterns, bridging the gap that frequently impedes the transition of ML applications to production. In this talk, David will introduce Flyte, an open-source platform that empowers ML/DS teams to collaborate effectively and expedite the delivery of production-grade AI applications.
Flyte at Recogni
Recogni develops high-compute, low-power, and low-latency neural network inference processors with unmatched performance among automotive inference processing systems. Fabio will share how ML engineers and researchers at Recogni leverage Flyte as part of their internal developer platform to speed up machine learning experiments for Recogni's ML-silicon co-design, to develop a state-of-the-art automotive perception stack, and to compress and mathematically convert these models for deployment.
Lessons Learned from Running AI Models at Scale
Blackshark.ai is analyzing satellite imagery of the whole planet with AI. In this talk, we explore the lessons learned from training and executing AI models at scale, touching on challenges such as managing large datasets, ensuring model reliability, and maintaining system performance. We will also discuss the importance of efficient workflows, robust infrastructure, and the need for continuous monitoring and optimization.
# Scaling MLOps
# Computer Vision
# Union
Thomas Capelle, Boris Dayma, Jonathan Whitaker & 2 more speakers · Nov 22nd, 2023
Deep Dive on LLM Fine-tuning
In his session, Thomas focuses on understanding the ins and outs of fine-tuning LLMs. We all have a lot of questions during the fine-tuning process: How do you prepare your data? How much data do you need? Do you need to use a high-level API, or can you do this in PyTorch? During this talk, we will try to answer these questions. Thomas will share some tips and tricks from his journey in the LLM fine-tuning landscape, covering what worked and what did not, and hopefully you will learn from his experience and the mistakes he made.
A Recipe for Training Large Language Models
AI models have become orders of magnitude larger in the last few years. Training such large models presents new challenges, and has been mainly practiced in large companies. In this talk, we tackle best practices for training large models, from early prototype to production.
What The Kaggle 'LLM Science Exam' Competition Can Teach Us About LLMs
This competition challenged participants to submit a model capable of answering science-related multiple-choice questions. In doing so, it provided a fruitful environment for exploring most of the key techniques and approaches being applied today by anyone building with LLMs. In this talk, we look at some key lessons that this competition can teach us.
Do you really know what your model has learned?
Leap Labs demonstrates how data-independent model evaluations represent a paradigm shift in the model development process, all through our dashboard's beautiful Weights & Biases Weave integration.
# LLM Fine-tuning
# Large Language Models
# Weights and Biases
# Virtual Meetup
Pavol Bielik, David Garnitz & Ben Epstein · Nov 13th, 2023
Did you know that 87% of AI models never move from lab to production?
In fact, this is one of the biggest challenges faced by machine learning teams today. Just because a model excels in a test environment doesn't ensure its success in the real world. Furthermore, as you deploy AI models to production, they often degrade over time, especially with incoming new data. So, how do you know what is causing your AI models to fail? And how do you fix these issues to improve model performance?
Let's delve deep into these issues and provide insights on how you and your ML teams can systematically identify and fix model and data issues. Empowered by intelligent workflows, our end-to-end AI platform is built by ML engineers to enhance your model's performance across the entire AI lifecycle, all while unleashing the full potential of robust and trustworthy AI at scale.
# Model Blind Spot Discovery
# LatticeFlow
# VectorFlow
Sankalp Gilda, Raahul Dutta, Uri Shamay & 1 more speaker · Sep 28th, 2023
Robust Resampling for Time Series Analysis: Introducing tsbootstrap
Time series data is ubiquitous across science, engineering, and business. Proper statistical analysis of time series requires quantifying uncertainty in models and forecasts. Bootstrapping is a powerful resampling technique for this, but classical bootstrap methods fail for time series because they disrupt dependencies. This talk introduces tsbootstrap, an open-source Python library implementing state-of-the-art bootstrapping tailored for time series. It covers essential techniques like block bootstrapping along with cutting-edge methods like Markov and sieve bootstrapping. We'll provide an intuitive overview of key time series bootstrapping approaches and demonstrate tsbootstrap functionality for resampling financial data. Users can leverage tsbootstrap for robust confidence intervals, model validation, and distribution-free inference.
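The block-bootstrap idea the talk covers can be sketched in a few lines: instead of resampling individual observations (which destroys autocorrelation), we resample contiguous blocks. This is an illustrative sketch of the moving-block technique only, under invented names — it is not tsbootstrap's API; use the library itself for the methods described above.

```python
import random

def moving_block_bootstrap(series, block_length, n_bootstraps, seed=0):
    """Resample a series by concatenating randomly chosen contiguous
    blocks, preserving within-block temporal dependence."""
    rng = random.Random(seed)
    n = len(series)
    # Every admissible starting index for a full-length block.
    starts = list(range(n - block_length + 1))
    samples = []
    for _ in range(n_bootstraps):
        resampled = []
        while len(resampled) < n:
            s = rng.choice(starts)
            resampled.extend(series[s:s + block_length])
        samples.append(resampled[:n])  # trim to the original length
    return samples

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
samples = moving_block_bootstrap(series, block_length=3, n_bootstraps=2)
```

Computing a statistic (e.g. the mean) over many such resamples yields an empirical distribution from which confidence intervals can be read off without distributional assumptions.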
LLM Security is a Real Topic - Safeguard Large Language Model Fortresses!
Let's highlight a comprehensive agenda addressing the security and ethical challenges of deploying Large Language Models (LLMs). We'll cover topics ranging from LLM security and the risks of prompt injection, denial of service, and model theft to concerns surrounding insecure plugins and overreliance on LLMs. The agenda also includes strategies for tackling these issues and explores data poisoning mitigation through differential privacy.
GeniA: Your Engineering Gen AI Team Member
In this talk, Uri introduces GeniA, a new open-source project we've developed, and discusses its role as an enabler for the future of platform engineering using Gen AI.
# tsbootstrap
# LLM Security
# GeniA
# developyours.com
# elsevier.com