MLOps Community
# LLM in Production
# Large Language Models
# DevTools
# AI-Driven Applications

DevTools for Language Models: Unlocking the Future of AI-Driven Applications

Diego Oppenheimer

Why is MLOps Hard in an Enterprise?

Maria Vechtomova & Basak Eskili

Data Privacy and Security

Diego Oppenheimer, Gevorg Karapetyan, Vin Vashishta, Saahil Jain & Shreya Rajpal

Challenges and Opportunities in Building Data Science Solutions with LLMs

Pascal Brokmeier, Daniel Herde & Viktoriia Oliinyk

Cost Optimization and Performance

Lina Weichbrodt, Luis Ceze, Jared Zoneraich, Daniel Campos & Mario Kostelac
LLMs in Production Conference
Popular topics
# Interview
# LLM in Production
# Presentation
# Open Source
# Case Study
# Model Serving
# Coding Workshop
# Monitoring
# Scaling
# Deployment
# Large Language Models
# Panel
# Cultural Side
# Kubernetes
# ML Platform
# ML Workflow
# ML Orchestration
# Feature Stores
# Data Science
Simba Khadder · May 26th, 2023

Why We Built an Open-source Virtual Feature Store? Learnings from Serving 100M Monthly Active Users

Simba Khadder shares his insights from a talk at the MLOps Meetup in San Francisco, drawing on his team's experience serving 100 million monthly active users. He discusses their failures and learnings from building a recommender system: using embeddings, they generated candidate song recommendations and then refined the rankings. Simba emphasizes the importance of resolving workflow challenges to make data scientists more effective, and introduces the concept of a feature store, which transformed their approach to data science.
# Open-source
# Feature Store
# Featureform
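The two-stage recommendation pattern described in the abstract, cheap embedding-based candidate generation followed by a re-ranking step, can be sketched as follows. This is a minimal illustration with random toy embeddings, not code from the talk; the recency signal and blend weight are hypothetical choices.

```python
import numpy as np

# Toy catalog of song embeddings (hypothetical data; in practice these
# would come from a trained model and be served from a feature store).
rng = np.random.default_rng(0)
catalog = rng.normal(size=(1000, 64)).astype(np.float32)
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

def recommend(user_vec, k_candidates=50, k_final=10, recency=None):
    """Two-stage recommendation: cheap candidate generation by cosine
    similarity, then re-ranking the shortlist with an extra signal."""
    user_vec = user_vec / np.linalg.norm(user_vec)
    scores = catalog @ user_vec                      # cosine similarity
    candidates = np.argsort(scores)[-k_candidates:]  # top-k shortlist
    if recency is None:
        recency = np.zeros(len(catalog), dtype=np.float32)
    # Re-rank: blend similarity with a second feature (here, recency).
    final_scores = scores[candidates] + 0.1 * recency[candidates]
    order = np.argsort(final_scores)[::-1][:k_final]
    return candidates[order]

user = rng.normal(size=64).astype(np.float32)
top10 = recommend(user)
```

The split matters operationally: the first stage must scan the whole catalog, so it is kept cheap (a single matrix product), while the more expensive re-ranking logic only touches the shortlist.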
Joao Moura · May 19th, 2023

Enterprise MLOps Patterns and Common Pitfalls

Drawing from his expertise, Joao emphasizes the significance of accessible data, standardized processes, and streamlined implementation in the MLOps landscape. He highlights the crucial roles and responsibilities of various team members involved, including data scientists and MLOps engineers, stressing the need for collaboration and cohesive efforts. Throughout the presentation, Joao addresses common challenges faced by organizations, such as the intricate nature of reversing production deployments and the absence of clearly defined responsibilities within organizational structures. By exploring these hurdles, he equips the audience with valuable insights to overcome obstacles and optimize MLOps practices.
# MLOps Patterns
# Common Pitfalls
Manuel Martin Gomez · May 13th, 2023

MLOps: From the Trenches

Busuu's journey from a greenfield to a mature MLOps platform is a story of growth, learning, and adaptation. As a language learning platform, Busuu relies heavily on machine learning models to provide personalized and effective learning experiences to its users.
# MLOps platform
# Language Learning Platform
# Busuu
Nils Reimers · May 13th, 2023

Large Language Model at Scale

Large Language Models with billions of parameters have the potential to change how we work with textual data. However, running them at scale, on potentially hundreds of millions of texts a day, is a massive challenge. Nils talks about finding the right model size for a given task, model distillation, and promising new ways of transferring knowledge from large models to smaller ones.
# Large Language Models
# LLM in Production
# Cohere
Deepankar Mahapatro · Apr 27th, 2023

Taking LangChain Apps to Production with LangChain-serve

Scalable, serverless deployments of LangChain apps in the cloud, without sacrificing the ease and convenience of local development, and streaming experiences without having to worry about infrastructure.
# LLM in Production
# LangChain
# LangChain-serve
Willem Pienaar · Apr 27th, 2023

Emerging Patterns for LLMs in Production

As the landscape of large language models (LLMs) advances at an unprecedented rate, novel techniques are constantly emerging to make LLMs faster, safer, and more reliable in production. This talk explores some of the latest patterns that builders have adopted when integrating LLMs into their products.
# LLM in Production
# In-Stealth
Daniel Montilla Navas · Apr 27th, 2023

MLOps at Early-stage Start-up

Daniel Montilla Navas, a mechanical engineer turned programmer, shares his experience working in startups and his current role at Correcto, an AI tool for text correction and generation in Spanish. He discusses the challenges of developing the product, including dealing with different dialects and constantly changing objectives. Montilla Navas highlights the importance of research and decision-making in a small team, as well as the need for caution when moving quickly in the development process. Finally, he emphasizes the importance of continuous learning in the ever-evolving field of AI.
# Early-stage Start-up
Adam will highlight potential negative user outcomes that can arise when adding LLM-driven capabilities to an existing product. He will also discuss strategies and best practices that can be used to ensure a high-quality user experience for customers.
# LLM-driven Products
# Autoblocks
Cameron Feenstra · Apr 27th, 2023

Using LLMs to Punch Above your Weight!

As a small business, competing with large incumbents can be a daunting challenge. They have more money, more people, and more data, but they can also be inflexible and slow to adopt new technologies. In this talk, we will explore how small businesses can use the power of large language models (LLMs) to compete with large incumbents, particularly in industries like insurance. We will present two examples of how we are using LLMs at Anzen to streamline insurance underwriting and analyze employment agreements, and discuss ideas for future applications. By harnessing the power of LLMs, small businesses can level the playing field and compete more effectively with larger companies.
# LLM in Production
# Anzen
Lina Weichbrodt, Luis Ceze, Jared Zoneraich, Daniel Campos & Mario Kostelac · Apr 27th, 2023

Cost Optimization and Performance

In this panel discussion, the topic of the cost of running large language models (LLMs) is explored, along with potential solutions. The benefits of bringing LLMs in-house, such as latency optimization and greater control, are also discussed. The panelists explore methods such as structured pruning and knowledge distillation for optimizing LLMs. OctoML's platform is mentioned as a tool for the automatic deployment of custom models and for selecting the most appropriate hardware for them. Overall, the discussion provides insights into the challenges of managing LLMs and potential strategies for overcoming them.
# LLM in Production
# Cost Optimization
# Cost Performance
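Knowledge distillation, one of the LLM cost-optimization methods the panel discusses, trains a small student model to match a large teacher's softened output distribution rather than hard labels. A minimal sketch of the loss at the heart of that technique, using toy logits (not anything from the panel):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 flattens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution. Raising T exposes the teacher's relative confidence
    in the non-argmax classes, which hard labels discard."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    # Scale by T^2 so gradient magnitudes stay comparable across T.
    return -(T * T) * np.mean(np.sum(p_teacher * log_p_student, axis=-1))

# Hypothetical logits for a 3-class task, batch of 2.
teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.0, 0.1]])
student = np.array([[3.5, 1.2, 0.4], [0.3, 2.8, 0.2]])
loss = distillation_loss(student, teacher)
```

In practice this term is usually combined with a standard cross-entropy on the true labels; the loss is minimized exactly when the student's softened distribution matches the teacher's.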
The Birth and Growth of Spark: An Open Source Success Story
Matei Zaharia
MLflow Pipelines: Opinionated ML Pipelines in MLflow
Xiangrui Meng
Age of Industrialized AI
Daniel Jeffries
Monzo Machine Learning Case Study
Neal Lathia