MLOps Community

Fine-Tuning LLMs: Best Practices and When to Go Small

Posted Jun 01, 2023 | Views 2.2K
# Large Language Models
# LLM
# AI-powered Product
# Preemo
# Gradient.ai
SPEAKER
Mark Huang
Co-Founder @ Gradient

Mark is a co-founder and Chief Architect at Gradient, a platform that helps companies build custom AI applications by making it extremely easy to fine-tune foundational models and deploy them into production. Previously, he was a tech lead on machine learning teams at Splunk and Box, developing and deploying production systems for streaming analytics, personalization, and forecasting. Before his career in software development, he was an algorithmic trader at quantitative hedge funds, where he harnessed large-scale data to generate trading signals for billion-dollar asset portfolios.

SUMMARY

With the open-source community releasing foundational models at a blistering pace, there has never been a better time to develop an AI-powered product. In this talk, we walk you through the challenges and state-of-the-art techniques that can help you fine-tune your own LLMs. We also provide guidance on how to determine when a small model is more appropriate for your use case.
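As a rough companion to the talk, here is a minimal sketch of the kind of parameter-efficient fine-tuning workflow it discusses, using Hugging Face transformers and peft with a LoRA adapter on a small open model. The model name, dataset, and hyperparameters below are illustrative assumptions, not the speaker's or Gradient's actual setup.

```python
# Sketch: LoRA fine-tuning of a small causal LM with transformers + peft.
# Model, dataset, and hyperparameters are placeholders for illustration only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base_model = "EleutherAI/pythia-410m"  # assumption: any small causal LM works here
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the frozen base model with low-rank adapters so only a small
# fraction of parameters is trained.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["query_key_value"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy instruction data; swap in your own domain-specific dataset.
data = load_dataset("tatsu-lab/alpaca", split="train[:1000]")

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=4,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           logging_steps=50),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the adapter weights
```

The point of going small: a sub-billion-parameter base plus a LoRA adapter can be trained on a single GPU and served cheaply, which is often enough when the task is narrow and well covered by your fine-tuning data.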


Watch More

38:03
From Research to Production: Fine-Tuning & Aligning LLMs // Philipp Schmid // AI in Production
Posted Feb 25, 2024 | Views 1.2K
# LLM
# Fine-tuning LLMs
# dpo
# Evaluation
Pitfalls and Best Practices — 5 lessons from LLMs in Production
Posted Jun 20, 2023 | Views 1K
# LLM in Production
# Best Practices
# Humanloop.com
# Redis.io
# Gantry.io
# Predibase.com
# Anyscale.com
# Zilliz.com
# Arize.com
# Nvidia.com
# TrueFoundry.com
# Premai.io
# Continual.ai
# Argilla.io
# Genesiscloud.com
# Rungalileo.io