Exploring the Latency/Throughput & Cost Space for LLM Inference
Posted Oct 09, 2023 | Views 1.2K
# LLM Inference
# Latency
# Mistral.AI
Speakers
Timothée Lacroix
CTO @ Mistral AI
Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community