MLOps Community
The MLOps Community is where machine learning practitioners come together to define and implement MLOps. Our global community is the default hub for MLOps practitioners to meet other MLOps industry professionals, share their real-world experience and challenges, learn skills and best practices, and collaborate on projects and employment opportunities. We are the world's largest community dedicated to addressing the unique technical and operational challenges of production machine learning systems.

Events

5:00 PM - 6:00 PM, Oct 30 GMT
Productionalizing AI: Driving Innovation with Cost-Effective Strategies
Successfully deploying AI applications into production requires a strategic approach that prioritizes cost efficiency without compromising performance. In this one-hour mini-summit, we'll explore how to optimize costs across the key elements of AI development.
3:00 PM - 8:00 PM, Nov 13 GMT
AI Agents in Production

Content

Like many companies, Picnic started out with a small, central data science team. As that team grows larger and focuses on more complex models, questions arise about skillsets and organisational setup:
- Use an ML platform, or build our own?
- A central team vs. embedded teams?
- Hire data scientists vs. ML engineers vs. MLOps engineers?
- How to foster a team culture of end-to-end ownership?
- How to balance short-term and long-term impact?
Oct 8th, 2024 | Views 147
Being LLM-native is becoming one of the key differentiators among companies in vastly different verticals. Everyone wants to use LLMs, and everyone wants to be on top of the current tech, but what does it really mean to be LLM-native? LLM-native involves two ends of a spectrum. On one end is the product or service the company offers, which presents many automation opportunities: LLMs can be applied strategically to scale at a lower cost and offer a better experience for users. But being LLM-native involves not only the company's customers; it also extends to every stakeholder in the company's operations. How can employees integrate LLMs into their daily workflows? How can we as developers leverage the advancements in the field not only as builders but as adopters? We will tackle these and other key questions for anyone looking to capitalize on the LLM wave, prioritizing real results over the hype.
Oct 4th, 2024 | Views 287
Bending the Rules: How to Use Information Extraction Models to Improve the Performance of Large Language Models
Generative AI and Large Language Models (LLMs) are revolutionizing technology and redefining what's possible with AI. Harnessing the power of these transformative technologies requires careful curation of data to perform in both cost-effective and accurate ways. Information extraction models, including linguistic rules and other traditional text analytics approaches, can be used to curate data and aid in training, fine-tuning, and prompt-tuning, as well as in evaluating the results generated by LLMs. By combining linguistic rule-based models with LLMs in this multi-modal approach to AI, we can improve the quality and accuracy of LLMs and enable them to perform better on various tasks while cutting costs. We will demonstrate this innovation with a real-world example in public comment analysis.

Scaling Large Language Models in Production
Open-source models have made running your own LLM accessible to many people. It's pretty straightforward to set up a model like Mistral with a vector database and build your own RAG application (a minimal sketch of that pattern follows this listing). But making it scale to high traffic demands is another story. LLM inference itself is slow, and GPUs are expensive, so we can't simply throw hardware at the problem. Once you add things like guardrails to your application, latencies compound.

BAML: Beating OpenAI's Structured Outputs
We created a new programming language that helps developers using LLMs get higher-quality results out of any model. For example, in many scenarios, we can match GPT-4o performance with GPT-4o-mini using BAML. We'll discuss some of the algorithms that BAML uses, how they improve the accuracy of models, and why function calling is both good and bad.
Oct 3rd, 2024 | Views 136
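As a companion to the scaling abstract above, here is a minimal sketch of the "model plus vector database" RAG pattern it mentions. Everything in it is illustrative rather than the talk's actual stack: the hashed bag-of-words `embed` function is a toy stand-in for a real embedding model, `VectorStore` is a simplified in-memory index standing in for a real vector database, and `call_llm` is a hypothetical placeholder for whatever inference endpoint (e.g. a self-hosted Mistral) you would actually use.

```python
# Minimal RAG sketch: retrieve relevant documents by cosine similarity,
# then assemble them into a prompt for the LLM.
import math
from collections import Counter

DIM = 512  # toy embedding dimensionality


def embed(text: str) -> list[float]:
    """Toy embedding: hashed bag-of-words, L2-normalized.

    A real system would call an embedding model here instead.
    """
    vec = [0.0] * DIM
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % DIM] += count
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))


class VectorStore:
    """Simplified in-memory stand-in for a vector database."""

    def __init__(self) -> None:
        self.docs: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def search(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in your own model endpoint here.
    return f"[LLM answer for prompt of {len(prompt)} chars]"


store = VectorStore()
store.add("GPUs are expensive, so batch requests to amortize inference cost.")
store.add("Guardrails add latency; run them concurrently where possible.")
store.add("Vector databases index embeddings for fast similarity search.")

question = "How do I keep LLM serving costs down?"
context = "\n".join(store.search(question))
answer = call_llm(f"Context:\n{context}\n\nQuestion: {question}")
print(answer)
```

Even in this toy form, the structure shows where the scaling concerns from the abstract bite: every request pays for an embedding call, a retrieval, and an LLM call in sequence, so latencies compound exactly as described.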