Join us for our next virtual event that's all about the cutting-edge world of AI. We're bringing together some of the brightest minds from startups to enterprises. These practitioners will share their insights and experiences on using large language models in production.
I guarantee you will not find this many in-depth technical discussions on the current challenges of using LLMs anywhere else, virtual or in person. Period. - Demetrios
This isn't just another tech event. We're committed to creating an inclusive and diverse space where everyone feels welcome. We believe that the best ideas come from a mix of different perspectives, and we can't wait to hear yours!
Here is what you can look forward to:
Get into the technical aspects of deploying and managing large language models.
Hear from leaders in the AI industry, sharing their journey and the lessons they've learned along the way.
Connect with like-minded professionals who are as passionate about AI as you are.
Because who doesn't love a good surprise?
AI Engineers, ML Engineers, Product Engineers, and Data Scientists. We promise it'll be fun and light-hearted, yet professional. Don't just take our word for it: check out the last event to get an idea of the caliber of talks you can expect.
This is an opportunity to learn, connect, and be inspired. Register today and let's shape the future of AI together!
We are not chasing hype. You will hear from real practitioners who have hands-on experience incorporating LLMs into their products. These aren't Twitter demos. - Demetrios
Huge thanks to our partner for this edition, Prosus. Because of their support, we will be able to rent a live broadcasting studio on the day of the event.
Things are getting pretty serious...
The future of Generative AI will be shaped by many factors, such as scaling laws, the evolution of agents, multi-modality, and open-source contributions. However, challenges such as GPU and talent shortages and regulation could pose obstacles. Join us as we delve into the fascinating world of Generative AI and explore the key drivers that will shape its development over the next three years.
This tutorial starts by surveying the different ways we can use LLMs. Then, we will take a deeper dive into various LLM finetuning strategies, such as low-rank adaptation, and learn how we can create custom LLMs using open-source software.
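As a rough illustration of the low-rank adaptation idea mentioned in this abstract (a minimal NumPy sketch of the concept, not code from the tutorial), a frozen weight matrix W is augmented with a trainable product of two small matrices B and A, so only a small fraction of the parameters needs to be updated:

```python
import numpy as np

d, k = 768, 768   # dimensions of a frozen weight matrix W
r = 8             # low-rank bottleneck, r << min(d, k)
alpha = 16        # scaling factor applied to the low-rank update

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))         # pretrained weights, kept frozen
A = rng.standard_normal((r, k)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, starts at zero

def adapted_forward(x):
    # Output of the adapted layer: W x plus the scaled low-rank update B A x.
    return W @ x + (alpha / r) * (B @ (A @ x))

# At initialization B = 0, so the adapted layer matches the frozen one exactly.
x = rng.standard_normal(k)
assert np.allclose(adapted_forward(x), W @ x)

# Only B and A are trained: here that is ~2% of the full parameter count.
full_params = d * k            # 589824
lora_params = r * (d + k)      # 12288
print(full_params, lora_params)
```

In practice this is what libraries built on open-source models automate for you; the sketch just shows why the trainable-parameter count shrinks so dramatically.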
Getting the right LLM inference stack means choosing the right model for your task, and running it on the right hardware, with proper inference code. This talk will go through popular inference stacks and set-ups, detailing what makes inference costly. We'll talk about the current generation of open-source models and how to make the best use of them, but we will also touch on features currently missing from the open-source serving stack as well as what the future generations of models will unlock.
Prosus AI, a top-tier applied AI centre, drives rapid experimentation and implementation of AI throughout Prosus's global portfolio, which includes over 80 technology companies with more than 800 AI experts. In this talk we show how AI is harnessed for discovery within the Prosus network. We will share insights gained from 10,000 colleagues who utilise generative AI daily across the group, significantly enhancing the impact of their work.
Get your prompts ready and put them in the chat so Demetrios can improvise a song with them
Making LLMs reliable is hard. You can't debug or unit test them, not in the traditional sense at least. Instead, you'll need to turn to the practice of Observability, by instrumenting your feature to produce rich telemetry and analyzing behavior from that data. Observability can also act as a key source of data for evaluations.
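One way to picture that instrumentation (a minimal sketch not tied to any particular observability vendor; `call_llm` is a hypothetical stand-in for your model client) is to wrap each LLM call so it emits a structured telemetry event:

```python
import json
import time
import uuid

def call_llm(prompt):
    # Hypothetical stand-in for a real model call.
    return "stubbed response"

def instrumented_call(prompt, model="my-llm"):
    event = {
        "trace_id": str(uuid.uuid4()),
        "model": model,
        "prompt_chars": len(prompt),
    }
    start = time.monotonic()
    try:
        response = call_llm(prompt)
        event["response_chars"] = len(response)
        event["status"] = "ok"
        return response
    except Exception as exc:
        event["status"] = "error"
        event["error"] = repr(exc)
        raise
    finally:
        event["latency_ms"] = round((time.monotonic() - start) * 1000, 2)
        # Emit a structured log line; a collector can aggregate these for
        # dashboards, debugging, and as raw material for evaluations.
        print(json.dumps(event))

instrumented_call("Summarize our release notes.")
```

The same events that power dashboards can later be sampled and labeled, which is how observability data feeds evaluations.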
Unlock the power of real-world LLM use cases and learn how to keep them grounded and deliver accurate results through techniques such as Retrieval Augmented Generation (RAG). Leverage Databases with vector support as a bridge between LLMs and your enterprise Gen AI apps.
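A toy sketch of the RAG pattern this session describes, using simple word-overlap scoring in place of a real embedding model and vector-capable database (both of which are assumptions here, purely for illustration):

```python
def score(query, doc):
    # Crude relevance score: shared words. A real system would compare
    # embeddings stored in a database with vector support.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query, docs, k=2):
    # Rank documents by relevance and keep the top k.
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, docs):
    # Ground the model by pasting retrieved context ahead of the question.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "All refunds require the original order number.",
]
print(build_prompt("How long do refunds take?", docs))
```

The prompt the model finally sees contains only the retrieved passages, which is what keeps answers grounded in your own data rather than the model's parametric memory.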
So you've built your first LLM product. Now what? If you're a Product person, you need to understand how people are using it and how it's performing. That's where product analytics comes in. But it's a totally different problem from product analytics for graphical user interfaces: you need to understand mountains of text. This talk will cover the key considerations for building great end-user experiences with LLMs, from a Product Manager's perspective.
How to fix the errors you identify (do you just do more prompt engineering?)
⢠Martian is focused on building a model router to dynamically route every prompt to the best LLM for highest performance and lowest cost. ⢠Corti, the Al Co-Pilot for health care uses Al to improve patient care, demonstrating the potential of Al in healthcare and medical decision-making. They recently raised $60M, with Prosus being one of the lead investors. ⢠Transforms is pioneering in synthetic entertainment, showing how Al can transform the way we create and consume media.
Gandalf, the light-hearted AI game created by Lakera, lets anyone experience LLM-powered chatbots and their security vulnerabilities in a fun and educational environment. The premise is simple: Gandalf knows a bunch of secret passwords. He also knows he shouldn't tell you what they are. So now it's up to you to find out if you can trick him into revealing his secrets. But beware: each time Gandalf levels up, he becomes more defensive and more ingenuity is required to fool him. Gandalf has been played by over half a million people and is used by organizations around the world to raise awareness about AI security.
AutoGPT sparked the imaginations of millions. It's exciting because you can see what agents will be able to do just by talking to them as you would with a human. It blew up because of that promise, not because of actual use cases. This talk starts with what an agent is: something you interact with through natural language that performs actions in the real world (think RPA), and what agents could do in the future. But they suck right now. Why is there no actual use case yet? Reliability, memory, model complexity, architecture, and tokens per second. Ultimately, we need a loss function so we can do test-driven development and improve agents: the project gets thousands of PRs with no way to test them, and you don't know how to take the next step if you don't know where you're going. Getting there is related to how we build language models themselves: performance (benchmarks), safety (monitoring), and standardization (the agent protocol). Research pedigree is no longer the barrier to making an impact in this space; creativity is. And now there's a clear way to make it happen, including some of the work underway at AutoGPT.
Discover how GetYourGuide navigates the dynamic landscape of LLMs and delivers products that are valuable to consumers and the business. The talk will cover the decision-making process of when to opt for LLMs over supervised models, offering practical insights into implementation and how these models are put into production at GetYourGuide. It will also go deeper into strategic LLM prioritisation, streamlining their integration into product processes, and ensuring safe deployment to consumers.
This panel will explore the transformative role of AI in EdTech, discussing its potential to enhance learning experiences and personalize education. Panelists will share insights on AI use cases, challenges in AI integration, and strategies for building a differentiated business model in the evolving AI landscape. The discussion will also look ahead at how the latest wave of GenAI is set to shape the future of education. Join us to understand the exciting prospects and challenges of AI in EdTech.
Put down the screen for a moment, close your eyes, and bliss out between the sessions.
How can companies best build useful and differentiated applications on top of language models? Many of the products and companies built do this by providing the relevant context to LLMs and asking them to reason appropriately.
In this talk, Harrison will discuss the different types of context you should be aware of, the different levels of cognitive architectures that are emerging, and how LangChain and LangSmith are built to help with this journey.
A video-game themed walkthrough of LLM products in today's markets.
Deploying LLMs to products is no easy feat. It's common to have dozens of model variants when trying things out. As usage scales up, cost-to-serve and latency become primary concerns. In this talk, we will dive into how the Fireworks.ai GenAI Platform helps developers on the journey from early experimentation to highly loaded production deployments without breaking the bank.
Data quality is the foundation of successful Generative AI, traditional ML, and data-driven initiatives. In this talk, I will share our research results on the tangible impact of poor data quality on model performance and training cost.
In this talk, I'll cover some key privacy considerations to keep in mind when building ML applications using LLMs.
What have I learned from building tools for self-driving cars, and how might we apply those learnings to building LLM applications?
The rapid advancements in Natural Language Processing (NLP) have paved the way for the deployment of Large Language Models (LLMs) in real-world production systems. This talk aims to provide a succinct overview of the current state of LLMs in production, emphasizing their capabilities, deployment strategies, and the challenges encountered.