MLOps Community

UX of an LLM User // Panel 2

Posted Jul 17, 2023 | Views 728
# LLM in Production
# User Experience
# LLM User
# innovationendeavors.com
# bardeen.ai
# adobe.com
# Jasper.ai
SPEAKERS
Davis Treybig
Principal @ Innovation Endeavors

Davis is currently a principal on the investment team at Innovation Endeavors, an early-stage venture firm focused on highly technical companies. He primarily focuses on software infrastructure, especially data tooling and security. Prior to Innovation Endeavors, Davis was a product manager at Google, where he worked on the Pixel phone and the developer platform for the Google Assistant. Davis studied computer science and electrical engineering in college.

Dina Yerlan
Head of Product, Generative AI Data @ Adobe, Firefly

Head of Product, Generative AI Data at Adobe Firefly (a family of foundation models for creatives)

Artem Harutyunyan
Co-Founder & CTO @ Bardeen AI

Artem is the Co-Founder & CTO at Bardeen AI. Prior to Bardeen, he was in engineering and product roles at Mesosphere and Qualys, and before that, he worked at CERN.

Misty Free
Product Manager @ Jasper

Misty Free is a product manager at Jasper, where she focuses on supercharging marketers' speed and consistency in their marketing campaigns with the power of AI. Misty has also collaborated with Stability and OpenAI to offer AI image generation within Jasper. She approaches product development with a "jobs-to-be-done" mindset, always starting with the "why" behind any need, ensuring that customer pain points are directly addressed by the features shipped at Jasper. In her free time, Misty enjoys crocheting amigurumi, practicing Spanish on Duolingo, and spending quality time with her family. Misty will be on a panel sharing her insights and experiences on the real-world use cases of LLMs.

SUMMARY

Explore different approaches to interface design, emphasizing the significance of crafting effective prompts and addressing accuracy and hallucination issues. Discover some strategies for improving latency and performance, including monitoring, scaling, and exploring emerging technologies.

TRANSCRIPT

All right. Our next panel is an awesome group. We have Misty from Jasper AI, Davis from Innovation Endeavors, Dina from Adobe Firefly, and Artem, the CTO of Bardeen. They're all coming together to chat about the UX of LLM users. So without further ado, let's bring them up to the stage.

Hello, Davis. Hello, Misty. Hello, Artem. And last but not least, Dina. All right, thank you all so much for joining us. Yeah, thank you, Lily. Are we good to get started? Yep, go ahead. All right. Hi everyone, I'm the moderator for this panel today. Today we're going to focus on UX and LLMs, which is a really interesting space.

And so maybe to start, Misty, Artem, and Dina, we can go around quickly one by one, and you can each briefly introduce yourself and in particular talk about some of the recent LLM features you've worked on, and we'll dive into some more specific questions from there.

Misty, you wanna go first? Yeah, so I'm Misty. I'm a product manager at Jasper. Jasper is really focused on supercharging content marketers especially, and content marketing teams, with their content creation. A feature that we recently rolled out, and that I actually just got off a webinar introducing to customers, is Jasper Campaigns.

We've introduced the ability, with just one set of context and your selected brand voice, to create all of the assets you need for a campaign instantly, and to add anything really quickly on top of that. So that's been a big thing that we've been really focused on at Jasper.

That, and just in general, making content creation as easy, seamless, and headache-free for users as possible. Awesome. Artem, you wanna go next? Sure. Thanks, Davis. I'm Artem, co-founder and CTO at Bardeen AI. We're a platform for creating web-browser-based automations with heavy use of AI across the board.

So literally today, the launch we did was our ChatGPT plugin. Basically, what it does is allow you to build automations using natural language. And this is kind of built on top of a feature we have inside the product, which we call Magic Box, where you just type what you want to do and we create an automation for you on the fly.

Awesome. And then Dina. Great. Hi, I'm Dina, a product manager at Adobe working on Firefly, a family of generative AI models for creatives. It does text-to-image and text-to-font effects. It has its own website where you can generate different pictures, but we also recently integrated Firefly into different flagship products at Adobe.

So it's now powering things like Generative Fill in Adobe Photoshop and Generative Recolor in Illustrator, and it's also in Adobe Express, and hopefully more products. So yeah, excited to be on this panel. Disclaimer: I'm not a UX designer, so there are probably better people to talk about UX design, but I'll try my best.

And also, I'm mainly representing my own experience working on this both inside and outside Adobe, so I'm not representing any product or talking about any product roadmap, just my own views. Awesome. And so maybe to start, I thought it'd be good if you could each go one by one and talk about the single biggest UX challenge you face in building the product you just described.

And maybe you could talk through the different things you've tried and, ultimately, what you had to figure out in order to solve that UX piece of the product you've built. So, Misty, why don't we start with you on that. Yeah. So, like I talked about, at Jasper we're really trying to solve the content creation pain point, especially for marketers.

We are extending outside of that into more of the research and ideation that happens before content, and then more of the publishing and distribution that happens after. But just within that content creation piece, we're seeing, and I'm sure everyone is feeling this, the fatigue of so many tools coming out and so many different ways of using AI.

You can do anything from using chat to using an extension like AIPRM to get a prompt. There are so many different paths, and I think everybody went through this phase of being so excited and enthusiastic to try anything and everything. And now people are starting to feel more and more of that fatigue.

And so a big challenge that we have is continuing to try to deliver the most efficient, delightful way for people to solve those content creation problems, while balancing that fatigue of, ugh, another new thing that I have to try. So that's something we're always working on, always making sure that any new features we ship are really well validated with our existing customers and with the market in general,

to make sure that there's an appetite for it and that we're not just further contributing to that burnout of new features. Artem, what about you? I think the unpredictability and non-determinism of LLMs is definitely a UX challenge for users, because you pretty much never know what you're going to get.

Sometimes very simple things do not work, and then very sophisticated things work surprisingly well. In terms of a challenge, I very much want to echo what Misty said about the fatigue and the hype around this. I think at this point, users expect a thing that just works.

And so for us, the challenge was to embed this amazing new capability that LLMs provide into the flow of our product and make it seamless, so that it really makes the user experience better, as opposed to it just being there because everyone does that and because it's a cool thing to do.

So for us, the main challenge was to design the experience around the fact that LLMs are not perfect, and to come up with a modality, a way of doing it, that almost makes it so users don't notice.

And if you think about it, a lot of amazing products in their early days had the same problem. Like Google: with the very first versions, you almost never got your results right, but I think the way they solved it was with the UX. When you search something, they drop you on the page with the results, but the search box is still on top.

And so naturally, if what you were looking for was not there, you would go and refine that search. Later, as the product matured, they started doing a better job at it, so now, when I start typing, there are like five things in the dropdown that I can click, but it's essentially the same thing.

It's this kind of interplay between your model and your user, where they go through this sequence. And I think the biggest challenge is how to make that sequence seamless, so it's very natural to the human being who is in front of the computer using your product. We may have just lost Dina for a second, so maybe a quick follow-up on that.

Can you talk through some of the core design patterns you built out that helped you solve that problem? One thing you touched on, for example, is refinement workflows: the LLM produces an initial response, and you allow people to edit it. What are the different things you've tried to make that work, and any learnings on what worked well and what didn't?

For sure. Yeah. One thing is, we literally did what Google is doing, right? So now, if you go into our product and try to generate an automation from your description, we drop you into the preview mode, and we still have that box on top. So users naturally realize: oh, okay, this is not exactly what I was looking for, but let me go and refine it.

The other thing is nudging users, giving them the right idea of how to formulate their query. Basically, think of it as a dropdown with suggestions of what might be relevant. And the interesting challenge there is to use existing, old-school AI things, like recommender systems and collaborative filtering, to find the right things to suggest to the user, which they can then go and use with an LLM.

I mean, it's a fascinating thing, because if you think about it, the effort to go from nothing to a 30-40% working prototype is virtually zero. In an afternoon you can get something working, and then you post it on Twitter, you get excited, you slap a waiting list on top, and you have your new startup.

But then you hit this almost vertical wall, because that's not a product. If your product only works three out of ten times, users are not happy with it; they don't care if it's new technology. Making improvements beyond that becomes really challenging, and you have to find new, creative ways to overcome it, and have other things help your users use the LLM in the right way.
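
As an illustration of the suggestion-dropdown idea above, here is a minimal Python sketch that ranks past successful queries against a partial input, in the spirit of the old-school recommender techniques Artem mentions. The data, scoring, and function names are invented for illustration; a production system would use real telemetry and a proper recommender.

```python
# Hypothetical sketch: rank previously successful user queries against a
# partial input, so the UI can suggest phrasings known to work well with
# the LLM. Data, scoring, and names are invented for illustration.
from collections import Counter
import math

# Assumed telemetry: past queries paired with a success signal
# (e.g. the user kept the generated automation).
PAST_QUERIES = [
    ("send a slack message when a form is submitted", 0.9),
    ("scrape linkedin profiles into a google sheet", 0.8),
    ("summarize a web page and email it to me", 0.7),
    ("send an outreach email to new leads", 0.6),
]

def _tokens(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(partial_input: str, k: int = 3) -> list[str]:
    """Top-k past queries, weighted by text similarity and past success."""
    query = _tokens(partial_input)
    scored = [(_cosine(query, _tokens(q)) * success, q)
              for q, success in PAST_QUERIES]
    return [q for score, q in sorted(scored, reverse=True)[:k] if score > 0]

print(suggest("send an email"))
```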

That makes a lot of sense. I think both of you touched a little bit on what I would describe as the broader idea of interfaces, right? Natural language is obviously the most common interface with LLMs in general, but even within natural language there are different ways to express it, from chat to maybe just a one-shot language command.

And then a lot of teams are actually moving away from language at all with LLMs, trying to abstract it behind, you know, a button. I'd be curious how you have thought about interfaces for LLM-based products, and how you've learned or thought through what the right interface is for these different types of features.

Maybe Misty, we can start with you. Yeah, and Artem, I can definitely relate to that. We have so many engineers who will build a POC overnight, and we all get really excited, like, wow, you could do that in a night. But yeah, then you hit that vertical cliff. So we have kind of been all over the map at Jasper, and have had a lot of internal debates and done a lot of testing over whether

the chat interface, leaving it totally freeform, is better. Actually, just this morning we removed our command bar, which was a very loved, very cool thing where, within a document editor, you had a bar where you could type in commands, generate outputs, and, with one click, add them to your document.

After a few months of use, we realized that it was clunky and in the way. So it's something we're always evolving and iterating on. The product I talked about at the beginning of this, Jasper Campaigns, started off with a chat interface, with just your typical bot interaction of: tell me what you're promoting, tell me what your goals are, and who's your target audience.

After some feedback, we decided that that was even too difficult, that having to read and write so much in order to get to content generation was too cumbersome, and we therefore moved to more of that button interface. So it is something that we're going to be evolving and changing every day.

Like, we took out the command bar this morning; maybe we'll bring it back today, I don't know. But we will always get that feedback, and a lot of that feedback comes through what we see people actually use, which we often see by what they're copying out of their outputs and taking someplace else.

But yeah, always changing. I'm curious, given you've tested so many things: do you have any intuitive rubrics or rules for when you think, oh, this type of problem space, this type of feature, lends itself more toward this type of interface? Or is one of your learnings that in a lot of these cases you just need to test them all and see, and it's hard to know a priori what's going to be best?

Yeah, it's really hard to know what's going to work best in any situation. Something we're seeing consistent, really positive feedback about right now is a tool we introduced a few weeks ago called Dynamic Templates, and a follow-up to that was something we've called Remix. Dynamic Templates give users the ability to just, in a freeform field, tell us exactly what you're trying to do.

So, like: I am an SEO specialist, I have some set of keywords that I'm trying to target, and I'm trying to create Google ads. From that, we will dynamically generate a template with an input field for the keywords you mentioned you want to give us and an input field for more context, and then we will output the ads you've asked us to output.

That was kind of a big bet, because it takes a lot of upfront work, but people pretty consistently have loved it and found it to be really powerful. So that is something where we're taking that power and trying to implement it elsewhere in the app.
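
A rough sketch of how a dynamic-template flow like the one Misty describes could work: an LLM turns a freeform task description into a structured template with input fields, which the UI then renders as a form. `call_llm`, the prompt wording, and the JSON shape are all assumptions here, not Jasper's actual implementation.

```python
# Rough sketch of a dynamic-template flow: an LLM turns a freeform task
# description into a structured template (input fields + a generation
# prompt), which the UI can render as a form. `call_llm` is a hypothetical
# stand-in for a real model API; the JSON shape is an assumption.
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

TEMPLATE_PROMPT = """\
You design form templates. Given the task below, reply with JSON:
{{"fields": [{{"name": ..., "label": ...}}], "generation_prompt": ...}}
The generation_prompt must reference fields as {{field_name}}.

Task: {task}
"""

def build_template(task_description: str) -> dict:
    raw = call_llm(TEMPLATE_PROMPT.format(task=task_description))
    return json.loads(raw)  # real code would validate and retry on bad JSON

def run_template(template: dict, field_values: dict) -> str:
    # Fill the user's form answers into the generated prompt and run it.
    prompt = template["generation_prompt"].format(**field_values)
    return call_llm(prompt)
```

Feeding it "I am an SEO specialist creating Google ads" might plausibly yield fields for keywords and extra context, matching the example above.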

Interesting. And so, Dina, welcome back. Maybe we can go to you for a second. I know one thing we discussed a little bit over email before this call was that prompting is something you have thought a lot about at Firefly: how do you help users know how to craft prompts, create prompts?

How do you maybe avoid the need for them to do prompt engineering at all? I'd love for you to talk a little bit about how you thought about that at Firefly, and I think it relates to this question of interfaces: how much do you want to expose the prompt to the user at all? Yeah. I think the prompt is the main interface right now, right?

Because we do text-to-image, the prompt is the main thing you use to generate the image you're imagining. And this is a pretty new technology; prompting is basically the new Googling, right? Most people might not know how to prompt properly. If you look at some of the data we're seeing, an expert, right,

somebody who has spent a lot of time perfecting their prompt engineering skills, is going to get so much better results, like 90th-percentile results, versus somebody who just says, gimme a picture of a dog, right? You have to add much more detail on top of that. And I know we've been saying that prompt engineering might go away in the future,

that it'll be much, much easier to do. But from what I'm seeing, you really do need to put in a good prompt to get a good result. So I think there are many ways to help. One is that you could auto-fill a prompt: when you get a prompt, you can essentially use AI/ML

there to predict what people want, and you might have your own data to inform that. So instead of the user typing the whole thing as a full-sentence prompt, you actually help them. And then, what we do on the Firefly website is that, after prompting, I think it's really good to have predefined styles.

So we have predefined styles, essentially in a form. You select a resolution; you select, is it going to be a classical style, do you want an anime style, right? Maybe that's something you don't want to put into your prompt because it takes up your space, right? If you look at Midjourney, prompts are very long.

They're like: photorealistic, 3D, extreme resolution, and then you keep adding v5 things. So I think that eventually will go away, and this is something we're still working on: how do you make it very easy to just put what you really want in the prompt, and then maybe have some type of canvas of predefined styles,

drop-down selects, right? A combination of all these things to really help the user generate what they actually want. And then I also really like generating variations: you learn from those, you can map it back, and you make your prompt suggestions better. So I think we're still at a very early stage, and I'm not sure what the right combination of tools will be, but as we keep learning and keep collecting this feedback, we'll have a better idea of how to really empower users to generate what they want.
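
A minimal sketch of the predefined-styles idea Dina describes: the user writes only the subject, and style choices from form controls are appended to the prompt behind the scenes. The style vocabulary and function here are made up for illustration.

```python
# Sketch of the predefined-styles idea: the user writes only the subject
# ("a picture of a dog"), and style choices from form controls are appended
# to the prompt behind the scenes. The style vocabulary here is made up.
STYLES = {"anime": "anime style", "classical": "classical painting"}
RESOLUTIONS = {"standard": "", "high": "extreme resolution, highly detailed"}

def build_image_prompt(subject: str, style: str, resolution: str) -> str:
    parts = [subject, STYLES.get(style, ""), RESOLUTIONS.get(resolution, "")]
    return ", ".join(p for p in parts if p)

print(build_image_prompt("a picture of a dog", "anime", "high"))
# -> a picture of a dog, anime style, extreme resolution, highly detailed
```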

Artem, I know you mentioned prompting earlier too. You also talked about AI suggestions for prompts; I don't know if you also do templatized prompt configurations, similar to what Dina's mentioning, or some of the other things like that.

I'm curious about any insights you've had on how to abstract or handle prompting from the user's perspective on your end, especially because yours is a very complex task. Users can create arbitrary automations, so in some ways it's a more complex design space from a prompting angle. Yeah.

I think the context is important here. For instance, for creating and designing automations, we want to have as few restrictions as possible and just let the user say whatever they want. And some users are very chatty, and some users are very concise.

So we compensate by trying to do as much of the heavy lifting as possible on our end, because, again, the surface area to cover there is so big that it's very hard to come up with something templatized. Now, when it comes to other aspects of the product: again, we're an automation platform, but say you want to create some, let's say, outreach workflows for yourself.

The task there is much more defined: I want to generate text. And I think what Dina said makes a lot of sense in our context as well. There's a spectrum: one end of the spectrum is a box where you have to put your entire prompt yourself, and the other end of the spectrum is just one button,

like: generate an outreach email in Gmail for me. What we have in the middle is a sort of template, a dropdown selection, where we guide the user through it: okay, you're trying to create an outreach email in Gmail; now that we know the task, we know the dimensions across which we need specialization, where we need your input.

So we let them pick those dimensions. But then, behind that, we actually have researchers who are constantly evaluating a lot of different prompts and different combinations: how to call the model, how sensitive the model is to different changes we make. We do formal evaluation. So we try to get a few input points from the user,

but then what happens in the backend is that we construct the actual prompt we want to send to the model on the fly. So, TL;DR, thinking about UX at large: you wouldn't fly a fighter jet with a text message, right?

The interface there is very detailed, overwhelming for someone who's not a specialist. But at the same time, you don't want to put a fighter-jet-like, really, really complex UI in front of the person. So I think what we try to do is, as quickly as possible, try to figure out intent.

And then, once we know the intent, have something highly specialized and highly tuned that puts the user into a familiar place, because users know how to click through a wizard; for 20 years they have been primed to go through this very classic UX.

And then I think the secret is that there is no magic bullet. You have to think about your domain and make something that is familiar to the user, but at the same time leverages all the magic behind the scenes.
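
To make the intent-then-wizard flow concrete, here is a hypothetical sketch: classify the user's intent quickly, then collect only the dimensions that the pre-evaluated prompt template needs. The intents, fields, and template wording are all invented for illustration; Bardeen's actual system is not described at this level of detail in the panel.

```python
# Hypothetical sketch of the intent-then-wizard flow: classify intent
# quickly, then collect only the dimensions the pre-evaluated prompt
# template needs. Intents, fields, and wording are all invented.
WIZARDS = {
    "outreach_email": {
        "fields": ["recipient_role", "product", "tone"],
        "template": ("Write a short outreach email to a {recipient_role} "
                     "about {product}. Tone: {tone}."),
    },
    "summarize_page": {
        "fields": ["page_text", "audience"],
        "template": "Summarize this page for {audience}:\n{page_text}",
    },
}

def detect_intent(user_text: str) -> str:
    # A real system might use a classifier or an LLM call here; a keyword
    # match keeps the sketch self-contained.
    return "outreach_email" if "outreach" in user_text.lower() else "summarize_page"

def build_prompt(user_text: str, answers: dict) -> str:
    wizard = WIZARDS[detect_intent(user_text)]
    missing = [f for f in wizard["fields"] if f not in answers]
    if missing:
        raise ValueError(f"wizard still needs: {missing}")
    return wizard["template"].format(**answers)
```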

That makes a lot of sense. So maybe let's move on to the topic of accuracy. I think we all know that hallucinations are big issues with large language models, and it's also hard to set user expectations around when this will be right, and when it might be wrong and you need to check it. So maybe, Misty, we can start with you. I'd love to get a sense of how you think about constraining the output of these models, making sure they're accurate, or helping users understand when the output might be inaccurate and they're going to need to verify it.

Yeah. So that is something that, again, we're going to continue to improve on, but right now the best way we're ensuring accuracy with Jasper is with this toggle that we have in Jasper Chat: you can actually enable Google search. That is something we're seeing a lot of users use when

they're not only wanting to creatively write and generate content, but also wanting to have it fact-checked against some resources from the web. And we will even include in some prompts, or we'll see users include in some prompts, instructions to include URLs as references. And those URLs,

from what I've seen, for the most part are real URLs, but there are still cases of those hallucinations where it's like factcheck.com and it's not real, or, sorry, that probably is a real URL, don't go to it, but where it will be something made up. And so we are still recommending that people fact-check anything that is stated.

I mean, just as a workshop, we've done things like: write the legal defense of robot marriage. And Jasper did an amazing job writing a legal defense, backing it up with all of these historical Supreme Court cases that obviously never happened. So yeah, it will be something where we're continuing to add,

potentially, accuracy scores in the UI. That's not something that we have today, but the capability to score accuracy and display it to a user is growing. So that's something we'll be thinking about and implementing in the future.
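
A minimal sketch of the search-grounded approach Misty describes: retrieve real sources first, then ask the model to answer using only those sources, so cited URLs exist by construction. `web_search` and `call_llm` are hypothetical stand-ins for a search API and a model API, not Jasper's actual internals.

```python
# Sketch of search-grounded generation: retrieve real sources first, then
# ask the model to answer using only those sources, so cited URLs exist by
# construction. `web_search` and `call_llm` are hypothetical stand-ins.
def web_search(query: str, k: int = 3) -> list[dict]:
    raise NotImplementedError  # -> [{"url": ..., "snippet": ...}, ...]

def call_llm(prompt: str) -> str:
    raise NotImplementedError

def grounded_answer(question: str) -> str:
    sources = web_search(question)
    context = "\n".join(f"[{s['url']}] {s['snippet']}" for s in sources)
    prompt = (
        "Answer the question using ONLY the sources below. Cite a URL in "
        "brackets after each claim. If the sources are insufficient, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```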

Hmm. And Dina, I'd love to get your take on the same space. I mean, you have a more creative product, so maybe there are fewer issues around fact-checking per se. At the same time, I assume you still may have issues with the model being probabilistic and maybe not doing exactly what I expect as a user. Have you had to deal with problems like that, and how have you approached them at Firefly?

Yeah, definitely. It's a big problem, especially when you're generating images. We do have, on the web UI, thumbs up and thumbs down, and then we do ask for a lot of feedback. So first it's: is that what you actually wanted? Right? That's the information we want.

And then accuracy is also: is the quality good? Because it's probabilistic, right? We'll sometimes generate something that's poor quality, that's corrupted. And there's also content moderation; I think that's a big part, and NSFW. We want to make sure we're keeping our users safe and not accidentally generating something that is not

safe. So we let them report these issues, and we save that. And when we are developing our foundational model, it really relies on the data; the causes of all of these inaccuracies go back to the data. So we have to improve our model based on these results, or also improve on the prompt side, or on the

post-generation side. So there are a lot of things we do to essentially make sure it's safe. And I think it's going to be a big field going forward for LLMs because, frankly, right now you have to do it yourself, essentially. So I'm very excited for different people to work on these filters, classifiers, models, gen-AI content moderation tools, to make that process easier, because that's the last-mile type of problem that becomes really relevant when you go to production.
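
A small sketch of the feedback-and-safety loop Dina describes: run a moderation check before showing a generation, and record thumbs up/down and reports so the data can feed back into model and prompt improvements. `safety_score` stands in for a real content-moderation classifier, and the event schema is invented.

```python
# Sketch of a feedback-and-safety loop: run a moderation check before
# showing a generation, and log thumbs up/down and reports so the data can
# feed back into model and prompt improvements. `safety_score` stands in
# for a real content-moderation classifier; the event schema is invented.
import json
import time

def safety_score(image_bytes: bytes) -> float:
    raise NotImplementedError  # 0.0 = safe, 1.0 = unsafe in a real system

def ok_to_show(image_bytes: bytes, threshold: float = 0.5) -> bool:
    return safety_score(image_bytes) < threshold

def record_feedback(prompt: str, generation_id: str, thumbs_up: bool,
                    report_reason: str | None = None) -> None:
    event = {
        "ts": time.time(),
        "prompt": prompt,
        "generation_id": generation_id,
        "thumbs_up": thumbs_up,
        "report_reason": report_reason,  # e.g. "poor quality", "unsafe"
    }
    with open("feedback.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")
```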

Mm-hmm. And Artem, I'd love to get your take on maybe two of the things Dina mentioned. One is this trust and safety layer: is that an issue for you that you've had to think about? And then number two, Dina brought up the idea of measuring performance, getting feedback, and using that to improve the model over time.

How do you think about what signals to collect and how to quantify performance over time? Just throwing in a quick two-minute warning. Thank you. Yeah, for sure it's an issue, because you would be surprised at the things people want to automate. In terms of accuracy, we knew it was going to be a challenge, so we kind of went overboard.

We basically designed a language, a DSL, a domain-specific language for defining automations, that has built-in properties that make it easy for us to verify accuracy. And then, again, we're not trying to take humans out of the loop: for us, people are the best verifiers of accuracy.

So we show the automation to them and say, hey, is this your intent? And then along the way, like Dina was mentioning, we collect a lot of telemetry data: thumbs up, thumbs down; how many attempts does it take them to get to the automation they want; are they happy with it; how often do they use it afterwards?

And then of course, all this data we're feeding back into our creation cycle, so we continuously keep improving on that.
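
To illustrate the DSL idea Artem described a moment ago, here is a hypothetical sketch: the model emits a constrained, machine-checkable list of steps instead of free text, so basic accuracy properties can be verified before anything is shown or run. The step vocabulary and schema are invented, not Bardeen's actual DSL.

```python
# Hypothetical sketch of the DSL idea: the model emits a constrained,
# machine-checkable list of steps instead of free text, so basic accuracy
# properties can be verified before anything is shown or run. The step
# vocabulary and schema are invented, not Bardeen's actual DSL.
KNOWN_STEPS = {
    "open_page": {"url"},
    "extract_table": {"selector"},
    "save_to_sheet": {"sheet_name"},
}

def validate_automation(steps: list[dict]) -> list[str]:
    """Return human-readable problems; an empty list means it passes."""
    problems = []
    for i, step in enumerate(steps):
        action = step.get("action")
        if action not in KNOWN_STEPS:
            problems.append(f"step {i}: unknown action {action!r}")
            continue
        missing = KNOWN_STEPS[action] - step.keys()
        if missing:
            problems.append(f"step {i}: missing params {sorted(missing)}")
    return problems

print(validate_automation([
    {"action": "open_page", "url": "https://example.com"},
    {"action": "extract_table"},  # missing "selector" -> flagged
]))
```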

Got it, that's interesting. And let's see, we may have one minute left, so maybe we can do a lightning round with one last question I find interesting on the UX side: latency and performance can be a huge deal for LLM applications and massively affect UX and the iteration speed of the user. Really quickly, I'd love to hear from each of you, starting with Misty, then Dina, then Artem: has it been an issue for you, and have you done anything interesting technically to improve latency? Yes. Latency and error spikes have been an issue. For context, Jasper is an application layer that sits on top of these models.

And so a lot of the time that latency and those error rates are somewhat out of our control. We have built data monitoring and reporting using Datadog, so we have really good insight into any time those spikes are happening, when wait times are becoming especially long or error rates are becoming especially high. We're then able to post messages in the dashboard if there's something especially problematic, like: hey, Jasper's running into some snags,

please be patient, we're working on it. And then, probably more importantly, we have implemented fallback strategies anywhere that's been identified as an issue, so we're able to quickly switch over to a different model or a different provider in order to hopefully deliver better performance and get people responses as often as possible, at the highest quality possible.
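
A minimal sketch of the fallback strategy Misty describes: try providers in order and move on when one errors out or times out. The provider list and the `call_provider` signature are hypothetical, not Jasper's actual setup.

```python
# Sketch of a provider-fallback strategy: try providers in order and move
# on when one errors out or times out. The provider list and the
# `call_provider` signature are hypothetical.
def call_provider(name: str, prompt: str, timeout_s: float) -> str:
    raise NotImplementedError  # stand-in for a real per-provider client

PROVIDERS = ["primary-model", "secondary-model", "small-local-model"]

def generate_with_fallback(prompt: str) -> str:
    last_error = None
    for name in PROVIDERS:
        try:
            # call_provider is expected to raise on timeout or API error
            return call_provider(name, prompt, timeout_s=10.0)
        except Exception as e:  # timeout, rate limit, 5xx, ...
            last_error = e
    raise RuntimeError(f"all providers failed: {last_error}")
```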

Yeah. I would say for us it's the everyday scaling. It's honestly a good problem to have, when your expectations were exceeded by so much that now you have to scale and shift compute around, or stop something to send compute there. So it's definitely a problem. But I'm excited to see what happens, especially on the serving side, with how these models are served.

But I think if you look at our website, our latency is pretty good, because we've been serving ML-based features on top of Photoshop and different products, so we're building on a previous generation of architecture, but also exploring new things to make it better. For us, there are some things where we can just use the traditional, boring solution to latency, which is caching.

And so we do that: if you wanted to create an automation and you described it in the same words as someone else did, we don't need to run it through the LLM to get you a result.
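
The caching idea in a few lines, as a hedged sketch: normalize the description and reuse a prior result when someone asks for the same automation in the same words, skipping the LLM call entirely. `call_llm` is a stand-in; a real system would use a shared cache with invalidation.

```python
# Sketch of the caching idea: normalize the description and reuse a prior
# result when someone asks for the same automation in the same words,
# skipping the LLM call entirely. `call_llm` is a stand-in; a real system
# would use a shared cache with invalidation.
import hashlib

def call_llm(prompt: str) -> str:
    raise NotImplementedError

_cache: dict[str, str] = {}

def _key(description: str) -> str:
    normalized = " ".join(description.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def generate_automation(description: str) -> str:
    key = _key(description)
    if key not in _cache:
        _cache[key] = call_llm(description)
    return _cache[key]
```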

And then I'm super excited about small models, because then we will be able to run them on edge devices: highly specialized, smaller models that we can distribute. So are you running any models in the browser today, or not yet? I know you're a Chrome extension. Yes, we are. We have a very tiny one, and it's amazing, it's so good: first, you don't have to pay for it, and then it's so much faster.

So, yeah. Awesome. Well, I think we're probably a little over time. I don't know if the moderator needs to come on, but thank you all so much for doing this. Yeah, thanks for inviting me. Great chat. Bye. Thanks for having us. This was awesome. Thank you so much, Davis, Misty, Artem, and Dina, and especially Davis for guiding the conversation.

It's not an easy feat with a couple of people, so that was awesome. All these talks will be recorded, and I guess there were no slides for this one, so I will not be sharing slides. But thank you all so much for joining us. This was awesome. Thanks. All right.
