MLOps Community

Managing Data for Effective GenAI Application

Posted Mar 05, 2024 | Views 562
# Generative AI
# Data Foundations
# QuantumBlack
# mckinsey.com/quantumblack
SPEAKERS
Anu Arora
Principal Data Engineer @ QuantumBlack

Data architect (~12 years) with experience in big data technologies, API development, building scalable data pipelines (including DevOps and DataOps), and building GenAI solutions.

Anass Bensrhir
Associate Partner @ QuantumBlack

Anass leads QuantumBlack in Africa. He specializes in the financial sector and helps organizations deliver successful large data transformation programs.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.

SUMMARY

Generative AI is poised to bring impact across all industries and business functions.

While many companies pilot GenAI, only a few have deployed GenAI use cases, e.g., retailers are producing videos to answer common customer questions using ChatGPT. A majority of organizations are facing challenges to industrialize and scale, with data being one of the biggest inhibitors.

Organizations need to strengthen their data foundations: among leading organizations, 72% noted managing data as one of the top challenges preventing them from scaling impact. Furthermore, leaders reported that more than 31% of their staff's time is spent on non-value-added tasks because of poor data quality and availability issues.

TRANSCRIPT

Anass Bensrhir [00:00:00]: Hello, my name is Anass. I'm an associate partner at QuantumBlack. Speaking of black, I like my coffee and I drink it three times a day: in the morning, after lunch an espresso, and at night a drip coffee. I'm a very big coffee enthusiast.

Anu Arora [00:00:24]: Hello, Anu Arora. I'm a principal data engineer with QuantumBlack, and that's how I take my green tea.

Demetrios [00:00:35]: Welcome back to the MLOps Community podcast. I am your host, Demetrios, and today we are talking with Anu and Anass about how data engineering is still hard. That is right, folks. Despite the text-to-SQL models that have come out, it is not any easier. And that is what we get into today. Specifically, we go through some practical and understandable examples of how these folks are seeing GenAI being used in the wild, but also the challenges that they are encountering when they see these use cases go from MVP to actually in production. You know, we love that in production.

Demetrios [00:01:27]: I mean, there were some incredible points that were hit on here in this conversation. The key example that I loved was when they spoke about how ETL is now something a little bit different. The ETL of unstructured data for GenAI is still very much a work in progress, particularly when you have to look at things like data quality. When you're grabbing a PDF and you're trying to just get that data out of the PDF, how can you measure that data quality? That's much different than if you're tracking something like an event captured by hitting a button on a website, right? So that was fascinating. The other piece to that is when you are dealing with a use case like RAG: how do you update documentation? How do you update your RAG so that the documents are saying the most up-to-date information possible, and how do you make sure that that up-to-date information is what is used in your RAG?

Demetrios [00:02:43]: So you can think about something as trivial as an HR policy that has been updated: you've gone from a European vacation policy of 30 days a year to an American vacation policy of two days a year, and on those two days you're expected to not be without Slack or email. And you have your little RAG going with your HR chatbot, and your HR chatbot is grabbing from the European policy; it's not grabbing from the American policy. What do you got to do? Well, turns out you thought you deleted the documentation that had the old policy on it, but you didn't delete it everywhere. And because of that, the vector database that you're using becomes an absolute landmine, because the LLM will grab information that is potentially outdated. So keeping all of this stuff up to date is incredibly important and not as simple as we would think. There were all kinds of other gems in there.
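
As a rough illustration of the staleness problem Demetrios describes, here is a minimal sketch of re-ingesting a document into a RAG index: every chunk of the old version has to be removed before the new version is added, or retrieval can keep surfacing the outdated policy. The flat dict standing in for a vector store and the ID scheme are hypothetical.

```python
# Hypothetical flat store: chunk_id -> record. A real system would use an
# actual vector database, but the delete-then-upsert pattern is the same.
vector_store = {}

def upsert_document(doc_id: str, version: int, chunks: list[str]) -> None:
    # 1. Delete ALL chunks belonging to any earlier version of this document.
    for cid in [c for c, rec in vector_store.items() if rec["doc_id"] == doc_id]:
        del vector_store[cid]
    # 2. Index the chunks of the new version.
    for i, text in enumerate(chunks):
        vector_store[f"{doc_id}:{version}:{i}"] = {
            "doc_id": doc_id, "version": version, "text": text,
        }

upsert_document("hr-vacation-policy", 1, ["Employees get 30 vacation days per year."])
upsert_document("hr-vacation-policy", 2, ["Employees get 2 vacation days per year."])
assert all(rec["version"] == 2 for rec in vector_store.values())  # no stale chunks
```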

Demetrios [00:03:58]: I'm going to let you all listen and figure it out for yourselves. If you enjoy this episode, leave a review. It would mean the world to me. And tell one friend. We've got to give a huge shout-out to our partners QuantumBlack on this one. Thank you so much for being part of the team, helping the MLOps community be what it is. Let's get into this conversation. Now the fun part begins.

Demetrios [00:04:31]: We wanted to get into this conversation, I know, because there's a lot of talk about LLMs and how useful they are and all that good stuff, and it feels like the data engineers have been left behind. They've been forgotten about. But in all reality, in my eyes, I think the data engineers are more of the hero than ever before. And so I want to discuss a little bit more about the data engineering landscape now that we have the LLMs out and there's this LLM craze and, well, we're now calling it AI. It's not ML anymore, it's all AI. We kind of shifted over to that, which I think is kind of funny. But today with us to talk about this, we've got Anu and Anass, and we should start with a bit of background. Anu, can you kick it off? I know you mentioned you've been working at QuantumBlack for the last seven years and you've been a data engineer for 13, so give us a bit of a breakdown.

Demetrios [00:05:36]: How'd you get into tech?

Anu Arora [00:05:38]: Hi, everyone. My name is Anu, and I'm based in London. Tech and data, right: mostly my role revolves around helping clients in building their scalable data pipelines, capability building, and defining their data architecture. It's been close to seven years here, and I really enjoy working with data and solving the problems that come with the data itself. Like you were talking about, right? LLMs. Everybody talks about LLMs; not many people talk about data, and that's what gives you a competitive edge overall.

Anu Arora [00:06:11]: I started my journey as a net developer long back, but I moved into this data career when everybody was talking about Hadoop and big data long back, and then that's why my journey has started overall, and I'm going to be more than happy to talk about this.

Demetrios [00:06:25]: Yes, it feels like you're prime and ripe for this conversation. And what about you, Anass? How'd you get into this whole field?

Anass Bensrhir [00:06:35]: Oh look, I think it started probably twelve years ago. It has always started with data. My major was in data and AI, and I've been doing data all my life. So I think the journey was on data, across different industries, but what I'm doing now at QuantumBlack is concentrated more on financial institutions. So what I do is help banks and insurers get into AI and GenAI and basically put products in production. Right. I think one aspect of it is how to help the enterprise tackle all the data engineering with all the different architectures and technologies and different maturity levels.

Anass Bensrhir [00:07:22]: Very happy to be here today. I think the subject is very close to my heart too.

Demetrios [00:07:28]: All right folks, so we know we've got these LLMs out there. And before we get into how LLMs are affecting data engineering, let's talk a little bit about how data looks and how data engineering looks in this new paradigm and this new shift that we have. And maybe, Anass, you can kick us off.

Anass Bensrhir [00:07:53]: No, very good. So I think, as you know, data engineering as a concept or as a subject is very important: to basically get the data, give it to a model, and then get insights from the model. It has always been important, from normal reporting to AI, when we had prediction models and all of it. And now I think it's way more important with GenAI, because previously we were touching structured data and we had solved it: we found solutions to measure the quality, to build pipelines, to do lineage on the data, and to have predictability about the output. Now, with GenAI, data quality is way more important because we are basically touching and using unstructured data. I think also the output is touching a lot of potential users in the end. So data engineering has been important, and it's still very important now.

Anass Bensrhir [00:09:05]: And I'm happy to say also that data engineering is now being empowered by AI. So it's not only that it's more important now; a lot of our job from a couple of years back has become very easy, or easier, using GenAI.

Anu Arora [00:09:31]: Just adding to that as well, like we were talking about here, right, overall on this LLM aspect of it and also the UI/UX. Let's take a step back when we are talking about data overall, right? If you're talking about one company versus another company, data is something that's going to be a key differentiator for you, which is going to give you a competitive advantage, because that's something which is owned by you. Other than that, when you talk about the LLM or you talk about your UI/UX: the LLM, everybody is using it as of today. UI/UX, you can build your UI, but think of an example of Instagram or Snapchat, or any feature that comes in in one application. You can see how the other companies might take inspiration, and that feature can also pop up in another application.

Anu Arora [00:10:17]: But there's another thing that is very important, which is data. That can improve the customer experience and engagement for you overall. If you're thinking of an example from retail, right: for a retailer, the customer experience can get more personalized if the data is used there, 100%.

Demetrios [00:10:37]: Yeah. One thing that you mentioned, Anass, which I think is fascinating, is how we're now dealing with so much more unstructured data, and how unstructured data all of a sudden, well, for a lot of us all of a sudden, for others unstructured data has been their bread and butter for years, right? But for a lot of us, all of a sudden it's like we need to learn so many new things. When it comes to unstructured data, a lot of people had to learn what a vector database was, and how to chunk different things, or how to parse these out, or even just how to ingest PDFs. And what does that look like? And I don't know if there are optimal solutions out there yet, because it is still such a new field for all of these different pieces. But what do you feel are some of these new paradigms that we've had to learn about because of this rise of unstructured data?

Anass Bensrhir [00:11:37]: No, very good. So, on unstructured data: I've been through the time, like 2008, 2010, when the data lake was the new thing, and the promised land of the data lake was, hey, you can basically store videos and audio and make sense of it all. But I think until a couple of years back, nobody was putting unstructured data into the data lake, because we didn't have the tools to understand a video and get insights from it, right? So I think now the data lake is being used at its fullest potential, because we can store, process, and use unstructured data. So that component has been with the enterprise for the last ten to 15 years, so there's nothing to change there. The new things here are all the new technologies that the enterprise is looking into using now. But the good news is that they don't have to throw anything away.

Anass Bensrhir [00:12:42]: Everything they have is still very valid, but there are probably two or three additions they need to add to their stack, right? So number one is vector databases, because basically if you want to build a RAG, you need to store vectors in a database. So for the people who want to be serious about it, selecting the right vector database and using it is key. The other aspect is integration with LLMs, because there are multiple archetypes: offline LLM, online, a bit hybrid. So integrating with a new component, which is the LLM, is something new. The third bit is the data management that we were doing with structured data. The structural change now is that beforehand we were connecting to a table and we would get the insights, right, and we would also be able to measure the quality, recency, and all of that. Now, imagine that we are connecting to a mail server, we are connecting to document storage, and we are connecting to video storage systems. So how do we get that data, how do we transform it, how do we store it, and how do we measure the quality? For example, on a PDF, how do we measure whether the input there is right or not, recent or not?

Anass Bensrhir [00:14:16]: So I think these are the new challenges. Luckily, the data management companies have, I mean, they have also discovered them recently, but they quickly designed new solutions: basically, an interactive data catalog that you can ask questions and it will generate a SQL query for you, or that can use the LLM and measure whether a PDF has PII data in it, right. So these are the new things that we see data management companies introducing. And if I want to summarize: nothing to throw away; what enterprises have been investing in for data management or data engineering tools is still very valid. There are probably three additions that my clients are looking at. Number one is vector databases. I think it's key.

Anass Bensrhir [00:15:14]: And the good thing is that we have many solutions, off the shelf or on top of databases they had previously. Number two is integration with the LLM, and figuring out how to make sure that data privacy is there, how to make sure that DataOps is there to send data to and receive data from the LLM. And the third bit is data management tools, like the ETLs, the data catalogs, the data quality tools that were very good at measuring structured data. Now we need to look at how to do the same with unstructured data.
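
To make the first of those additions concrete, here is a minimal sketch of what a vector database does inside a RAG: store embeddings alongside text chunks and return the nearest chunks for a query. The three-dimensional toy vectors are made up; a real system would call an embedding model and use an actual vector store.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: how closely two embedding vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy index: (chunk_id, embedding, text). Real embeddings have hundreds of dims.
index = [
    ("chunk-1", [0.9, 0.1, 0.0], "Vacation policy: 30 days per year."),
    ("chunk-2", [0.1, 0.9, 0.2], "Expense policy: submit receipts monthly."),
]

def retrieve(query_vec: list[float], k: int = 1) -> list[tuple]:
    # Return the k chunks most similar to the query embedding; these become
    # the context handed to the LLM.
    return sorted(index, key=lambda row: cosine(query_vec, row[1]), reverse=True)[:k]

print(retrieve([0.8, 0.2, 0.1]))  # -> the vacation-policy chunk
```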

Demetrios [00:15:48]: Yeah, there's something fascinating there that I want to hit on, because actually there were two things that jumped out at me from what you just said. One is that, up until recently, the data lake was where you would put your unstructured data, and it was almost a place where unstructured data would go to die, because you would throw it there and you would think, we need it, we need it, we're going to do something with it, but you wouldn't actually ever do something with it. And then now this came out and it's like, oh, well, actually, maybe we can make use of this now. Let's figure it out. And then the other one, which I think is fascinating to ponder, is how we have this new paradigm of ETL. The ETL before was, like, how many times did someone click on something? When you're dealing with LLMs, it's more, how can we ingest a PDF and make sure that we can look at it and get the tables correct? And so data quality in that regard is much different than data quality in the regard of, like, are these in dollars or in euros? Why does the table say it's in dollars when really it should be in euros? That's a different type of data quality than what we're thinking about when we're ingesting a PDF and we're not able to correctly parse it out. So with that in mind, when it comes to quality, how do you think about these things, Anu? How can you assure that the data quality is still there as you're ingesting all of these new forms, or maybe new is not the right word, because new is hard to say these days, but these different types of data?

Anu Arora [00:17:37]: Yeah, I think that's a very good question. Maybe in continuation to that, I'll just put in one thing. There's also a misconception that you just put a PDF in an LLM and you'll get an answer. A PDF is going to be huge, right? You have to break it down into chunks. People feel like, oh, I've got the data, just put everything in the LLM and you'll get an answer. But there's a lot of preprocessing that needs to happen in the background as well.

Anu Arora [00:17:58]: You break it down into chunks, because otherwise your cost is going to go so high with the LLM. Now, coming to your question on quality: quality has got a lot of aspects to it, right, when it comes to your data. So when you're doing preprocessing, you have things in place where you're cross-checking that there are no missing values in there. Again, when you've interpreted the data and loaded the data from the PDF, you ensure that you have a prompt written which is going to basically validate the inputs that you have, to ensure that you're not introducing any quality issues. And privacy kind of goes hand in hand with that, which is cross-checking that you're not using any sensitive or PII data, and that your data is consistent as well.

Anu Arora [00:18:43]: So there is a lot of tooling available in the market; I'm not going to go there. But again, you can have prompts written with GenAI that can help you in that case, and also some enterprise tooling that is integrating quality in place, so that when you're reading that data, those checks can also be applied.
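
A minimal sketch of the preprocessing Anu describes might look like the following: split extracted PDF text into overlapping chunks (so you never ship a whole document to the LLM) and run simple quality and PII checks on each chunk. The chunk size and the PII patterns are illustrative assumptions, not a complete policy.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude payment-card pattern

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    # Fixed-size chunks with a little overlap so sentences aren't cut blind.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def check_chunk(c: str) -> list[str]:
    issues = []
    if not c.strip():
        issues.append("empty chunk - likely a bad PDF extraction")
    if EMAIL.search(c) or CARD.search(c):
        issues.append("possible PII - anonymize before sending to the LLM")
    return issues

doc = "Contact jane.doe@example.com about the contract renewal terms. " * 30
for i, c in enumerate(chunk(doc)):
    if (problems := check_chunk(c)):
        print(f"chunk {i}: {problems}")
```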

Anass Bensrhir [00:19:03]: I think also, if you look at data quality, what's important now is that you need to look at the data quality of the input, of the prompt, and also of the output, right. So I'm not going to talk about the news, but I think every week I'm hearing about chatbots saying things that are not correct.

Demetrios [00:19:29]: Chatbot.

Anass Bensrhir [00:19:30]: Yeah. So I think there are many examples. I think this week we heard a good story about challenges that a company has gone through. So here we need to be a bit holistic about data quality. We need to look at the input. We need to look at the prompt. We need to look at the output. And on the output, as I said, there are hallucinations.

Anass Bensrhir [00:19:50]: There's also toxicity, a lot of new concepts. Right. And just on data quality, this is a very important subject, because I can give a specific example. One of my clients is in insurance, in the business insurance line. With big business contracts, they have contracts written in text with some covers and some details about them. So typically what they were looking to do is, with an LLM: we have a claim, and the LLM will look at the claim and will check whether it's covered by the policy or not.

Anass Bensrhir [00:20:34]: Right. So, to help the insurance agent make a quick decision, because sometimes these contracts can be hundreds of pages, depending on the size of the business and everything. So they did a PoC. Very happy, it works fine, they have checked the value at stake, and they're happy: let's go to production, right? But then when we were looking at production, the question was, hey, these contracts are basically scanned and put into document storage, right? How do we make sure that that contract is the latest? Because if we are helping an agent instead of them reading 100 pages and figuring out what to do, what if we are reusing the previous version, like the two-year-old version, where that cover wasn't there? The implications are huge. Right.

Anass Bensrhir [00:21:29]: So then we sat back with the client and figured out how to make sure that the input data is recent, is correct, and is specific to that particular customer. Right. So, on data quality: once the wave of "let's do a PoC, let's check the value of a use case" has passed, now it's getting more serious to take that PoC to production. And as I said, we need to look at the three layers, the three steps: the input; the prompts, where we have the chat history and where we have the context set in; and also the output, which is key to making sure that what goes to production is a bit secure.
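
As a rough sketch of the recency check in Anass's insurance example: before the LLM reads a contract, confirm you are holding the latest scanned version for that specific customer. The document-store records and fields below are hypothetical.

```python
from datetime import date

# Hypothetical document-store metadata; in practice this would come from the
# storage system's catalog rather than a hard-coded list.
documents = [
    {"customer": "ACME", "file": "policy.pdf",    "scanned": date(2022, 3, 1)},
    {"customer": "ACME", "file": "policy_v2.pdf", "scanned": date(2024, 1, 15)},
]

def latest_contract(customer: str) -> dict:
    candidates = [d for d in documents if d["customer"] == customer]
    if not candidates:
        raise LookupError(f"no contract on file for {customer!r}")
    # Take the most recently scanned version - never a stale copy.
    return max(candidates, key=lambda d: d["scanned"])

print(latest_contract("ACME"))  # -> policy_v2.pdf, not the two-year-old version
```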

Anu Arora [00:22:11]: Yeah, I agree. And I think because of these issues, data engineering is not going anywhere. I think it's going to get more important now.

Demetrios [00:22:18]: Yeah, I agree with you 100% on that one. That feels like something every data engineer knows, because if it were going somewhere, I think data engineers' workloads would have been lightened quite a bit. But if you know data engineers, or you are a data engineer, then you and everybody else out there that knows data engineers knows that it's been the opposite. Every data engineer I know has been working their ass off since these LLMs came out, because there are so many different things that you need to start thinking about and knowing about now and covering for. Like, Anass, you're talking about this idea of, all right, cool, we have this MVP that works. But then when you go to production, there are all these small details that you need to be thinking about, where you need to be thinking: how are we continuously updating this so that if anything changes, like somebody upgrades their plan, we make sure that this upgrade gets added to the context? All of that context, how are we making sure that it is continuously, minute by minute or second by second, depending on how real-time we want to make it, how are we making sure that gets up to date?

Anass Bensrhir [00:23:36]: Yeah, I agree with you. I think data engineering is going nowhere. I think it's going to be more and more important. And the data engineers are also trying to adapt to the new business requirements and to basically use these tools to help them move quickly, right?

Anu Arora [00:23:58]: Yeah, basically new complexity and other challenges.

Demetrios [00:24:01]: Exactly, because now that we have these different areas that we are cognizant of, the data engineers are thinking, okay, well, if this is possible for insurance claims, could it be possible for the work that I'm doing too? And how can I make my life easier? And so I know there has been a lot of speculation on how LLMs will affect things. Like, we've seen a lot of text-to-SQL models come out, and I kind of laugh about those because every data engineer knows that writing the SQL query isn't necessarily the hard part of their job. But I would love to hear: what are some areas where you have seen, or think, that these LLMs could be used in the data engineering sphere?

Anu Arora [00:24:55]: Maybe I can add a few elements and then, Anass, you can pitch in. Of course I'll talk about the topics, and then of course there are a lot of guardrails around it, with regulations going to be taking place. But let me start with a couple of things. When it comes to writing the data pipelines, right, you can use GenAI to help you with the data pipelines altogether, which can help in stitching the code together. The other thing could be unit testing, which is another important area: where you generate the code, unit testing is where GenAI can be helpful. The third, in my scenarios, I've seen is synthetic data.

Anu Arora [00:25:35]: So when you need synthetic data that needs to go into your integration testing, that mock data can also be generated for that scenario. I've not used it myself, but it can also help further on classification of your PII data, which is tagging, and also doing the data cataloging altogether. But having said that, you have to put a lot of guardrails around it; there's IP infringement risk, so you have to be very careful. You can't just copy-paste the code and say, okay, my work is done. And with the European AI Act also coming up, there are so many regulations that we have to be very careful with when using GenAI.
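
On the synthetic-data point, a minimal sketch might look like this: generate seeded, reproducible mock rows for integration testing instead of copying real customer records. The schema here is invented for illustration; in practice you might ask a GenAI model to draft rows against your real schema and then validate them.

```python
import random

def synthetic_customers(n: int, seed: int = 42) -> list[dict]:
    # Seeded RNG so the same fixtures are produced on every test run.
    rng = random.Random(seed)
    return [
        {
            "customer_id": f"CUST-{rng.randint(10_000, 99_999)}",  # fake IDs only
            "segment": rng.choice(["retail", "sme", "corporate"]),
            "balance": round(rng.uniform(0, 50_000), 2),
        }
        for _ in range(n)
    ]

for row in synthetic_customers(3):
    print(row)
```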

Anu Arora [00:26:13]: What do you think, Annis?

Anass Bensrhir [00:26:14]: Yeah, no, I agree 100%. So I think this is a great era, because the LLMs are also making the data engineer's job easier on some aspects. Take ingesting data: imagine that you have a lot of content. Just to give you an example, for banks and insurers we work a lot with scanned documents and PDFs. Previously, to go and get exactly the data that we needed from a PDF, you had to draw matrices and pinpoint the pixels for where to get that data. It has evolved with time, with natural language processing, but now it's as easy, of course with the right guardrails, as plugging in the PDF with the different methods and asking the LLM to get the insight, or summarize, or get the value that you want, right? So that's data ingestion. Number two is data cataloging, right? Enterprises have a huge data catalog, and in order to navigate it and figure out what table goes where, what to join, how to join, I've seen a client experimenting with, instead of writing a query, asking in natural language, like, hey, give me the last ten items sold in this branch in that area. And basically for new users, for users who don't know how to write the SQL, they need help, and there the LLM can help tell you which tables to look at, or even help you create the query, or get you the value out of that data.

Anass Bensrhir [00:28:06]: Also, tagging data, or showing you where there's a potential data quality issue. We have within QuantumBlack a tool that basically helps you look at a table, or multiple tables, and tells you where the potential data quality issues are, and tries to help you fix them. Also lineage, right? LLMs can help you find out where the data is coming from. So look, I think it's a pleasure to be living in this era, to see that GenAI can also help data engineers move faster. I don't think an LLM can do all of what the data engineer does, or can basically replace them, if that's the right word. No, I think it will help support, it will help accelerate, and it will help materialize the work of an engineer quicker.

Anu Arora [00:29:16]: And maybe another thing to add here is that LLM models don't take no for an answer. They always have an answer for you. They can hallucinate and give you answers which are not even true; maybe the data does not even exist, but they'll give you an answer. So you have to be very careful. You can't just trust it blindly. And as Anass mentioned, it's not going to replace you. And of course the guardrails and regulations are in place: which LLM you can use, where the data needs to go, where the LLM is hosted. These are going to be important elements anywhere.

Demetrios [00:29:48]: Yeah, I was thinking about something along the lines of: it feels like a lot of times, potentially, you don't necessarily need an LLM for this. And I think these days it's easy to be like, oh yeah, let's throw some AI at it. But do you need AI for that? Because I was thinking, oh yeah, well, what about if you could figure out how the data has been transformed, and the provenance of the data? And then I was thinking about it a little more and I was like, you don't need AI for that. So I'm going to change gears a little bit, because we were talking about PII a little bit and tagging that. When it comes to working with LLMs from the data engineering side, how do you think about PII and data privacy? I know we've spoken a lot on this podcast about how you really have to be conscientious of where you're sending the data, especially if it's just going out to ChatGPT, and how ChatGPT is using that data, even if they say they're not using it. I was actually reading a paper a little while ago, last week, from these folks out of a university in Prague, and they were talking about how all of the different large APIs leak the data in some way, shape, or form. They're leaking that data. So even if they say that they're, like, bulletproof, it doesn't matter, because it's being leaked, and so you can't really trust it.

Demetrios [00:31:27]: How do you look at data privacy and Pii in this new paradigm of large language models?

Anu Arora [00:31:33]: So in terms of privacy, right, the data that you have within your company, you can control it. So one, I always have this question for my clients whenever we are building applications or anything: do you really need to feed the actual PII data into your model? And 95% of the time it's no, you can anonymize it. You don't have to send a customer ID or a customer number or bank card details. You can anonymize it, feed that information into your model, and that's where you can use it. So I think it's always about, before you even use the LLM, rather than throwing all the information at the model itself, take a step back and think: do you really need to feed in the sensitive data in the first place?

Anu Arora [00:32:14]: So the majority of the time, what you would typically do is anonymize the data altogether, so you don't even send that data in the first place. You do that tokenization, anonymization, hashing, or whatever is going to be the most secure mechanism depending on the type of data that you're going to put in. That's point number one: you don't send the data in the first place, and you have those guardrails while making a call. But for the 5% where you still need to send that data, make sure you know where your LLM is hosted. Now you can also have country-specific rules, right? Data from one country may not be allowed to go to an LLM elsewhere. Think about where that LLM is hosted, where the data is, where it's going to be going, and also ensure the authorization or the access to it at the end of the day, so that the application does not have the flexibility to download all the data altogether.

Anu Arora [00:33:06]: So you also take care of that authorization aspect, which we have not talked about yet, but access management to ensure that only a certain set of people have access to that set of data if required.

Anass Bensrhir [00:33:17]: No, I think also, whenever there is something new, a lot of people are worried about it. Right. In the recent McKinsey research that we have done, a lot of the companies we have discussed with, more than 70%, are thinking that GenAI has introduced or will introduce some data risk. Right? So that needs to be considered. And I think this is the same story all over again, like when we had cloud and everybody was saying, hey, how can I be sure that my data is not going to be seen by somebody else, or my data is not leaving, or is it secure? So it's a maturity level: enterprises will basically build trust, or trust that process. But getting there, there are a lot of different archetypes or different solutions that at least my clients are looking at, especially in banking.

Anass Bensrhir [00:34:21]: Right? So, as Anu said, one: we don't need to send non-anonymized data to the LLM, because basically you can put all your proprietary data into a vector database that is hosted within your premises, and use the GenAI LLM just to interact and retrieve that data. And even with the retrieval of that data, we can make sure that in the vector database we don't have any PII, right, and we use the context or the client information only on the front end. So that's one of the solutions: anonymized data. Number two, there are potentially also self-hosted LLMs. Now, with the introduction of Mixtral and Llama and many offline LLMs, a lot of enterprises are testing. Of course, there's always the question about GPUs, or the accuracy versus the leaders on the market. But for some applications, there's no question of the data leaving the premises of the company.

Anass Bensrhi [00:35:28]: And I think there are also this, I think the third bit, which is a bit the hybrid approach to, as I said previously, which like the LLM is for just interaction, but the data is hosted 100% within the premises of the enterprise. Right. So I think there's different archetypes, many players, artists and different things. But I agree with you that it's a maturity level. I think as we have gone through this with the cloud, where nobody has trusted to send data elsewhere, now it's more like everybody uses the cloud. Right. I think it's a maturity level we need to get into with time.

Demetrios [00:36:13]: Yeah, that's an interesting point where you draw the parallel with the cloud. And I can see how back in the day people were very afraid, saying, like, wait a minute, I'm going to go and take everything that I own and just give it to somebody else? And you're going to assure me that it's all going to be okay? What kind of assurances can you make that it's just going to be okay? You're just going to give me your word on that one? Okay. Yeah, and I'm going to go and spend a lot of money to do that. Let me see about that. So there do feel like there are some parallels. It feels a little bit different, though, because we don't really know what the LLMs are doing.

Demetrios [00:36:56]: We don't really understand what's going on. Yeah, it's a little bit more like, okay, OpenAI can tell me that they're going to do everything in their power to make sure that nothing happens. Right. But until we fully understand what is actually happening in the LLM, and whether anything is going on behind the scenes, I feel like people will still be more hesitant. And it is more of that maturity level that you're talking about, Anass. Over the next couple of years, we're going to be able to see where things fall short, and then we're going to be able to create whatever we need to create to make sure that everyone is very confident, and we can point to our little SOC 2 compliance: even though I'm using this API, everything is still all good, I still conform to my SOC 2, and it's all kosher, right? So thinking about this whole idea of how do we move fast, how do we take advantage of this technology that's out there, but at the same time, how do we not shoot ourselves in the foot and be one of those companies that is in the news for having a gaffe, or selling a car for one dollar, or talking bad about its own company, all those different ones. Anass, it's funny, you don't even have to tell me which one you are referencing.

Demetrios [00:38:31]: I can just think of three or four, because every week a new one comes up and it's like, oh yeah, those poor folks, they just tried to run fast and ended up shooting themselves in the foot. So how do you all think about balancing that innovation, trying to make things work with these LLMs, but at the same time having this data protection and having these guardrails in place? And maybe it's that you're thinking, okay, well, maybe for very high-stakes stuff, it's off limits. Or maybe we try and just do things internally in the beginning, and once we see that we have enough reps internally, then we can bring it to the external world. Even though, as soon as you put something external, you know it's going to be like, there are so many more malicious actors, just because they want to have something to post on Twitter, for no other reason than that they want to be like, look at what this chatbot told me. So how do you all think about that balance right there?

Anass Bensrhir [00:39:37]: Look, I think there are different ways you can do that. So I can talk about examples of some companies and what they have done. One approach was to say, hey, this is a new technology, this is moving really fast, and we don't know how much it's going to cost us down the line, we don't know how much the value will be. So let's do one thing: let's take the use cases and prioritize them by value, so what will be the impact, and by feasibility. And with feasibility, we also have the risk. Right.

Anass Bensrhir [00:40:13]: And when you do that exercise, most of the time, at least with some of the banking clients I'm working with, we try to do something internally first, right? So let's say it's an HR chatbot that we use to ask HR questions, like, how many days of leave do I have a year? And you limit the intent a bit, right: you start with the intent, you limit it, and then you expand it, but within a controlled environment. The other way to do it is to launch a product with a limited set of users, fail fast a bit, and then build upon it. Because there are probably three things you need to guarantee. Number one is that you know this will provide value to you, so you need to validate the value. Two, you need to validate that you master the technology and can specify the intent and the guardrails, and make sure that the output is clean. The third bit is to master the costs, because a lot of times we say that GenAI will help reduce the cost of doing X, Y, and Z a bit, but a lot of companies still need to prove that, right.

Anass Bensrhir [00:41:42]: So, especially the banks I'm working with: even though many banks have launched chatbots, if you go there, you can see that either they started previously with internal use and validated the technology and the risks; or two, they launched previously but with a limited set of customers, so they can test; or potentially what they have done is that they really limited the intents of what they launch online to some specific actions. Right. But of course, one important piece anyhow, other than putting in the guardrails, is that you need to add some human touch at the end to validate some of the outputs a little bit, right. Because I don't think everything should go out and be autonomous. I think there should be a lot of control over the output, at least for the maturity level we are at within these couple of years.
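
A minimal sketch of the "limit the intents, then expand" pattern for an internal HR chatbot: only questions matching an allowlisted intent reach the LLM; everything else gets a safe refusal and a human escalation path. The keyword-based intent detection is a stand-in for a real classifier, and all the names here are made up.

```python
# Allowlisted intents and the trigger keywords that (crudely) detect them.
ALLOWED_INTENTS = {
    "vacation": ["vacation", "holiday", "leave"],
    "payroll": ["salary", "payslip", "payroll"],
}

def detect_intent(question: str) -> str | None:
    q = question.lower()
    for intent, keywords in ALLOWED_INTENTS.items():
        if any(k in q for k in keywords):
            return intent
    return None  # anything unrecognized is out of scope

def answer(question: str) -> str:
    intent = detect_intent(question)
    if intent is None:
        # Safe refusal plus the human touch: escalate instead of improvising.
        return "I can't help with that yet; routing you to an HR colleague."
    return f"[call LLM with the guardrailed '{intent}' prompt template]"

print(answer("How many days of leave do I have a year?"))
print(answer("Write me a poem about my manager."))
```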

Anu Arora [00:42:55]: Yeah, adding to the points that Anass just mentioned, summarizing it in the context of these three things: your privacy, quality, and access. These are going to be very important guardrails that you have to apply. Things are still evolving, and I'm not saying we are there yet, but still, depending on the risk factor of your application, you make sure that your quality, privacy, and access are taken care of. As a client or as a customer, that will help you go a little further on that journey.

Anu Arora [00:43:28]: The other thing, of course, is the country regulations that are coming up, like the European AI Act, right? Other countries are also going to come up with those regulations, which are going to drill down into the list of things that an application needs to be careful with. That is going to be similar to what we had earlier with GDPR, but for AI applications; those are going to be the guardrails that will be enforced at a country level. But as of now, if you don't have those things, make sure that, based on the risk factor of your application, you take care of the privacy, the access, and overall these three components that I talked about.

Demetrios [00:44:05]: Yeah, I really like that. It's like a matrix of the risk factors, the cost, and how easy it will be to implement, so that you can recognize, should we be spending our time on this in the first place? And next up, if we do decide to spend our time on it, what's the likelihood of success here? And then also having those feature flags in place so that you can quickly test it with a small group. And if it gets out of hand with a small batch of 100 people, you can recognize that you're definitely not going to be showing this to 100,000 people; let's go back and see. And then limiting what it can do and putting those guardrails on the front end, on how the person interacting with, in this case, the HR chatbot can speak to it, and on what the output of the HR chatbot is too. So you can't say, hey, give me my holidays package in a poem in the style of, you know, Bob Dylan or something, which would be great, but do we need that right now? The other piece that I thought was interesting that you mentioned, Anass, and I'd love to dig into this a little bit more, is that people feel like when you implement LLMs, it's going to reduce the cost a lot of the time, but it's not always a given that the cost will go down. So can you explain that a bit more?

Anass Bensrhir [00:45:44]: No, very good. So what I want to say by that is that for each use case, and we have seen a lot of use cases using AI, you need to understand: what is the lever you are trying to optimize? Is it basically to serve millions of clients? Is it to retrieve information quickly, or is it something else? What is the lever you are trying to optimize? For each of these use cases, we need to get a dollar value, right? Because each of these, and I'm talking here about the enterprise, about the bank, about the insurer, getting this into production and answering the questions about privacy and regulations, all of it is going to cost money, right? So before you get into something that costs a lot of money, you need to understand what's the value of each use case, what is the feasibility, and then take what has the highest value and is highly feasible with less risk. Right? And you start with that. You try to pilot it, you launch it, you get the value out of it, and then you master the cost; you understand how much it is costing you, because a lot of people now are looking at the bill they receive from the LLM provider and asking questions. Two, you also need your people to be trained on how to build the use cases.

Anass Bensrhir [00:47:17]: So within that, you guarantee that you are limiting the risk, you are getting value, and at the same time you are learning how to do more complex, higher-stakes use cases. For me, this is critical, because I always like to go back into history. I like to see where companies have failed in the past, where they said, hey, we have this machine learning, let's build a model that's going to do X, Y, and Z. And then they go, they build it, and there is no uptake from business, because where's the value, right? So I think now, with the hype, a lot of companies are trying to build new things.

Anass Bensrhir [00:48:01]: So, for example, on that example I was telling you about: a company trying to build a layer that's going to generate a SQL query for you, understanding the data catalog and the data. The company has spent a lot of money to do it. But now data management companies are offering that on top of their databases, right? So you invest a lot of money, and then a lot of companies are offering that off the shelf. So if a company wants to start tomorrow, they need to look at the whole list of use cases: what is the lever, try to put a dollar value on each one, and a feasibility, prioritize by those metrics, and then go ahead with the highest value, highest feasibility. Right.

Anass Bensrhir [00:48:53]: Test it, see the value of it, see if there's any uptake from business, and then scale from there.
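
A minimal sketch of that prioritization exercise: put a dollar value, a feasibility score, and a risk score on each candidate use case and rank them. All the numbers and the weighting formula below are placeholders; the point is the ranking mechanics, not the specific scores.

```python
use_cases = [
    {"name": "HR chatbot (internal)",   "value_usd": 200_000,   "feasibility": 0.9, "risk": 0.2},
    {"name": "Claims coverage check",   "value_usd": 1_500_000, "feasibility": 0.6, "risk": 0.5},
    {"name": "Customer-facing advisor", "value_usd": 3_000_000, "feasibility": 0.3, "risk": 0.9},
]

def score(uc: dict) -> float:
    # Value, discounted by feasibility and penalized by risk; the weighting
    # is a judgment call each organization would tune for itself.
    return uc["value_usd"] * uc["feasibility"] * (1 - uc["risk"])

for uc in sorted(use_cases, key=score, reverse=True):
    print(f"{uc['name']:26s} score = {score(uc):>12,.0f}")
```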

Demetrios [00:49:00]: God, that's so right, man. And that's so hard to do on that strategy part to think about. I could spend a lot of time and effort creating this because it's not on the market right now, or I could wait six months and see if it's going to be on the market because I'm probably not the only one thinking about this. And so that question right there, it eats at me so much considering that you have to have this crystal ball where you have to recognize, like, okay, we could go and spend a lot of time and energy in this, or we could just wait. And what are you going to just wait on? And then in six months, what if it's still not there? And you say, you know what? We waited six months. It's still not here. And then in another six months, it is there. And so you're like, da.

Demetrios [00:49:58]: So that is so hard. So anybody out there that's making those kind of decisions, I have a lot of respect for them.

Anu Arora [00:50:05]: Yeah. I heard one example; there was somewhere I read where people were talking about, what if I can just message ChatGPT and a video gets created out of it. So a lot of ideas are being thrown around like that in the market, which is like, I just explain, and then I get a complete video of three or four minutes on the topic.

Anass Bensrhir [00:50:23]: I think this has been launched a couple of days ago, right?

Anu Arora [00:50:25]: Is it?

Anass Bensrhir [00:50:26]: Yeah.

Demetrios [00:50:27]: See the things I'm looking at? Yeah, exactly. Yeah. Soon enough, that'll be a reality. And that's so wild to think about. So wild. So this is incredible. Well, folks, I am so thankful that you came on here and you were chatting with me all about the data engineering aspects of this LLM world. And it's always a pleasure to get to catch up with smart people like yourselves.

Anass Bensrhir [00:50:53]: Thank you very much.

Anu Arora [00:50:54]: Demetrios, always a pleasure.

