MLOps Community

The Future of AI Operations: Insights from PwC AI Managed Services

Posted Nov 14, 2025
# AI Managed Services
# Data Analytics
# PwC

SPEAKERS

Rani Radhakrishnan
Principal, Technology Managed Services - AI, Data Analytics @ PwC US

Rani Radhakrishnan, a Principal at PwC, currently leads the AI Managed Services and Data & Insight teams in PwC US Technology Managed Services.

Rani excels at transforming data into strategic insights, driving informed decision-making, and delivering innovative solutions. Her leadership is marked by a deep understanding of emerging technologies and a commitment to leveraging them for business growth.

Rani’s ability to align and deliver AI solutions with organizational outcomes has established her as a thought leader in the industry.

Her passion for applying technology to solve tough business challenges and dedication to excellence continue to inspire her teams and help drive success for her clients in the rapidly evolving AI landscape.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.


SUMMARY

In today’s data-driven IT landscape, ML lifecycle management and IT operations are converging. On this podcast, we’ll explore how end-to-end ML lifecycle practices extend to proactive, automation-driven IT operations.

We'll discuss key MLOps concepts—CI/CD pipelines, feature stores, model monitoring—and how they power anomaly detection, event correlation, and automated remediation.


TRANSCRIPT

Demetrios [00:00:00]: We have a very special episode. I have the opportunity to hang out here in the PwC offices in San Francisco for my trip and I'm here with Rani. We're going to talk all about what she's been up to and how she's looking at MLOps, AIOps, agents, everything of this sort. We should probably start with a little bit of who you are. Sure.

Rani Radhakrishnan [00:00:31]: Thank you, Demetrios, and excited to be talking to you. I've watched a number of episodes of your podcast and very excited to be.

Demetrios [00:00:39]: Involved by your own will? I imagine you were forced.

Rani Radhakrishnan [00:00:43]: Absolutely not.

Demetrios [00:00:44]: And decided to torture you.

Rani Radhakrishnan [00:00:45]: It's 100% of my own free will, but. And so lovely to be here talking to you. So, a little bit about myself. I have about three decades of experience in the consulting industry, and over the last 20 years I've been with PwC. Today I'm a partner with PwC and I lead our data analytics and AI managed services space. So with PwC, I spent the majority of my career in healthcare in payers and providers. And then over the past five years, I decided to make a shift back into technology. And it was a very deliberate shift where I wanted to move into the managed services space.

Rani Radhakrishnan [00:01:30]: And the way PwC approaches managed services is we want to lead with automation and AI. So that excited me, because managed services, or Keep the Lights On, is a space that consumes a large majority of a CIO's time and resources and all of that, and having the opportunity to shape that to be more efficient excited me.

Demetrios [00:01:54]: For the uninitiated, what are managed services?

Rani Radhakrishnan [00:01:57]: Managed services is the space where we take over support of anything that moves into production. Some people call it Keep the Lights On. As soon as somebody's project is live with consumers, all of the support related to that software asset, whether that is the hardware, the software, the experience. Right. All of that falls into the managed services space. And the area that my team focuses on is data analytics and AI.

Rani Radhakrishnan [00:02:30]: And I think AI managed services in itself is one of the coolest things that is happening today.

Demetrios [00:02:37]: Yeah. You have your work cut out for you.

Rani Radhakrishnan [00:02:40]: Yes.

Demetrios [00:02:40]: Keeping things in production and making sure that they're going off without a hitch. I imagine there's a lot there.

Rani Radhakrishnan [00:02:48]: Definitely, definitely. It's an exciting time to be in this space. A, because I think AI is an area that has been growing exponentially, and at least in my lifetime, it is the area that I've seen grow the fastest. And so not only are new technologies and tools being introduced, but also the pace at which innovation is happening, which is the applicability of AI, is something that I've never before seen. Right. The pace at which things are growing. I would say 12 months ago I was in conversations with clients where they were talking about investing in AI and experimenting, piloting in AI.

Rani Radhakrishnan [00:03:32]: Today, fast forward, even in the last six months, I would say I'm seeing clients who have implemented AI and are now thinking about ROI, how to scale it, how to sustain it, which brings in the managed services component of scaling, sustaining, and so on and so forth.

Demetrios [00:03:50]: Yeah, it's so funny that you mentioned that, because I think we went through the exploration phase. Now it is in the phase of, okay, so what can this actually do? And how can I prove that it is better for my company if I'm using AI? I'm sure you've seen the recent paper that came out that was saying people using AI think that they're doing things faster, but in reality they're doing it like 20% slower. But when they're asked, hey, did AI help you? It's like, oh yeah, it was way better. And we all instinctively are probably a little bit like that, where if we're using AI in some way, shape or form, we feel like, oh yeah, this is better. But then if you were to compare that with us not using it, I often wonder, and I ask myself sometimes, like, yeah, is it better or am I just fooling myself? And a friend of mine told me, yeah, dude, look, the experience of chatting back and forth and being able to say, like, okay, now go off and do that and, you know, write me that thing that we've been talking about, is so much nicer than having to really, like, squeeze every one of those words out of your mind and all that. But sorry, I digress. I went on a bit of a tangent.

Rani Radhakrishnan [00:05:14]: It's all good. Yeah, I think it's probably a two part conversation. Right? There's a personal growth, a personal productivity, which is what you're talking about. And then there is the enterprise growth, enterprise productivity. Right. On the other hand too. And you've got to like take both of these together. And one of the recent studies that I saw said that if you're not spending say 4 to 8 hours a week or x percent of your time learning and studying and upskilling, then you're not going to be AI enabled.

Rani Radhakrishnan [00:05:49]: And I think that is true for personal growth, but also for organizations who are looking to upskill their employees. There is a huge investment in terms of training and giving people access to the tools and technology so that they know how to be productive with all of these tools. And I think there is a whole change management piece that goes along with AI enablement, which if you undermine that effort, then it's never going to be successful.

Demetrios [00:06:17]: Yeah, it's a cultural thing.

Rani Radhakrishnan [00:06:19]: It's a cultural thing 100%. Yeah.

Demetrios [00:06:21]: So ground us in the current state of what you've been seeing out there with IT operations, how you see MLOps.

Rani Radhakrishnan [00:06:29]: Maybe let me start with what is the current state of organizations? Right, there's tons of data out there. You know, most organizations I know have alerting systems. So there's lots and lots and lots of alerts, and if a human being has to sift through all of it, a lot of it is noise. And I'll give you maybe a couple of examples. So take incident detection today: if you get 100 alerts, which one is truly an incident and which one is not is something that, if you have to go through all 100, you're losing time on. Whereas if you train a model to look for this criteria, it's going to pop up: hey, here are the 10% of alerts which we think you need to look at. And then triaging becomes a lot easier.
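The alert-triage pattern Rani describes, scoring every alert with a trained model and surfacing only the small slice worth human attention, can be sketched as below. This is an illustrative Python sketch, not PwC tooling; the severity-weight table in `score_alert` is a stand-in for a real trained classifier.

```python
# Sketch of ML-assisted alert triage: rank alerts by a model score and
# surface only the top slice for human review.

def score_alert(alert):
    # Stand-in for a trained classifier's probability that the alert is
    # a real incident; a production system would call the model here.
    weights = {"critical": 0.9, "warning": 0.4, "info": 0.1}
    return weights.get(alert["severity"], 0.0)

def triage(alerts, keep_fraction=0.10):
    """Return roughly the top 10% of alerts, highest score first."""
    ranked = sorted(alerts, key=score_alert, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

# 100 alerts, of which 10 are genuinely serious.
alerts = [{"id": i, "severity": "info"} for i in range(90)]
alerts += [{"id": 90 + i, "severity": "critical"} for i in range(10)]
print(len(triage(alerts)), "of", len(alerts), "alerts surfaced for review")
```

A real deployment would replace the lookup table with a model trained on historical incident labels, which is exactly the triage-time saving she describes.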

Rani Radhakrishnan [00:07:15]: You're bringing down the mean time to resolution just by doing that. Another example is when it comes to infrastructure, say CPU utilization. Resource utilization sometimes tends to be cyclical, and if you are actually monitoring for certain events, you know that the usage is going to spike six hours from now or 20 minutes from now. If you had a system that was looking for these things, then you would know, hey, I need to add more memory to the server, otherwise this is going to crash in like two hours. So this is what I would call proactive monitoring, where you're preventing an incident from happening because of how you're effectively using your data. So those are examples of how I think MLOps and AIOps can be combined to create a very compelling case for IT operations.
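The proactive-monitoring example, spotting a coming CPU spike before it crashes the server, can be sketched with a naive trend forecast. The function names and the 85% threshold are assumptions for illustration; a real system would use a proper seasonal forecasting model over the cyclical usage she mentions.

```python
# Sketch of proactive capacity monitoring: project the next CPU sample
# from the recent trend and alert before the machine saturates.

def forecast_next(values, window=3):
    """Naive forecast: last observation plus the recent average slope."""
    recent = values[-window:]
    slope = (recent[-1] - recent[0]) / (window - 1)
    return values[-1] + slope

def needs_scaling(cpu_history, threshold=85.0):
    """Proactive alert: next sample is projected at or over the threshold."""
    return forecast_next(cpu_history) >= threshold

# Hourly CPU utilization (%) climbing toward saturation.
history = [52, 55, 61, 66, 72, 78, 84]
if needs_scaling(history):
    print("proactive alert: add capacity before the projected spike")
```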

Demetrios [00:08:10]: Yeah, the predictive monitoring is incredible. I really like that. And then also alert fatigue is real.

Rani Radhakrishnan [00:08:17]: It is real.

Demetrios [00:08:18]: What about agents and where are you seeing them? What have you been doing with agents? Because that's like the buzzword of the last year, year and a half.

Rani Radhakrishnan [00:08:27]: Yeah, I think the concept of agents is very real. And we are starting to see, I think, some organizations adopting agents. First and foremost, there's a lot of process standardization work that needs to happen. If everyone is following a different process, then it's very hard to put in an agent to replace or to do a process. Right. So there's process standardization. The second piece is data quality. Right.

Rani Radhakrishnan [00:08:53]: I do not know of a single organization that says, I have perfect data, the quality is great. And so for these two things, I do see a lot of data modernization, app modernization type of initiatives that organizations are taking on. And once you have this foundation set, it becomes a lot easier to go and say, well, these are my standardized processes, and these processes get used 75% of the time, so I would get the most bang for my buck if I were to agentify these processes. Right. And I am oversimplifying it as I'm kind of walking through this. Right.

Rani Radhakrishnan [00:09:36]: But the other piece of it is, I often get asked: should we start with one agent, one process, and then look at how that goes, experiment with it, build a pilot and then scale it, or do we do multiple? And so I've done a lot of reading and thinking on this, and what we've seen work is, I think, centralizing the management of agents is important, because you do not want different people just using different technology within the same organization to build their own agents. So there's some kind of centralization that's required, which also then advocates for responsible AI and all those ethical considerations. But then it is okay to start parallel development of agents: if there are processes that are being used 75% of the time in four different departments, start writing, like, four agents. And today, with all the IDEs and the low-code, no-code solutions, it is so easy to spin up an agent. So I think the majority of the time needs to be spent designing the agent, then a little bit of time writing the agent, and then the rest of the time in evaluating: the evals and measuring the impact of the agent, looking for hallucination, looking for biases. That's where the rest of the time gets spent. Right.

Rani Radhakrishnan [00:11:02]: So this is how we see a lot of clients embracing agents today. And I think this is still pretty nascent. Right. There are not a lot of organizations who have many agents in production. Right. I think we are still in that, like, I'm building an agent, I'm evaluating it, phase.

Demetrios [00:11:22]: Yeah, yeah. It reminds me a little bit of the early MLOps days, when I started the community in 2020, where a lot of folks were still trying to get the models into production. And it's just really hard to make something, A, that you can clearly attribute ROI to, and then, B, something that actually, like, works.

Rani Radhakrishnan [00:11:49]: That's right, yeah.

Demetrios [00:11:50]: And my buddy Zach was just telling me this morning, one thing they see is if folks can get an agent on these standardized processes into production quickly, then they start to see where the long tail of their data or their knowledge base is missing. Because you run into problems constantly, and then you have to amplify or really enrich your data, and it's almost like a way for you to recognize where your data is faulty.

Rani Radhakrishnan [00:12:28]: Yeah, I think it is interesting how we build upon these things, and yes, you get your agent into production, and that's what makes managed services so exciting for me, because these things are now in production and then we are tuning, optimizing, training it. So managed services for AI takes on a different meaning than your traditional managed services. Traditional managed services is all about break-fix incident management: make sure the system's up and running. Here we are not only doing that, but we are also making sure that we are delivering on the right outcomes that it was intended to deliver to begin with. And that means that we are constantly measuring. And so even the skillset of the people that we have in AI managed services is very different from your traditional managed services folks. We are hiring MLOps engineers, we are hiring data scientists, we are hiring more advanced analytics folks who can tune and optimize the models to reduce the bias, reduce the hallucinations, and so on and so forth. And on that note, we also have a framework called agent OS that PwC has developed for our clients, so we can go in.

Rani Radhakrishnan [00:13:51]: It's all based on Python-based frameworks and things like that. So it's very interoperable, works with all the hyperscalers, has APIs to connect into any ecosystem, and it is an agent framework, so you can build one. It's got an agent orchestration layer inside it, and I know there's a lot of them in the market, but when we go to clients we do go with a framework, and I think that's one of the things we tell the clients: you don't have to build from scratch. There's so much that's already out there you should leverage and then build on top of it. So then the focus becomes the client-specific business outcomes versus trying to figure out the technology that you need to make this happen.

Demetrios [00:14:34]: Yeah, that makes sense, because as you know from last night, there's a new startup that comes every day, every minute, I think, in this space. It's a hot space. And again, there's a lot of unsolved problems, so that makes it ripe for these startups to come up and try to find the best way and the most optimal solutions. And you said something interesting there, where it's different from the traditional managed services because there's not that break-fix. It feels like it is very, like, random, but not random, in a way. I guess that's not the right word I'm looking for.

Demetrios [00:15:14]: I don't know if that makes sense. Like, you have to kind of just let it do its thing. Break-fix is very clear, because it's doing this thing and it's not working, right? There's so many other things that could be going wrong with an agent: it could be working in that it actually executes, but it's not working because there's all this other stuff that it's doing wrong.

Rani Radhakrishnan [00:15:39]: Yeah, you need a human in the loop until it is proven that this model can work without a human in the loop. And there are probably very few use cases today where someone would be comfortable making that judgment. Right. Then it's interesting, I think, when you talk about humans and agents, there's so much jargon that has come out, for lack of a better word: there's human in the loop, there's human on the loop, and there's human out of the loop. Oh, yeah. You know, how many more ways can we use those words together? Right. The human in the loop is where you have to approve a workflow.

Rani Radhakrishnan [00:16:22]: Right. Approve the outputs that the agents are coming up with. Human on the loop is where you're approving only the exceptions. So you're comfortable that the regular workflow is something that the agent is able to take care of. And outside the loop is outside the loop. Right? So, and I find that very interesting because in highly regulated industries, I think they kind of move or lean towards the human in the loop model where they are approving everything, not just the exceptions and the norms.

Demetrios [00:16:52]: I find that fascinating, especially because the, like, going back to the idea of we create these systems so that they can hopefully give us incredible lift.

Rani Radhakrishnan [00:17:06]: Right.

Demetrios [00:17:07]: But at the end of the day, if we're still in the loop, or is it.

Rani Radhakrishnan [00:17:11]: Is it giving you the lift?

Demetrios [00:17:12]: Is it giving you the lift? And that's probably the big question that I imagine a lot of these folks are grappling with: like, how much better are we because we're using this? You know, it's there, and so you can't say, I don't want to do it, because, you know, there are companies that are finding success with it. And I draw this parallel back to traditional predictive ML, and how you have gigantic companies that are optimizing their fraud detection by 0.01% and saving millions of dollars. And so other companies see that and they think, well, yeah, we have to do machine learning, because look at that. But then there's all these nuances of, like, yeah, are you at that scale, though? Is that the right use case for you? And these questions come into play, and maybe you can start with something a little bit more simple, just to put your foot in the water, test it out, see if it works, and actually start having the conversations of what we need to do to get that into production.

Rani Radhakrishnan [00:18:25]: Yeah, yeah, 100%. And I also think that most organizations have started in their back office playing around with agents and AI and all of that, and that has given them a lot of comfort, because it's a more forgiving place to really start. Right? And then we are now starting to see them get into their own core products and services. And I think that's the next wave. And this is going to get very interesting, because now you have to deliver on business outcomes, and that means there's gonna be business people who are actually gonna take advantage of that natural language processing. And I think this is why vibe coding is, like, picking up so much. Right? And that's now the latest that we hear about everywhere, because they've made it, they, as in the technology companies, have made it so easy for just about anyone to write a piece of code, because you can say it in English and then you can get code out of it.

Rani Radhakrishnan [00:19:24]: Right? That's, that's really the whole concept behind it.

Demetrios [00:19:27]: And have you seen good ways or ways that you can get alignment from the two sides of the houses? Like, and I know it's normally not so black and white where it's like the tech side and the business side, but maybe where folks are like, you're saying business folks are coming into this and they're starting to have their say, and they also have needs on how it's being built and they play with ChatGPT. So they think like, hey, AI should do this or it should be like that.

Rani Radhakrishnan [00:20:01]: Yeah, I think it's a really good question. And I have seen pockets of this. Right. So as I mentioned, my background is in healthcare, and I'll take a couple of healthcare examples, right? So one is a function, you know, when you go to a doctor, you get approved for the insurance. And so there is something called a pre-authorization that's done for services. The pre-auth process, right, is pretty manually intensive. Like, it involves calling the insurance company to make sure you're covered and so on and so forth. That is a process that I think has been agentified or AI enabled.

Rani Radhakrishnan [00:20:42]: And, you know, so that works. But they still need to make sure that if something gets denied by AI, they have a human in the loop to make sure that this denial is appropriate, and so on and so forth. So that is an area where this cannot be done by an engineer alone. You have to bring in people from, and I will call this, the business to come in and say, yes, these are the right policies, this is the right reason for a denial, and so on and so forth. And when you build it keeping the business person's requirements in mind and in the forefront, I think you get it right. But if you try to do it from the back end, with an engineer trying to come up with a solution. Because you see, as an engineer, you see the data flows, you see there is a call from this system to this system and this comes back with this sort of an answer. This is the data flow diagram.

Rani Radhakrishnan [00:21:43]: That's how I think an engineer's mind works. Whereas a business person looks at it saying, well, a person comes in with a fever and you're asking them if you can get surgery for it, like, you're going to deny that. And I'm exaggerating again.

Demetrios [00:21:57]: Back to like the latent failures or the silent failures where it's failing, but in ways that the engineer isn't going to see.

Rani Radhakrishnan [00:22:03]: Exactly. So I think that is where, you know, you've got to take the business input to create these agents, and that's where it becomes more successful. And the extension to that is Epic Systems, one of the biggest electronic medical record systems, used by a lot of the big healthcare providers. In the past couple of months, they've introduced AI capabilities into their software. And I think organizations are still adopting, and they're still in that experimenting, piloting mode where they are trying to figure out if this is something that they want to introduce. But an example is drafting emails; that is a functionality that they have introduced in it. And today people do spend a lot of time drafting those emails, and what the system is doing is giving them a draft. And then, yes, I think you have to change most of them for tone, for things like that, but it's a start that actually saves a lot of time. I think, honestly, that's a really good example of usage of AI.

Demetrios [00:23:16]: Yeah. Well, especially if you're piping in, and this goes a little bit to a data engineering type of thing, because I imagine the drafting of the email isn't just "Hi, your appointment is this day."

Rani Radhakrishnan [00:23:27]: That's right, yeah.

Demetrios [00:23:28]: It's a lot of random variables that need to be piped in. And so it's not as simple as just having a template.

Rani Radhakrishnan [00:23:36]: That's right, yeah, yeah. And there's contextualization. Right. There's tonality. There's so many things that have to be looked into.

Demetrios [00:23:44]: Yeah, yeah, totally. So when you're working with companies and you get alignment and then something's actually in production, one thing that is always fun, and as a friend of mine would always joke, like, you can never go wrong if at the end of any talk that you see, like at a meetup, you ask the question and say, yeah, but how does it scale? And so whenever I think about scale, I think about that exact phrase. And so I'm going to ask you now, like, how do you scale?

Rani Radhakrishnan [00:24:15]: Yeah, you know, I'm going to use that for your next meetups.

Demetrios [00:24:18]: But I'm sure the presenter is going to be happy, because she was telling me, like, oh, there's always one person that asks that: how does it scale? And technically it's a good question, because, yeah, you want to know. But also, yeah, right.

Rani Radhakrishnan [00:24:35]: See, I think of scaling in two different ways. So one is, in today's world, the way you go through the SDLC, for lack of a better word, it's not waterfall, it's not agile, but the way agents are being developed is: you spend some time thinking about what is the use case or what is a process you want to agentify or AI enable. And then you spend a lot of time in design, and once you have the design ready, the coding really takes very little time. It's the least amount of time. And then the evals and the measurement piece, that takes the next big chunk of time. But once you move something into production, right, scaling is not a problem. If process A is what was automated or changed, then the same process is being used in multiple places in the same company. That's an easy way to do it.

Rani Radhakrishnan [00:25:36]: So coding, right. When you think about scaling, the coding piece is not a challenge because of all the tools that are there. But the second piece is how do you scale? If you got one process right, how do you get the next 10 processes right? That is what I think takes more time. So this is where it goes back to having a good handle on the processes within your company and standardizing a lot of the processes. If the processes are standardized, then I think scaling becomes a lot easier. And going back to my pre-auth example: a pre-auth in the case of a cold or a fever is probably going to look very different from a pre-auth from an oncology department. Right. So the process is the same, but the business context is really important.

Rani Radhakrishnan [00:26:29]: So I think those are considerations. Right. To bake in. But I think if you have a good foundation of processes and if you have business weighing in, the coding is not the challenge, it is the adoption. And we went back to culture. Right. The change management piece of it. I think that is where the focus needs to be to scale it.

Rani Radhakrishnan [00:26:48]: And then the third piece I would add is the measurement piece. Right. The evals are so important when it comes to scaling because you don't want to scale something that is giving you different outputs than what you want. So as long as you're able to measure and correct and measure and correct and you have a fairly good model in place, then it becomes a lot easier to replicate.

Demetrios [00:27:13]: I look at evals, and I had a friend say, man, we tried a lot of different tools, but we realized that the tooling isn't the bottleneck; what the tools do isn't where the bottleneck is. The bottleneck is a lot of this data labeling, as you were saying earlier, when the business comes and they look at the interactions, right? And they say, wait a minute, why are you giving them a hamburger when they ask for pizza? That's not what they're asking for. And so you have those data labeling issues that take a lot of time, because if you're labeling it with humans, and you're labeling it with humans at the company, and you have a lot of data, that can just be a lot of work. And at the end of the day, that's going to give you the best lift. So he told me, you know, you can do these evals and you can get this LLM as a judge or LLM as a jury, all this fancy stuff. But what we found works the best is just getting people into an office, having a pizza party, and labeling data together.

Rani Radhakrishnan [00:28:24]: Well, that, and also having a good baseline to start with, right? And that was the standardization piece I was talking about. Right. If you know what the yield was before it was agentified, then you know what the increase in productivity is, if it's working, all that good stuff. Right. And this is also where, you know, we talked a little bit about measuring the ROI. Right. And to me, measuring the ROI is not just a straight, hey, I replaced four humans with four agents, so now I'm gonna take out the cost of the humans. It's not that.

Rani Radhakrishnan [00:28:59]: Right. Because there is a cost to putting these agents in. There is a human in the loop who is reviewing it for evals or whatever. Right. And so there's a cost associated with that. There is a cost of retrieval, storage and all of that in the back end. Right. Which you're not accounting for, which was not there when it was just a human doing the work.

Rani Radhakrishnan [00:29:18]: And you've got to compute all of this together to get the true ROI.
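Her point that agent ROI is not just headcount replaced can be made concrete with a back-of-the-envelope calculation. All figures and cost categories below are made-up monthly numbers for illustration, following the cost lines she lists:

```python
# Sketch of agent ROI that accounts for the costs Rani lists, not just
# the labor the agent replaces.

def agent_roi(labor_saved, agent_costs):
    """Return (net monthly benefit, ROI ratio) for an agent deployment."""
    total_cost = sum(agent_costs.values())
    net = labor_saved - total_cost
    return net, net / total_cost

costs = {
    "build_amortized": 4_000,   # build cost spread over its useful life
    "human_review": 3_000,      # human-in-the-loop review and evals
    "inference": 1_500,         # model/API calls
    "retrieval_storage": 500,   # retrieval, vector storage, logging
}
net, ratio = agent_roi(labor_saved=16_000, agent_costs=costs)
print(f"net benefit ${net:,}/month, ROI {ratio:.0%}")
```

Leaving out the review, inference, and storage lines would overstate the benefit, which is exactly the trap she warns about.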

Demetrios [00:29:22]: Yeah, yeah. It reminds me of the build versus buy question where everybody thinks, yeah, I'm not going to buy it because it's too expensive. And then they're like, I'm just going to go hire four engineers to help us maintain it.

Rani Radhakrishnan [00:29:35]: And you're like, yeah, exactly.

Demetrios [00:29:37]: I hope those engineers aren't based in San Francisco. Let's talk for a minute about some of the sustainability considerations that you've thought about.

Rani Radhakrishnan [00:29:47]: Yeah. So I would bucket sustainability into three different buckets. And I think some of it we already talked about a little bit. The first one is environmental, right. So there is the energy use, the carbon emissions, water consumption. Right. Are you picking the right size, where recycled water is being used, that sort of thing, hardware, materials, like E waste, all that stuff. So that's kind of the first chunk of environmental.

Rani Radhakrishnan [00:30:15]: Then there is the economic cost that we talked about a little. Right. So there is the cost of building, the agent, cost of the human reviewing the agent, cost of the storage, the retrieval. And when you don't need a large model, are you running a smaller model? Right. Are you making those selections very deliberately? And that's what I would put into the economic bucket. And then there is a social bucket, which is hugely important. And we talk about responsible AI quite a bit. We talk about the ethics associated with it.

Rani Radhakrishnan [00:30:50]: So that's good. But there's also accessibility, localization, transparency, explainability of the models. There's all those kinds of things that I would add into the social bucket. So, to make something sustainable. And I think this is what I was saying at the beginning: I think we are just getting into that phase where we are starting to think about what is sustainability, how do you measure the ROI? So far it's all been about, I'm so excited about AI and the possibilities of AI, where do I start? Now people have started, they've deployed a few things, and it's like, now how do I scale? But hey, wait a second, how do I sustain this? So I think these are all going to come into the picture a lot more over the next three to six months. Yeah, I'm saying three to six months; I would have said three to six years if you had talked to me a decade back.

Demetrios [00:31:43]: Yeah, so who knows? I do like how you are facing this. You're kind of abstracting it away from the weeds of the tech and really trying to think a bit more big picture and say, look, if we want to sustain these efforts, there are certain things we're going to need to invest in.

Rani Radhakrishnan [00:32:09]: Right.

Demetrios [00:32:10]: And none of those things are this new hot tech or framework that just came out. That's not the big-picture type of thing.

Rani Radhakrishnan [00:32:19]: Right.

Demetrios [00:32:19]: We need to look at the big picture here, which is: how are we going to make sure that we can justify this?

Rani Radhakrishnan [00:32:28]: Yeah, that's 100% it, because we are going to get there. Think about all the waves that we've been through over the past couple of decades, whether it's the Internet or moving to the cloud. It all came back to the same thing. Even with the cloud, the CIOs came back saying, wait, I'm going to spend a lot of money if I just keep storing all my data in the cloud. Is there a more optimized way of storing the data in the cloud? And I think that's the next wave with AI as well.

Demetrios [00:33:00]: Yeah. I've already heard folks talking about how they're starting to look at the cost and starting to think: if we do deploy this to production, as soon as you start thinking enterprise scale for that feature, now you're hurting every time the user says please.

Rani Radhakrishnan [00:33:23]: Yeah. I think efficiency and perfection have a price, and every company has to decide what that right trade-off is. How much money do you want to spend for just so much perfection? And you know, everything doesn't have to be 100%.

Demetrios [00:33:41]: I'm sure you're seeing it across the different use cases, where there's more appetite for risk or less appetite for risk, or there's more appetite to get this into production just because we really need something here.

Rani Radhakrishnan [00:33:56]: Yeah.

Demetrios [00:33:57]: But on this side, maybe the processes aren't fully dialed in, and so we can take a little bit longer.

Rani Radhakrishnan [00:34:03]: That's right, yeah. And it's the same call, or a similar call, that a lot of organizations make even in the DR space. How much do I need, and how much is good enough? You can keep building on that, and those are massive investments. I think everywhere it's the same principles you've got to apply. And I think the skill set you need, both to build and to maintain, from a software engineering lens, has evolved so much.

Rani Radhakrishnan [00:34:34]: Right. As you're saying the low code, no code ides. It almost is like you don't need a full stack developer as much as you needed them. I'm not saying you don't need them at all, but you don't need them as much as you need.

Demetrios [00:34:48]: Yeah, it's like you need them for other stuff. If you're looking at the whole product as a circle, you need them for this part of the circle, and then you can try and get the other stakeholders involved in that part. This was always kind of the case. I don't know if you remember it with ML: you needed stakeholders involved, but they couldn't really act. They had to be in the meetings and talk about things and say, yeah, but we can't do that because of this or that. But you didn't have this idea of, well, let's just empower you with a no-code solution.

Rani Radhakrishnan [00:35:22]: That's right. Yeah. And that's where I think it's becoming very interesting. The skills that you need on the build side are less coding and more designing. Once you design the right solution, it's very little time to code. And then you spend a lot of time in evals and measurement and all that. So prompt engineering, data scientists who can measure, compare, and run models and simulations: that comes into the picture.

Rani Radhakrishnan [00:35:55]: And then on the managed services side, it is again data scientists who can tune the models. The skill set is completely shifting. It's not about coding anymore when you are deploying agents, which is very interesting, because that impacts the skill set of who you hire as your entry-level workers, how they grow, and what path they take to grow within the organization. And I think the business knowledge is becoming more and more relevant now.

Rani Radhakrishnan [00:36:31]: You want. Yesterday, I think when I was at your event, I heard about how Product owners can start coding because you don't need to know code. So the product owners who understand the functionality, who have been designing the product, are probably the luckiest. Right. In this whole mix because now they don't need to rely on an engineer to come out with a code they can actually code. Right. And it's oversimplifying it, obviously, but I think I do see that shift.

Demetrios [00:37:01]: Exactly. I've been hearing the term product engineer more and more, and I also saw the trend of the CPO/CTO, where they're the same person. It's a product person, but they're technical, and that's the new cool thing to do. But I do think about the hard part.

Demetrios [00:37:24]: Yeah. Isn't may. Maybe it's not in the code. The hard part is in the making sure that if you have an agent, it has the right context. And so there still is a lot of data engineering that's going on in the background because. So getting that context, that's all moving data around. And who's really good at that? The data engineers.

Rani Radhakrishnan [00:37:45]: That's right. Yeah. So it's interesting how it's all shifting. And if you raise it up to the C-suite, there is the birth of the Chief AI Officer. We had the CIO, we had analytics and digital, and now we have AI. And with AI, it's shifting more towards the business.

Rani Radhakrishnan [00:38:09]: It's more of someone who understands the business, who can then advise the organization on what are the right AI investments that's going to then impact the company's productivity and efficiencies and all that good stuff.

Demetrios [00:38:25]: We'll wrap up here. There are a few really cool ideas that I'm taking away from this conversation. One is the idea of thinking about your system sustainability-first. I'd always thought about sustainability in the other bucket you put it in, which was the hardware side.

Rani Radhakrishnan [00:38:43]: Yeah.

Demetrios [00:38:43]: Try and reduce, reuse, recycle, sustainability, the environmental considerations. Yeah. This is like, we want this AI product to have a long shelf life, so how can we design with that goal in mind? Mind.

Rani Radhakrishnan [00:38:59]: Right.

Demetrios [00:39:00]: And that's probably the biggest thing that I'm going to take away. It is very oversimplified, because of course you would think like that. Why wouldn't you? But it's also easier said than done.

Rani Radhakrishnan [00:39:14]: Right, right, right.

Demetrios [00:39:15]: As anyone knows, you're working on RAG today, then you're working on agents tomorrow, and next it's prompt engineering or context engineering.

Rani Radhakrishnan [00:39:28]: Yeah, yeah, yeah, you know, if I had to talk about future ready IT operations, right. It is really look at it with a business mindset, right. I think as we look into the future, like making sure you're stabilizing your current state is so important with data modernization, app modernization, process standardization, all those good things, right. They are all, I think very important for a large organization to start with. And I also think that if a company has never dabbled in AI before, start with your back office, like I said, it's more forgiving. And then kind of move slowly into your front office as you build that muscle right within the organization, invest in good training and upskilling for your people sort of thing. Because you're not going to replace people like overnight. You're going to upskill them, you're going to elevate them, you're going to still have a human in the loop and educate them so that it doesn't become a culture of fear, but it's a culture of learning.

Rani Radhakrishnan [00:40:40]: And then there's, I would say start pilots in different departments. So everyone starts getting excited, but with that kind of centralized governance, if you will, so that there's some guardrails that are around everything and take advantage of all the cool technologies that everyone's putting out there. Like the number of AI startups that's out there is just mind boggling. And the names, right. So interesting.

Demetrios [00:41:07]: Also fun.

Rani Radhakrishnan [00:41:08]: Yeah, also fun. And like Agent OS that PwC has, there's a lot of frameworks out there take advantage of those frameworks because a lot of thought has gone into building those frameworks just to make this journey easy. And then last but not the least, I would say measure, measure, measure. Right. That feedback loop is just so important.
