Too much lock-in for too little gain: agent frameworks are a dead-end // Valliappa Lakshmanan // Agents in Production 2025
SPEAKER

Lak is an operating executive at an investment firm. He helps management teams in the portfolio employ data and AI-driven innovation to grow their businesses. Prior to this, he was the Director for Data Analytics and AI Solutions on Google Cloud and a Research Scientist at NOAA. He co-founded Google's Advanced Solutions Lab and is the author of several O'Reilly books and Coursera courses. He was elected a Fellow of the American Meteorological Society (the highest honor offered by the AMS) for pioneering machine learning algorithms in severe weather prediction.
SUMMARY
If your goal is to accelerate development of agentic systems without sacrificing production quality, a great choice is to use simple, composable GenAI patterns and off-the-shelf tools for monitoring, logging, and a few other capabilities. In this talk, I'll present an architecture consisting of such patterns that will enable you to build agentic systems in a way that does not lock you into any LLM, cloud, or agent framework. The patterns I talk about are from my GenAI design patterns book which is in early release on O'Reilly's platform.
TRANSCRIPT
Valliappa Lakshmanan [00:00:00]: Very, very, very happy to be here. I'm Lak Lakshmanan. What I'm going to talk about today is how to build agentic systems using composable patterns. So let me start out by saying what it is that I'm solving for. The idea is that we want to build an agentic system. And when you build one, you want to build it in an LLM-agnostic way. You want to build it in a location-agnostic way. And one reason you want to do that is that you may have legal reasons to do it.
Valliappa Lakshmanan [00:00:39]: You might need the larger LLMs that are more accurate but somewhat slower, and you may need to solve for latency. So as you build agentic systems, you want them to be LLM- and location-agnostic. Another thing that you often find is that you don't want rigid structures, because this space is extremely fast-moving. I think we had a speaker earlier today talk about how ReAct was this thing that you actually had to build: you had to build tool calling, you had to build chain of thought. But now these are directly in the LLMs themselves. So as you start to build your agentic systems, you want to be sure that you can rip out the things that you're building, but at the same time that you have control at the point that you need it. Because very often you need to match business workflows, and those business workflows may not translate very cleanly to rigid patterns like a sequential flow, a parallel flow, an orchestrator-worker, or whatever other pattern there is.
Valliappa Lakshmanan [00:01:41]: In an agentic framework you want to be able to do this in a general-purpose way, and the most general-purpose way is to explicitly code the orchestration mechanism. You also want to promote traceability. Traceability is really important in any agentic system that you build because, number one, people will ask for explainable responses, and there will be audit trails that you have to produce. You will need metrics: metrics on adoption, metrics on correctness, and metrics on a variety of business KPIs. And of course you will need evals. Evals and metrics are slightly different, but they're related; and to do either one of them, you need to be able to trace every action that your agentic system takes.
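To make that concrete, here is a minimal sketch of the kind of structured, append-only action log that can later feed audit trails, adoption metrics, and evals. The trace() helper and the JSONL file are illustrative choices, not something from the talk.

```python
import json
import time
import uuid

def trace(run_id: str, step: str, inputs: dict, outputs: dict,
          path: str = "trace.jsonl") -> None:
    """Append one structured record per agent action (hypothetical helper)."""
    record = {
        "run_id": run_id,    # ties every action to a single workflow run
        "step": step,        # e.g. "choose_writer" or "panel_review"
        "ts": time.time(),
        "inputs": inputs,    # the context the agent saw
        "outputs": outputs,  # what the agent produced
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

run_id = str(uuid.uuid4())
trace(run_id, "choose_writer",
      {"topic": "squaring the circle"},
      {"writer": "math_writer"})
```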
Valliappa Lakshmanan [00:02:32]: Also, something that you will often find is that you're not going to be building just chatbots. You're going to be building things that are ideally autonomous but are not ready to be autonomous yet, because their accuracy isn't high enough. Instead, you want to have a learning path, where human inputs over time train the system so that it can function more and more autonomously on some of the easier cases. And you want to do it in such a way that you mitigate business and business-model risk. Just last weekend we had the whole thing that happened with Windsurf and so many different iterations of purchases, et cetera. You want to be careful about where you're getting your lock-in.
Valliappa Lakshmanan [00:03:25]: Lock-in is unavoidable. Marriage is a lock-in; I'm married. But you've got to think about what kinds of risk you are willing to accept and what kinds of risk you want to mitigate if you can. You want to do all of this in a way that is production-capable: you don't want to build a POC and find yourself struggling to take it to production because your long tail of use cases becomes extremely hard to handle. And you want to do this in a way that is fast. You want to shorten your time to market.
Valliappa Lakshmanan [00:03:59]: The question is, can we do all this? And the answer is yes. I'm going to talk to you about the approach I follow as I build agentic systems, which is captured in this book called Generative AI Design Patterns. It just went to press with O'Reilly, and there's an early release out on the O'Reilly platform. There are about 32 patterns, some of which will be familiar, spanning development, testing, and evaluation, but done in a way that meets all of the criteria that we want. And my cheeky title for this talk is that you don't need agent frameworks: agent frameworks are too much lock-in for too little gain.
Valliappa Lakshmanan [00:04:47]: But I think you will see that as we go along. The idea here is that we want to accelerate development, so we want to reuse as much as possible, but we don't want to sacrifice production quality. And there are two schools of thought out there. On one side, there was a very widely read Anthropic post about how the most successful implementations use simple, composable patterns rather than complex frameworks. I recommend all of you go read it. Anthropic goes through what they mean by simple composable patterns and how to put them together to get agentic behavior out of your systems.
Valliappa Lakshmanan [00:05:30]: On the other hand, there is a great blog by Sierra where they talk about all of the things that an agent framework could potentially give you. They say it takes an enormous amount of investment to reliably orchestrate, securely integrate, enforce guardrails, build tooling, and maintain agents. Both of these are true. But as developers, as builders, what we want to do is balance these two concerns. And the balance I've come to propose is this: use simple, composable patterns. Use commercial off-the-shelf (COTS) tools for monitoring, evaluation, and guardrails. Build your agents so that they are aligned to business workflows and to the KPIs that you care about. And because of where LLMs and agents are today,
Valliappa Lakshmanan [00:06:24]: you build them to be continuously learning, making users much more effective, and building autonomy as you go along. And you build all of this on infrastructure that is proven to scale, on battle-tested infra, on microservices. Let's talk about how we would do this. Chapter 10 of the book is an example application built along these principles. You can find it on GitHub; I'll post a link to the GitHub repo. It's called composable-app. It basically has a bunch of agents, the evaluations, the user interface, the prompts that go in, some utility packages, and so on.
Valliappa Lakshmanan [00:07:06]: The application itself looks something like this. It is a workflow app in education tech. The idea is that we want to create a workbook that answers a specific question or covers a specific topic. Let's say we want to build something on, say, squaring the circle. So we say, okay, I want to write something on squaring the circle. At this point, the next step is that the system finds which writer is best suited to write on this topic. The system proposes: you want to square the circle? Well, I will use a math writer. You can accept this and go to the next step.
Valliappa Lakshmanan [00:07:47]: Or the human user can say, no, squaring the circle is really a history lesson that we want to do; it's not math. And then they can say: next, I want to write a history topic on squaring the circle. At this point you get a draft. And notice one thing: each of these things is editable. The expert going through this can go ahead and edit them. They can edit the keywords that are created.
Valliappa Lakshmanan [00:08:17]: They could even go ahead and modify the draft. For example, they could say: make the text into bullet points. They specify that change, and at this point the text has been changed. They might say, well, I don't want the compass keyword; it's not really a geometry thing. And then, okay, it goes to a panel. And this is a much more complex agentic thing, because a panel in this case consists of a bunch of different reviewers: someone who represents a district, somebody who's reviewing for grammar, somebody who's looking at it from a conservative lens, someone who's looking at it from a liberal lens, and so on.
Valliappa Lakshmanan [00:09:05]: So you basically have a bunch of different people on the panel, with different perspectives, reviewing it. And at the end of that there's a second level of panel review. I won't bore you with all of that, but it basically goes through once the first review is done. Just wait a second while the review happens... okay, there you go; there's always a problem with live demos. So the second review happens, and then we say, okay, I'm happy with all of that.
Valliappa Lakshmanan [00:09:40]: And it goes to someone who summarizes this review: the secretary of the panel. They basically say, this is how you need to go rewrite everything; these are the goals. Here are some specific instructions: briefly introduce the topic, consider stating this thing, say why it's impossible.
Valliappa Lakshmanan [00:09:58]: Explain what transcendental numbers are, explain the historical context, and then address all of these points, based on all of the reviews that have happened. It goes back to the writer, who rewrites everything, and we get the final version. The app itself isn't as important as a few concepts that I'm going to talk about as we go through this talk. One of the key concepts is this: you noticed that I made a few changes as we went along. That is human feedback. You can go back and look at what feedback was provided.
Valliappa Lakshmanan [00:10:29]: And you can see, for example, that anytime the human provides feedback, there's the AI input and the specific output that was produced. That output is recorded here for squaring the circle: you'll notice the AI suggested a math writer and I changed that to a historian, and that has now been recorded. Similarly, in the text, the AI suggested a draft, I made it change the draft to bullet points, and now the bullet-pointed text is recorded too. The idea is that you capture this feedback as you go along because you want to be able to go back and retrain on it.
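A minimal sketch of what recording that suggestion-versus-edit pair might look like; the class and field names here are hypothetical, not taken from the book's repo.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FeedbackEvent:
    step: str          # which workflow step, e.g. "choose_writer"
    ai_input: str      # the context the agent saw
    ai_output: str     # what the agent suggested
    human_output: str  # what the human accepted or changed it to

    @property
    def was_corrected(self) -> bool:
        # unchanged outputs are implicit approvals; changed ones are training signal
        return self.ai_output != self.human_output

event = FeedbackEvent("choose_writer", "Topic: squaring the circle",
                      "math_writer", "history_writer")
assert event.was_corrected
```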
Valliappa Lakshmanan [00:11:17]: This is the entire flow, and it captures a lot of what you will be doing when you build agentic systems. You want to be able to choose the agents that handle the task. You want to have multiple steps in your workflow, some of which involve multiple agents talking to each other. There is a lot of state being transferred from one step to the next. And underlying all of this is a whole set of human feedback that you need to capture. So how do you do this while meeting all of the goals I talked about?
Valliappa Lakshmanan [00:11:53]: First, some things need to be very composable. You need to be able to add them wherever and whenever you want, and to use them independently in different workflows and different applications. Those are your agents. Everything in orange here is an agent, each aligned to the specific job that you want it to do. You give it a role, and that role is basically a system prompt. It's configured, and I'll show you how that's done.
Valliappa Lakshmanan [00:12:22]: Then there is stateful context that is moved from one agent to the next. All of that has to be dynamic, because data is being created along the way, and the prompts need to be dynamic too: they need to pull in data from the state. The workflow needs to be grounded in the business, in terms of the tools that are available and the knowledge that's available. And always, always, the agents produce structured outputs, because those are the outputs you can actually evaluate, and they let you do things much more reliably. You do want to reuse a few things: reuse things that are battle-tested, and don't reinvent the wheel. Things like continuous integration and the standard ways to deploy: go deploy to Lambda, go deploy to Cloud Run, deploy to something that is known.
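To illustrate the structured-outputs point, here is a sketch that validates an agent's reply against a Pydantic schema so that malformed output fails loudly and the parsed result is easy to evaluate; call_llm is a hypothetical stand-in for whatever client you actually use.

```python
from pydantic import BaseModel, ValidationError

class WriterChoice(BaseModel):
    writer: str     # e.g. "math_writer" or "history_writer"
    rationale: str  # why this writer fits the topic

def call_llm(prompt: str) -> str:
    # hypothetical stand-in for a real LLM call
    return '{"writer": "history_writer", "rationale": "It is a historical topic."}'

raw = call_llm("Pick the best writer for: squaring the circle. Reply as JSON.")
try:
    choice = WriterChoice.model_validate_json(raw)  # structured, evaluable output
except ValidationError as err:
    raise RuntimeError(f"agent produced unparseable output: {err}")
print(choice.writer)
```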
Valliappa Lakshmanan [00:13:14]: Use standard, off-the-shelf ways of doing monitoring; all of your software-lifecycle tooling should be standard, off-the-shelf things. Then persistent systems are a special case that you will need in your agentic workflows: memory, vector databases, caching. Those are horizontal, not composable, so you want to reuse them. And the agents themselves: build them in an LLM-agnostic way. Use Pydantic AI, use LlamaIndex, use something very lightweight that allows you to quickly switch from low-latency LLMs to more accurate LLMs, or from cloud LLMs to models you host locally, for example. And finally, make sure that you make space for human expertise. The orchestration needs to be fully controllable; you cannot give that up to a framework, because that is often where the domain knowledge comes in. You want to design for traceability and continuous learning. All of that user interface I showed you requires careful design, for every kind of agentic system you build, because you want to support traceability.
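As one possible reading of that advice, here is a sketch using Pydantic AI, where the model is a configuration string, so swapping a fast cloud model for a more accurate or locally hosted one is a config change rather than a rewrite. The exact pydantic-ai API varies by release (older versions use result_type and result.data), so treat this as approximate.

```python
import os
from pydantic_ai import Agent

# The model identifier comes from config, not code, keeping the agent LLM-agnostic.
MODEL = os.environ.get("WRITER_MODEL", "openai:gpt-4o-mini")

writer = Agent(
    MODEL,
    system_prompt="You are a history writer producing short lesson drafts.",
)

result = writer.run_sync("Draft two sentences on squaring the circle.")
print(result.output)  # recent releases; older ones expose result.data
```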
Valliappa Lakshmanan [00:14:29]: Every single thing needs to be recorded, and recorded in a way that promotes continuous learning. And you need to build domain-specific evaluations and domain-specific corpora. So the system architecture consists of a few things: the agents, the multi-agent architecture, the governance, the learning pipeline, and, very importantly, a way to improve things over time. Because I've never been in a situation where things are good enough at the start; we always have to go through a process to get to the point where it's good enough to deploy. The agents, as I said, are independent executors. They are a function of the context and a function of the tools.
Valliappa Lakshmanan [00:15:10]: And you design your user interface to allow a user to complete the work. You have to be very careful that all of the information the user uses to do the work is also available in the agent's context, because the agent has to learn from it. Use patterns like chain of thought, RAG, tool calling, et cetera, to enable the agents to do their work, but implement every agent independently: inputs and outputs should contain the full state, and the code should be LLM-agnostic. And make sure that your prompts are kept separate from the code itself. I tend to use Jinja, for example. There's a prompt service, with, say, a secretary system prompt and a secretary consolidation prompt, and each of those is a templated thing, a Python object that we can templatize. You want to separate prompts from code because you want to maintain configurability, and you want to enable prompt optimization and reinforcement learning.
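A minimal sketch of that separation, assuming a prompts/ directory of Jinja templates whose names are illustrative:

```python
from jinja2 import Environment, FileSystemLoader, StrictUndefined

# prompts/ holds files like secretary_system.jinja, versioned apart from the code
env = Environment(loader=FileSystemLoader("prompts"), undefined=StrictUndefined)

def render_prompt(name: str, **state) -> str:
    # StrictUndefined makes a missing state variable fail loudly, not silently
    return env.get_template(f"{name}.jinja").render(**state)

system_prompt = render_prompt("secretary_system", panel_name="district review panel")
consolidation = render_prompt("secretary_consolidate",
                              reviews=["too informal", "add historical context"])
```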
Valliappa Lakshmanan [00:16:24]: The second thing is your multi-agent architecture. As I walked through the workflow, you saw a copilot workflow that's human-powered: I was making choices, I was making changes. That is important, because it provides a way for the system to get better. If you start out with a fully autonomous workflow, where you just take all of those steps and run whatever happens when you hit next every time, chances are that your performance is going to be very poor, because the standard operating procedure is not what actually happens in practice. So you need a way to learn all the unwritten rules that exist: whether it's about finding what the next step is, whether there's a hard-coded transition, or whether there are much more complex internal dynamics and market mechanisms for choosing the next agent. You need a user interface that lets you capture those things in order to build an agentic system.
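One way that explicitly coded, human-in-the-loop orchestration could look is a plain hand-written sequence where every suggestion passes through the UI before the next step runs; all object and function names here are hypothetical.

```python
def run_workflow(topic: str, ui, agents) -> dict:
    """Hand-coded orchestration: every step is explicit and human-overridable."""
    state = {"topic": topic}
    # ui.confirm shows the suggestion, returns the (possibly edited) value,
    # and records the before/after pair as human feedback
    state["writer"] = ui.confirm("choose_writer", agents.choose_writer(state))
    state["draft"] = ui.confirm("draft", agents.write(state))
    state["reviews"] = agents.panel_review(state)              # fan-out step
    state["goals"] = ui.confirm("goals", agents.consolidate(state))
    state["final"] = agents.rewrite(state)
    return state
```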
Valliappa Lakshmanan [00:17:32]: The next thing to think about is governance. For deployment and monitoring, this is just another app: deploy on your serverless platform, use your security tools, use your monitoring tools. But when it comes to persistent primitives, you need standalone services: for memory, for a vector database, for caching, for version management. Even there, though, try the simple solution first, and use a persistent service only when you need it. For example, for memory: a lot of the time, for short-term memory, you can just accumulate the state and pass it around. When it gets too large, then you need a dedicated memory store; or when you have long-term memory that spans sessions, then you need a persistent service. The same goes for a vector database.
Valliappa Lakshmanan [00:18:03]: In many cases you can just do comparisons directly. Instead of building a RAG system, you can just build a search system: take very large chunks, entire PDFs, for example, and put them straight into the context. It's only when things become very large that you might want to think about graph databases, hierarchical systems, et cetera, and at that point you should go out and buy. The same goes for caching, and the same for version management.
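A toy sketch of that simplest-thing-first retrieval: score whole documents with plain keyword counts and stuff the best ones into the context, with no chunking, embeddings, or vector database. Purely illustrative.

```python
def top_docs(query: str, docs: dict[str, str], k: int = 2) -> str:
    """Naive keyword search over whole documents (entire PDFs' text, say)."""
    terms = query.lower().split()
    scored = sorted(
        docs.items(),
        key=lambda item: sum(item[1].lower().count(t) for t in terms),
        reverse=True,
    )
    # concatenate the top-k full documents into the prompt context
    return "\n\n---\n\n".join(text for _, text in scored[:k])
```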
Valliappa Lakshmanan [00:18:43]: Then the guardrails. I've always found that guardrails are something I need to add and tune as we go along. I treat guardrails as prompts in Jinja: something that can be configured and applied to every input, for example. There's one trick you want to be aware of: guardrails can add latency. Whenever you apply a guardrail, do it in an async system where you run the guardrail in parallel with the thing that it's guarding. The idea is that in most cases the guardrail is going to say everything is fine, so you haven't lost any latency. In the cases where the guardrail throws an exception, the other task gets preempted, and in return for a little bit of extra work you avoid losing time to the guardrail.
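A minimal asyncio sketch of that trick: start the guardrail and the guarded call concurrently, and cancel the in-flight work only if the guardrail objects. The check and work functions are hypothetical stand-ins for real LLM calls.

```python
import asyncio

async def guarded(check, work, user_input: str):
    """Run the guardrail concurrently with the work it guards."""
    guard_task = asyncio.create_task(check(user_input))
    work_task = asyncio.create_task(work(user_input))
    if not await guard_task:   # common case: guardrail says everything is fine
        work_task.cancel()     # rare case: preempt the in-flight work
        raise ValueError("input rejected by guardrail")
    return await work_task     # no added latency on the happy path

async def check(text: str) -> bool:   # hypothetical guardrail call
    await asyncio.sleep(0.1)
    return "forbidden" not in text

async def work(text: str) -> str:     # hypothetical main LLM call
    await asyncio.sleep(0.5)
    return f"answer for: {text}"

print(asyncio.run(guarded(check, work, "squaring the circle")))
```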
Valliappa Lakshmanan [00:19:41]: Then the next-to-last thing is the learning pipeline. As we talked about, you build the system in such a way that it's all based on prompts. But then you take those scores, you take the criteria, and you take the human feedback, and you train your ML models, you train your task assigners, and you tune your LLMs so they can become much smaller, based on human feedback, whether through direct preference optimization or instruction tuning. You've got to have some post-training phase to get from a POC to production; I've never had something that just works purely based on what worked in the POC. So you want to build your systems for continuous learning and human feedback. I showed you this idea of capturing every piece of human feedback so that you can train on it. Plan for your post-training, whether it's preference tuning or adapter tuning, and make sure that everything you show the user in your copilot workflow is editable, so that you can capture that human feedback. Your long-term memory is going to accumulate from all over, so you want to capture it and figure out how long you're going to maintain it.
Valliappa Lakshmanan [00:20:47]: But you've got to maintain that control over long-term memory even if you use a persistent service. And finally, build a data program. You need to augment your learning pipeline with a data program, because often you will have an insufficient number of human corrections. Over time, people are going to give you less and less feedback: as the system gets better and better, you run into what is called automation fatigue, where it works so well that people assume it is correct all the time. So you actually have to spend more time getting the right kind of feedback that you need. The bottom line of all of this is that you start with something that's extremely modular and extremely reusable, you maintain full control of orchestration, but you use standard protocols and standard tools wherever they're already available.
Valliappa Lakshmanan [00:21:37]: And the good thing about all of this is that because you're using standard software-development practices, you can piggyback on existing approvals, in terms of IAM, in terms of permissions, in terms of infrastructure, et cetera, because you don't have to go get approvals for a new set of tools. Many of the patterns that I talked about here are in the book, and you can go to the GitHub repo; there are example implementations of every one of the patterns. Take reverse neutralization, for example: it solves a very specific problem in your training data set.
Valliappa Lakshmanan [00:22:20]: What if you only have good examples, and you don't have the kind of example that you're starting out with? How do you train? How do you make your LLM better in that situation? There are several problems like this that come up over and over again, not just for us but for the expert engineers I've talked to over time, and we've captured all of these in the book. That is the link to the repo, and if you look at the repo, there's a link to the O'Reilly book as well. And with that, I will take questions.
Demetrios [00:22:57]: Nice. I will say that your Machine Learning Design Patterns book, I guess that's the prequel to this book, is still on my bookshelf, and I reference it to remember, like, what was one-hot encoding again, exactly? That type of thing is so good and so useful. So you're a veteran at this.
Valliappa Lakshmanan [00:23:22]: Thanks. Thanks.
Demetrios [00:23:24]: Now, questions.
Valliappa Lakshmanan [00:23:26]: Yes.
Demetrios [00:23:28]: I've got one about this topic generator. I was thinking about it because today I was listening to a podcast, and it was saying how useful it is when you can plug LLMs into your different data stores, your personal data stores. So for example, if I could plug it into my email, or plug it into whatever, and take context from there and then add it to the book. And I was thinking, oh, how would that work here? Maybe it's a memoir that you want to write, and you say: I want to write the memoir of when I was building XYZ company; here, plug into my email, plug into my Slack as a data store, and then you can use that for the writers too. I imagine the design pattern would be different when you have to interface with things as an agent.
Valliappa Lakshmanan [00:24:22]: Right, right. That's a great point. What you're talking about is personalization: building a common agentic workflow where every agent essentially uses data that is personal to the person using it. One of the nice things in a lot of enterprise situations is that you're dealing with enterprise data that is consistent, in a somewhat standardized form, assuming the enterprise has gone to the effort of building a data lake, building an ontology, et cetera. If they haven't, then you have a problem.
Valliappa Lakshmanan [00:24:58]: But when you're building consumer apps, you don't have that luxury, and when you don't have that luxury, you need to design for it. A lot of the work involved is in dealing with the heterogeneous data that you're going to get. When you say, load your memoir, no two people are going to write their memoir in exactly the same way. When you say, give me all of your shopping lists, no two people are going to organize their shopping lists in exactly the same way. This is where the UX design becomes extremely important: even if people don't organize things the same way, by walking them through a UX flow you can ensure that you have a good semantic understanding of what they are actually building and what they're actually talking about. So the more heterogeneous your environment is, the more important the UX design of a copilot is for getting to the autonomy stage.
Demetrios [00:25:54]: I'm also wondering how you feel about where you put that in the enterprise context. Are you saying that you should put all the roles and access onto a different system, like the data lake, or should there also be some kind of roles-and-access syncing with the agent?
Valliappa Lakshmanan [00:26:20]: Right. Oftentimes what you will find is that the agents you build map really closely to people in that organization. There will be editors, there will be writers, there will be loan approvers; there are people in those roles today, and the system prompt of the agent will often be for it to act as those people, with very similar incentives. When you design these systems that way, those agents can get the same kinds of roles, into the same databases, that you have given the people playing those roles. Having said that, you have to be really careful. We had a lot of talks today about the depth of workflows and how long they take.
Valliappa Lakshmanan [00:27:14]: What I've often found is that agentic systems work when you do a plan and you do an execute, but the moment you take that execution and use it to reframe the plan, they start to go haywire. At least today, the state of the systems is that you go one level deep; a second level of depth is a recipe for disaster. So I recommend that people design systems to go one level deep and no deeper.
Demetrios [00:27:41]: Yeah, I'm sure that's going to improve.
Valliappa Lakshmanan [00:27:43]: Over time, but this is where we are today.
Demetrios [00:27:45]: It's the "this is fine" meme, basically, when you go too deep. Now, Will has a question coming through in the chat about how you handle conflicts when multiple agents write to shared memory or act on the same step in parallel.
Valliappa Lakshmanan [00:28:06]: Great question. So let me take those two things apart. First, how do we handle the conflict when both systems write to memory? Memory, to me, is an appendable thing. Memory is not like a transactional database; memory is much more like an append log.
Valliappa Lakshmanan [00:28:25]: So the idea is that every agent gets to write into memory, and it is the job of whoever reads the data from memory to resolve the conflict in the way that makes the most sense to them. Sometimes one agent is more trusted than another agent, and whether someone wrote first or second doesn't really matter. Other readers might say: in this append-only log, I'll take the latest entry in case of conflict. But the idea is that conflict resolution is the job of the reader of the memory, not of the writer of the memory. So I treat memory typically as an append-only thing.
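A sketch of that append-only memory, with conflict resolution left to the reader via a pluggable policy; class and method names are illustrative.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Entry:
    agent: str
    key: str
    value: str
    ts: float = field(default_factory=time.time)

class AppendOnlyMemory:
    """Every agent may write; readers decide how to resolve conflicts."""
    def __init__(self) -> None:
        self._log: list[Entry] = []

    def write(self, agent: str, key: str, value: str) -> None:
        self._log.append(Entry(agent, key, value))  # append, never overwrite

    def read(self, key: str, resolve=None):
        entries = [e for e in self._log if e.key == key]
        if not entries:
            return None
        # default policy: latest wins; a reader could instead prefer a trusted agent
        resolve = resolve or (lambda es: es[-1].value)
        return resolve(entries)

mem = AppendOnlyMemory()
mem.write("grammar_reviewer", "tone", "too informal")
mem.write("district_reviewer", "tone", "acceptable")
print(mem.read("tone"))  # "acceptable" under the latest-wins default
```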
Valliappa Lakshmanan [00:29:07]: The other question that we had was about the tasks.
Demetrios [00:29:11]: Yeah. Multiple tasks.
Valliappa Lakshmanan [00:29:13]: Multiple tasks in parallel, right. What happens if you have agents that are doing multiple tasks in parallel?
Demetrios [00:29:20]: They're doing the same step. So they're acting on the same step in parallel.
Valliappa Lakshmanan [00:29:24]: They're acting on the same step, and this is extremely common in an agentic system, because you will have a group of agents that need to act at once: you have a fan-out situation, with a lot of them doing things in parallel. That is perfectly fine. But whenever you do that, look at my example with the panel: you basically had a panel.
Valliappa Lakshmanan [00:29:51]: But what I'm not showing here is that we also have this whole idea of the summarizer. Okay, yeah, there it is: there's somebody whose job it is to take the results, essentially summarize them, and send them back. That is the conflict-resolution step. So yes, all of these agents are acting in parallel.
Valliappa Lakshmanan [00:30:10]: And that is okay, because we introduce a conflict-resolution step: even if the liberal panelist and the conservative panelist are at odds with each other, that conflict gets resolved by the secretary when the reviews come out. And this is another thing that is important in agentic design: if you let these agents talk to each other, especially agents with completely opposed system prompts, they escalate very rapidly, and pretty soon they're walking out of the room, in essence. It's really hard to control the behavior of agents that are working in parallel, especially with completely different outcome metrics. Unfortunately, that's very common: you have people whose role is to balance each other out, so they have conflicting metrics, and you will have that in any kind of agentic workflow.
Valliappa Lakshmanan [00:31:05]: So you have to have this escalation point, and an agent that resolves the conflicts. This is the way I tend to do it: I have one layer and a resolution, and I'm done, rather than allowing the agents to talk to each other and cause endless escalation.
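A sketch of that one-layer fan-out with a single resolution step; the reviewer and secretary objects are hypothetical, standing in for agents with opposed system prompts.

```python
import asyncio

async def panel_step(draft: str, reviewers, secretary) -> str:
    # Fan out: every panelist reviews the same draft in parallel,
    # without ever talking to the other panelists.
    reviews = await asyncio.gather(*(r.review(draft) for r in reviewers))
    # One resolution layer: the secretary consolidates the conflicting
    # reviews into a single set of rewrite goals. Stop there; no deeper loops.
    return await secretary.consolidate(draft, reviews)
```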
Demetrios [00:31:20]: Yeah, and just burning credits too. It's like, stop, you just ran up my bill.
Valliappa Lakshmanan [00:31:29]: Yeah, yeah.
Demetrios [00:31:30]: Well, this is great. One last question that I have for you. You opened my mind back in the early ChatGPT days when I was thinking about use cases, and you told me this one where you said: look, with e-commerce now, anything that can be personalized is going to be personalized to the maximum, because we have the ability to use unstructured data and generate unstructured data to personalize. So when it comes to the shopping cart, or the abandoned-shopping-cart email that you get, or any email that is generated, we can generate the right kind of photos that we know you like. You said this, and I was like, wow, that is really cool; it makes me want to get into e-commerce just to personalize. But now we're two or three years later. What are you excited about as far as use cases go?
Valliappa Lakshmanan [00:32:39]: What I'm excited about, what I'm seeing often, is business expansion. Again, on the enterprise side, we're seeing that we have so many capabilities. I think of it like travel: it used to be that if you wanted to go on a safari in Africa, it was a very expensive thing and only a few people could do it. Then travel got democratized, and many of us can go to amazing places because the cost structure changed. And I'm seeing that AI is changing the cost structure in so many businesses, so things that used to be bespoke and available only to very few people and very few businesses are now moving, quote-unquote, down-market; basically, they are getting democratized.
Valliappa Lakshmanan [00:33:32]: And again, I don't want to name names, but I see people building in spaces where they're taking these very luxury experiences, things that used to work only if you had $5,000 to spend, and making them possible at around $50: a hundredfold reduction in cost. That's what's really exciting to me. I know you wanted a technical answer and I'm giving you a business one, but what's exciting to me is this complete expansion of so many different markets and experiences. I think we're all in for a treat, because of how much more we'll all be able to do.
Demetrios [00:34:19]: Yeah, it's funny you say that, because my friend Dimitri talks about how the team he works with, a bunch of machine learning engineers and data scientists, et cetera, don't deliver a Streamlit app as an MVP anymore. What they deliver is a React app that they kind of vibe-coded until it got good enough to be a prototype. So it's almost like the ability for you to go further and do more in the business.
Valliappa Lakshmanan [00:34:52]: Yeah, I'm seeing a lot of product managers now skipping the Figma stage altogether; they're able to go build a simple app. It's all hard-coded responses, but they take it to users and show them a real app rather than a bunch of Figma screenshots. It's mind-blowing right now. There's a lot of capability getting unlocked: more users, and much greater velocity and variety.
Demetrios [00:35:23]: Excellent, Lak. I just realized we're a bit over time, so I'm going to kick you off the stage now. Thank you very much.
Valliappa Lakshmanan [00:35:29]: Thanks.
