MLOps Community

Enterprise AI Adoption Challenges

Posted Jul 29, 2025 | Views 12
# AI Adoption
# Toqan
# Prosus Group

SPEAKERS

Paul van der Boor
Senior Director Data Science @ Prosus Group

Paul van der Boor is a Senior Director of Data Science at Prosus and a member of its internal AI group.

Sean Kenny
Senior Product Manager @ Prosus Group

Part of the Prosus AI team, focused on launching and growing cutting edge AI products.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.


SUMMARY

Building AI Agents that work is no small feat.

In Agents in Production [Podcast Limited Series] - Episode Six, Paul van der Boor and Sean Kenny share how they scaled AI across 100+ companies with Toqan—a tool born from a Slack experiment and grown into a powerful productivity platform. From driving adoption and building super users to envisioning AI employees of the future, this conversation cuts through the hype and gets into what it really takes to make AI work in the enterprise.


TRANSCRIPT

Paul van der Boor [00:00:00]: So let me start with the metrics on the productivity side.

Sean Kenny [00:00:03]: You want to make sure that you keep your power users. These are the guys who are going to really push you to the next level. Users right now scream, oh, we want Google Drive. What does that mean? They're like, I don't know, but I want it. We're now in the loop of: can we get this to work at all? Can we make it something that users can easily set up? And then can we make it so intuitive that it's going to be a no-brainer?

Demetrios [00:00:28]: We talk with Sean and Paul, both employees of Prosus. Paul is the VP of AI and Sean is the product lead for Toqan. I myself am Demetrios, the host of the MLOps Community podcast that you're listening to right now. Let's get into this conversation. This episode I wanted to view from a product lens, and Toqan has been built and it's had a lot of iterations. It's also had a lot of learnings, specifically around how to get users to use AI products.

Paul van der Boor [00:01:00]: Yes. So just to kind of sketch the environment, the group that we work at, Prosus, right? So it's a big global group with about 100 companies part of it, some very large like OLX and Swiggy and iFood and Delivery Hero and others, and some smaller. But at the center here is the AI team that we have built here in Amsterdam. Our mission is to help the entire group adopt AI everywhere it makes sense. And so in the early days of these large language models, we were partnering with many of the big labs out there, before ChatGPT and so on, to kind of understand where this was going. And it was very evident, at least to us, that this technology was on an acceleration path. And so in the early days we basically started to make these language models available through Slack, for people to kind of experiment with them. And eventually that grew out to be Toqan. And today it's basically a productivity tool, and even starting to become sort of a platform for folks to build on top of all the language models that we continuously expose through Toqan, not just in Slack, but also in a web interface if people want to chat to it, and so on.

Paul van der Boor [00:02:10]: And now also through APIs, to build on top of all of these large language models, generative models for images, and other agentic systems that are sort of powering Toqan and many of the use cases across the group.

Demetrios [00:02:26]: What are some product metrics you've put around the usage of Toqan?

Paul van der Boor [00:02:29]: Yeah, there are basically two buckets. So the first is around productivity and adoption of GenAI across the workforce. So we believe that it's really important for everybody to play with it, actually, for them to understand how these models work, what they can and can't do. And the only way to do that, we found, is for people to try it. Because the technology changes so fast, you can't just design a course and put it on our internal learning portal and say, learn prompting. It's going to be outdated by the time people take it, because that course has a half-life of three weeks. So that's the first bucket. So there's a bunch of metrics which I can highlight there.

Paul van der Boor [00:03:06]: And the second is around adoption for actual consumer-facing or internal use cases across the group. So we are a very large group, 2 billion users that we serve in different parts of the world. There's a ton, you know, we've got thousands of ML models and use cases that could be powered by some of the same systems that we're building for Toqan, that are agentic, that use the latest large language models, the tool calling and so on. So let me start with the metrics on the productivity side. There we primarily focused on the number of users that tried Toqan. So how many people have actually tried it. We're trying to get to basically more than 80% per company of people that have tried it, because there's a big hurdle. Just try asking it a question related to your work. You know, how do I summarize this, how do I translate that, so people start to do that. The second is actual frequency of usage.

Paul van der Boor [00:04:04]: And so there's a really interesting, let's say, evolution we've seen happening in how people go from casually asking it a question once a week, once a month, to becoming super users. And I can share some stats around making sure that we see growth of super users. That's basically the category that asks more than five questions a day.

Paul van der Boor [00:04:28]: And then the third is total number of questions asked across the group. So those are the kind of three metrics we focused on for a long time. And then on the second bucket, which is more on the adoption of use cases for not productivity but everything else. Right. So where you put the Toqan engine as part of your workflows through an API. And then we measured API calls and also the number of individual use cases that we're seeing using the technology underneath the Toqan engine, as we call it.
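The three productivity metrics Paul describes (share of employees who have tried the tool, super users asking more than five questions a day, and total questions asked) can all be derived from a simple question log. A minimal sketch, with hypothetical data and column shapes, since the actual Toqan analytics are not public:

```python
from collections import Counter
from datetime import date

# Hypothetical question log: one (user_id, day) tuple per question asked.
log = [
    ("alice", date(2025, 7, 1)), ("alice", date(2025, 7, 1)),
    ("alice", date(2025, 7, 1)), ("alice", date(2025, 7, 1)),
    ("alice", date(2025, 7, 1)), ("alice", date(2025, 7, 1)),
    ("bob", date(2025, 7, 1)),
]
headcount = 3  # total employees at the (hypothetical) company

# 1) Share of employees who have tried the tool at least once
#    (the ">80% per company" target Paul mentions).
tried = {user for user, _ in log}
tried_share = len(tried) / headcount

# 2) Super users: anyone asking more than five questions on a given day.
per_user_day = Counter((user, day) for user, day in log)
super_users = {user for (user, day), n in per_user_day.items() if n > 5}

# 3) Total number of questions asked across the group.
total_questions = len(log)
```

With this toy log, alice's six questions in one day make her a super user, two of three employees have tried the tool, and seven questions were asked in total.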

Demetrios [00:05:02]: Seems like this five questions a day is like your North Star metric, because if you can get someone above that then they are now a power user, and you want as many people to become power users as possible. Is it like that? It reminds me of a stat I heard back in the day with Facebook: they would try and get someone who's onboarded to Facebook at least seven friends. And once they had seven friends, they were hooked.

Paul van der Boor [00:05:27]: Yeah, I definitely think that there is that sort of turning point where now you've become a regular user of these tools. Right. And by the way, just to be clear, we encourage people to use all the tools. Toqan is the in-house one, and we swap in and out models all the time so people can experiment with them and we learn which ones work better and which don't. But we also encourage them to use all the other tools. Right. In fact, if you look at my browser, I've got basically one browser with all my AI assistants open, right? That includes Claude, it includes ChatGPT, it includes Grok with a K, and it includes of course Toqan, it includes Gemini, and now I've got Devin and a few others that I'm testing. And so the five questions a day for Toqan, it wasn't as scientific as maybe the seven friends on Facebook, but once you become a daily user, your first intuition is, hey, should I ask this question to Toqan? That is the kind of thing you want to achieve.

Paul van der Boor [00:06:30]: And then it doesn't matter if it's Toqan or Cursor. We want people to become sort of natural users of these tools and help them do their work better, faster and more independently.

Demetrios [00:06:41]: How are you measuring that? Because I'm sure there's a lot of stuff that people are using that is outside of Toqan, but they're still using it and you can't really track it. Or can you?

Paul van der Boor [00:06:53]: Yeah, we can. I mean, not for all teams. So obviously for our team, we track all sorts of things, like how many PRs are being generated with the help of AI, how many PRs are accepted or commented on or involved some kind of AI. Right. And in the first days it was, well, how many people are actually using Cursor or a certain set of tools? How many active Copilot users do we have? So there are things in my team, or our team, that we can definitely track and do track across the group. We don't have that visibility everywhere, but we've done sort of experiments where we give part of a team access to one of these tools. Let's say in the early days of Copilot we had done that, and the other team not.

Paul van der Boor [00:07:38]: And then we do surveys and see how often they use it, what's the difference, what is the feedback, just for us to understand. Because there's a lot of independent research that also gets done that says engineers have an average productivity increase of 20%, but once you peel the layers a little bit further, it's like, which types of engineers, for what types of tasks? Is that 20% overall or is it just on those tasks? It's much harder to really pin down what the value is. And we're pretty convinced that the value is real and that it's in the double digits. We care less about whether it's 21 or 23 or 41 or 42 or whatever. We just want to make sure we make it easy for people to adopt if they want to, and then also have a good understanding of where we want to encourage people to adopt certain tools. We've got big engineering teams, thousands of people across the group that are building software, we've got big marketing and sales teams, we've got pockets like customer support and so on, where it's natural for us to make sure that those people can do their work better, faster and more independently. And we want to understand that. And that's why we measure these things.

Paul van der Boor [00:08:52]: And once we see that the value is real, we start to sort of encourage and push for adoption more actively.

Demetrios [00:08:58]: Is that what you talk about when you say like the AI workforce?

Paul van der Boor [00:09:03]: Yeah. So the AI workforce indeed has a big productivity element to it. So how do we make sure people become more productive? But like I said, it's really hard to kind of quantify this second value of people using AI, but it's so important in a tech company like ours. They need to have that intuition. Right. So if the lawyers and the folks in finance and HR and marketing, all the different functions, don't understand AI well, it's much harder to do all those other forward-thinking products and features, and to actually think about how do we want to build a better delivery experience, a better online shopping experience, online travel booking experience. Having that intuition is almost like a culture change. Right.

Paul van der Boor [00:09:52]: You need to have that on top of the productivity benefits, which are, you know, an obvious value as well.

Demetrios [00:10:00]: All right, so let's bring on Sean now to talk about how he's been encouraging adoption for Toqan because he's been the product lead and I know he thinks a lot about these trade offs and what to build for the power users versus what to build for the general public or the general employee.

Sean Kenny [00:10:19]: I'm Sean. I'm the product lead for Toqan at Prosus, part of the Prosus AI team. And I take my coffee in overdose. We actually got a barista in the office, and I started to only take machine coffee because I find that it takes the joy and the magic out of the weekend coffee.

Demetrios [00:10:36]: Ooh, I like it. What were you saying about the stigma?

Sean Kenny [00:10:41]: I think when Genai started or when it started coming out and you used it in order to complete a task, you were always trying to hide it because it always felt like the cheap way out.

Demetrios [00:10:51]: That's so true.

Sean Kenny [00:10:52]: And I think that's still there everywhere, right? I mean, it's, oh, this is a Devin pull request, it's not my pull request. Or this is something that I've written with Toqan. And you always feel like, this looks AI generated, which can be perfectly fine content-wise, but there's a little bit of hesitation, not even from the other side, but at least in the back of my mind on many tasks, of like, ooh. And then you're like, no, no, this is what we built it for. It should do this.

Demetrios [00:11:17]: It's one of those things that I contemplate a bunch: the doorman paradox, as I've heard it called, where a hotel got rid of all its doormen and thought, wow, we're saving money, this is great. But later it found out that people started loitering in the front. Also, the guests weren't having the best experience, because the first thing that hit them when they entered the building was that they were lost. And the hotel realized that a doorman does so many more things than just opening a door for someone.

Sean Kenny [00:11:48]: You mean that in terms of customer touch points or in general?

Demetrios [00:11:51]: Well, I think about it with AI, where we are so quick to try and use it, but we don't think about the second- and third-order effects of using it. And one of them is this. In my mind, I draw a parallel to, oh, people look at it and they think, hmm, this is AI generated, and maybe you lose trust with someone. Or it's not the same kind of outcome that you would want, especially when you see so many people touting, oh man, I got my content marketing flow perfect, because now AI does it all and it's this incredible pipeline, blah, blah, blah. But then you look at the content that comes out and you go, this is absolutely shit. You're just ruining your brand by putting this out there. It might be great that you can automate it all, but nobody's reading what you're actually creating.

Sean Kenny [00:12:46]: Yeah. And I think it's very much a question of when, and intent-wise, how you want to use it. I think deep research is an interesting example. If research is one of the steps in the job that you're trying to complete and you use deep research for that, it might get much harder to complete the steps down the line, because you don't actually know all the considerations that have gone in. And I think for many tasks, or many users actually, it's how you use it. Many users generate something without reading it and then do something with it. It's like, that's not how you're supposed to use it. It's not there to just blindly generate something.

Sean Kenny [00:13:24]: And so one of the things that we've been trying to teach users is to really use it like a buddy. Like, hey, if I were to go do something, how would you advise me to go about that? Right. And so more on a discussion level. Of course you can have the execution afterwards, but you should be very confident that at least it's going in the right direction. And then you can look at, okay, quality-wise. Of course you still need to read it. You need to make sure that what is there is actually useful.

Demetrios [00:13:46]: Let's talk about the evolution of Toqan, because now you're starting to see these clearly trodden paths of how people are using it. Like you just said, use it as a buddy, or use it for data analysis, use it for X. It wasn't always like that. And I think you all started working on it two years ago. It has been a long journey. What can you tell me? Just summarize the evolution.

Sean Kenny [00:14:14]: So let's start with probably the question of why our app exists in the first place. I wasn't part of the team at the time, but the team created a Slack app before, I think, GPT-3 came out, because the team started seeing these models come out and was like, well, they look really interesting, but it's still quite hard to find moments where we can use these in business. What can we do? Well, the easiest thing might be to just create an app where anybody can interact with them, and we'll see what people do. And from there we then figure out what we do next. And so we started creating a Slack app. At the time this was, again, far from agentic. Nothing like that was around. So we had some basic intent detection that routed, I think, four or five different kinds of work.

Sean Kenny [00:15:01]: And then that was spread through the portfolio companies that we have, at least the ones who wanted to adopt it. It was very organic, but it grew a lot faster than we thought it would. And at that point we decided, well, okay, let's actually turn this into a product. And we spent a lot of time on critical tools that are needed in order to complete tasks. They're now pretty commonplace, but the ability to work with any kind of file, generate files, and do data analysis came then. So these things we worked on quite early, still on the intent detection model. And then we just kind of tried to bring that more into, okay, how do we take this experimental app and make it something that people can actually complete work with? And that we then converted into an agent that seemed to quite drastically outperform, even at the time, the intent detection.

Sean Kenny [00:15:49]: And then from there we've just kind of...

Demetrios [00:15:52]: What do you mean, the system changed from intent detection to an agent?

Sean Kenny [00:15:55]: So we had a system where, when a user asked a question, we would route the request down almost like a workflow. And so we would have to figure out what intent the user had. And we were quite limited in the number of intents we could add. If we wanted to add a new intent, we would do some new fine-tuning and training of a model to do the routing. With an agent, you just add tools, quote unquote "just", and it's easier to manage and expand over time.
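The shift Sean describes can be sketched roughly like this: the old design hard-codes a small set of intents behind a trained router, while the agent design keeps an extensible tool registry the model can choose from. All names here are hypothetical; the real Toqan internals are not public:

```python
# Before: a fixed intent classifier routes each question to one of a few
# hard-coded workflows. Adding an intent meant retraining the router.
INTENT_HANDLERS = {
    "summarize": lambda q: f"summary of: {q}",
    "translate": lambda q: f"translation of: {q}",
}

def route_by_intent(question: str) -> str:
    # Stand-in for a fine-tuned classification model.
    intent = "summarize" if "summarize" in question else "translate"
    return INTENT_HANDLERS[intent](question)

# After: an agent picks from a registry of described tools. Adding a
# capability is just registering one more tool; no router retraining.
TOOLS: dict[str, dict] = {}

def tool(name: str, description: str):
    def register(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return register

@tool("summarize", "Condense a document into a few sentences.")
def summarize(q: str) -> str:
    return f"summary of: {q}"

@tool("sql_query", "Answer a data question by generating and running SQL.")
def sql_query(q: str) -> str:
    return f"rows for: {q}"
```

In a real agent, the tool descriptions would be passed to the language model, which decides which tool to call; the point of the sketch is that the registry grows by decoration rather than by retraining a routing model.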

Demetrios [00:16:21]: Up until, I think, six months ago. And then you all just started doing a ton of updates. Can you talk to me about the last six months and what you've been doing?

Sean Kenny [00:16:33]: So right now I'm the product lead for Toqan. But when I started, I actually joined kind of as an adoption advisor. So my role was to help the teams and companies and the employees there to actually use the product. We found out quite early that there is a set of companies in the portfolio who were adopting this tooling a lot faster than others. So we knew that there was potential and room there.

Demetrios [00:16:59]: Any idea why that is?

Sean Kenny [00:17:00]: Yeah, yeah. So this comes down to a lot of things, and the nuances are difficult to figure out. But much of it was almost top-down culture. Like, okay, let's give this a try, let's see where it goes. It's very much a mindset difference of being stubborn enough to make it work, wanting it to work, and being excited by all the opportunities that come from it. And so, at least on a larger-scale adoption level, it's very much a cultural thing of how does your company view this kind of technology? And a lot of it is driven by leadership and the way it's encouraged, because it's very easy to say, oh yeah, AI is important, but then there's no policy, there's no guidelines, there's no instructions around that.

Sean Kenny [00:17:39]: And that makes users not feel very encouraged; they're just more uncertain. And so building certainty around, hey, we don't just want you, but we expect you to spend time on these tools. And in the beginning it's going to cost time, and that's fine, but it's your responsibility to learn, and here are the resources that you have in order to learn it. But in the end, it's on you to figure this out. And you should be interested in this, not just because it's good for us as a company, but also because it's good for you. It's a skill that will only become more and more important. And so the sooner we start, the better this is going to get.

Demetrios [00:18:11]: The cultural aspect, I like that.

Sean Kenny [00:18:13]: And there's a certain flexibility and nimbleness in there. It also is part of the culture, but it's not just the adoption and go and try it. This inevitably changes how you work. And that's something everybody says, but in reality, oftentimes, especially these kinds of new technologies conflict quite heavily with existing processes. You do something in a certain way, and it's not always easy to just plug in, plug out; you might have to rethink everything that you're doing, which some teams and some people are more comfortable with than others. And generally speaking, the ones who are more willing to experiment, to just see, hey, does this actually work? Does it help? And then are quick to say yes or no. They're the ones who benefit from this a lot. And then the secondary effect is, if you have a subset of those teams available, then you want to look at sharing.

Sean Kenny [00:19:03]: And so that's the second part, where it's quite painful for individuals to do a lot of this discovery, because lots of it is still quite technical and very difficult, and most of it isn't particularly user-friendly. Actually, agents are kind of the opposite of user-friendly, at least in the earliest iterations. And so building a community of sharing is super important, because that means that if one user discovers something, now the next 10 don't have to make the same mistakes. And the difficulty, it goes back to stigma, is asking questions. Asking for help on things like this, where you feel like everybody is 10 miles ahead, feels so stupid. But oftentimes everybody's struggling with exactly the same thing. Right. And so, yeah, you see these posts, oh, I've done all my marketing, etc. But how does it really work? Right.

Sean Kenny [00:19:49]: How can I do it for my work? And so having kind of an open culture to share and learn together.

Demetrios [00:19:55]: I've heard people that are in this field tell me, I don't know if I should be learning RAG or agents or tool calling. I feel so behind, and in my day-to-day I'm working on a recommender system, but I'm not sure if I can plug in AI in some way. And so that FOMO is really real.

Sean Kenny [00:20:19]: The culture is very much an enabler and an accelerant. But I think, and that's what I spent most of my time on, especially in the beginning, is the actual education part, because it's very easy to tell people, oh, go and try and use it, but it's very, very hard for most people to do that. Right. It's non-obvious. You don't even know how to start. At some point I gave a talk on how to build AI products for users that actually work and are useful. And one of the things that comes up so often, and many companies do this, and part of it is early experimentation...

Sean Kenny [00:20:54]: But you get a chatbot, and there's a chat input field, and it says, this is Chatbot X, ask it anything. And as a user you're like, anything? Right. So you ask the first thing that comes to mind. And usually it's not something that's helpful to you, it's just the first thing that comes to mind. And even if it's helpful to you, there's no guarantee that the agent or whatever chatbot is actually able to do it. But you have no other way of figuring that out other than throwing things at it. And then it'll fail and you'll be like, well, ask anything?

Sean Kenny [00:21:25]: Clearly that's not the case. So I'm just going to ask nothing, because it doesn't work. Yeah.

Demetrios [00:21:29]: And then you lose the user, and that user experience is not ideal. I've been banging this drum of, chat is not the ideal way to interact with AI, and I wish there were easier ways. I know that you have thought about building products; like you said, you gave a talk on it. You also saw some user adoption challenges. And so we've talked at length on how you can build more support for people who are trying this, so that they can have that stubbornness to break through and realize the value of it. Can you talk to me a bit about some of the things that you did to try and help folks get through that filter?

Sean Kenny [00:22:14]: Yeah. So I think in terms of interface, we've done a few things like follow-up questions. I think lots of it is kind of experiments around chat interactions. The fact of the matter is that chat is still kind of predominant, also in the way people think about it. And so changing it now is going to make it even harder for many people to understand, because many are just getting to the chat part. Right. And so if we change again, we'll break something else. I think for us it's been a pretty tight balance, because we want to move fast with the product.

Sean Kenny [00:22:47]: Everything is happening all the time. We have a small team, so we need to be able to build new capabilities and enable our power users. And so we've actually spent a lot of time on hands-on support. In the beginning I did a live webinar every two weeks with users, and we did lots of hackathons and workshops and training sessions. We have the benefit that in reality we have a pretty contained, small audience. Right. I think we look at about 30,000-ish employees. So, not so small.

Sean Kenny [00:23:18]: No, no. But I'm not too worried that we have a potential of, I don't know, 20 million people out there that we need to educate. It's something that's much more feasible to do. So we organized big events, like a Toqan day, and we organized training sessions, we organized cross-company team events. So a lot of it has been kind of hands-on, because it's a very easy thing to keep talking about, and there are parts that we can build in the platform, but what I really wanted to get over is the initial reluctance or fear of using something. And some of these webinars... I'll give you one that actually turned out to be really funny, or funny in the sense that it worked, because it actually started as an accident. We had a case prepared, and we joined with a company that had just got access to Toqan. Or I thought they did.

Sean Kenny [00:24:04]: And so we joined this webinar. I think there were about 200 people, and it was a hands-on case. Right. So I'm like, okay, here's the case. This is how it works. We introduced a bit of the product before, but like, just go try. You have an hour. Our hypothesis was nobody takes time to experiment and try.

Sean Kenny [00:24:21]: So if we get them to try in a dedicated, safe environment, then we get the first kind of fear response out of the way, and now people are much more comfortable to start. Okay. But in that session particularly, it turned out that some part of the onboarding had gone wrong, and so they didn't have access. And what I then decided to do, in a mild moment of panic of having 200 people sitting in a webinar, being like, okay, what are we going to do? I just did the case live myself. And it was both one of the best-received webinars as well as one of the highest user uptakes afterwards. Because the way I did it was just kind of walk through: okay, now I'm thinking this, this is how I'm going to use it. And so just this kind of live interaction, almost like an example, even if the case isn't super relevant for the user, turned out to be incredibly appealing. And so, just going back to the product question, we've done some experimentation on the product, and part of why we've enabled our app in the web as well, or why we've done a big push for the web, is to make it more usable and user-friendly as well.

Sean Kenny [00:25:20]: It can be a bit tricky, but a lot of the education and a lot of the early user guidance has come from actual training and guidance and materials.

Demetrios [00:25:29]: It's almost like you have one aspect where you're trying to make the technology and the user interface as intuitive as possible, but then on the other side, you're evangelizing as hard as possible to have folks jump on the train and get involved. And I know there's this trade-off between building for the power users versus building for everyone else, making it as intuitive as possible for the lowest common denominator versus adding all the features that the people ahead of the pack want. How have you looked at that? Because it seems like it's going to be hard to have wide adoption if you're constantly chasing and adding features for the people at the front of the pack.

Sean Kenny [00:26:20]: Yeah, it's a difficult balance, because you want to make sure that you keep your power users. They are your most important users, I mean, in a company-political way, in terms of metrics and impact, but more so in a product way, because these are the guys who are going to really push you to the next level. And they're the ones who will figure out what could work and how that could look. And they're the ones who really give you good feedback, and they will educate all the other users if they're able and willing to share. And so you really want to keep those users around, and if you don't build the next thing for them, you risk them going away. So that's kind of always been forefront of mind. And then it's been, what are the minimum product guidelines and usability that we need in order to get other users on board? And we've under- and overshot, and so we course-correct as we get user feedback. But I think the balance has been mostly, what's the minimum to get users started while we build the next big thing? And the cycle for building the next big thing usually is, can we make it work at all? Is that something that we can make work? And, now it's somewhat commonplace, but at some point in the past we built a text-to-SQL agent as a verticalization on the platform, and the first question wasn't, oh, how do we make this useful, how do we make it intuitive? It's still neither of those. The question at that point in time was, is this feasible at all? Right. Can we make this work? Okay, then we made it work, but we did it by building it ourselves.

Sean Kenny [00:27:52]: So now we're like, well, we are not the data experts. The power users, the data engineers, the analysts, or the end user in commercial that wants data, they're the experts in their domain. So now, how do we build this platform in a way that they can use it, instead of us setting up everything? So that's the next step. And then at some point down the line you would look at, well, okay, how do we make this super easy to use and super intuitive? But the first question was just, are we able to get the users to set up and answer questions by themselves? And so I think that's kind of always the push. It's always a balance. And I think a lot of the reason why many AI products are not the most intuitive is because it's a very tough challenge to crack, and it comes at the cost of other features.
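The text-to-SQL verticalization Sean mentions can be sketched as two pieces: domain teams register their own schema and data, and an agent turns a natural-language question into SQL against it. This is a toy sketch under stated assumptions, not the actual Toqan platform; the LLM call is stubbed out with a hard-coded query:

```python
import sqlite3

# A domain team registers its own table; in the self-serve model Sean
# describes, setup belongs to the data experts, not the AI team.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 32.5)])

def answer_data_question(question: str) -> list:
    # Stand-in for a model call that combines the question with the
    # registered schema to generate SQL; here the "generated" query
    # is hard-coded so the sketch stays self-contained.
    generated_sql = "SELECT SUM(amount) FROM orders"
    return conn.execute(generated_sql).fetchall()
```

The feasibility question ("can we make this work at all?") lives in the generation step; the usability questions ("can teams set this up themselves? is it intuitive?") live in who owns the schema registration and how errors surface.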

Demetrios [00:28:35]: It's almost like you have to optimize for the early adopters right now.

Sean Kenny [00:28:39]: Yeah, you do. And in a way you put a lot of requirements on the user. And so there is the balance of where do you do that and where not. But it's also in the hope and expectation that the users do spend time on learning the system.

Demetrios [00:28:58]: Yeah, because one of the challenges I heard was that folks would give up after trying to do something, or asking Toqan to do something for them, and when it wouldn't do it the first time, they would try again, but you don't really know: am I not able to ask this correctly, or is this just not possible?

Sean Kenny [00:29:21]: Yeah, yeah. So that's super difficult to navigate as a user. And there's a couple of things we've done around that for user experience. One is just make the system much more resilient itself. So when we built the first data analysis, not the SQL connection, but just the code-execution functionality in Toqan, it would fail almost all the time because it couldn't load the file that you shared, or it couldn't handle the formatting, or it would time out because it installed some absurd library. And you as a user would never know this is something that you can't use. I mean, you could theoretically solve it in a prompt, or, oh, it's maybe because.

Demetrios [00:29:59]: I didn't ask it the right way.

Sean Kenny [00:30:01]: Right. Or like, oh, maybe this is not even enabled. Right. I've seen a video of it, but maybe that's something different. Maybe I misunderstood. It's very easy to.

Demetrios [00:30:08]: I'm not using the newest version of Toqan, but.

Sean Kenny [00:30:11]: Right. And so from a user's perspective, these things can be hard to interpret. So what we made sure is that your first touch point is a lot better. And that by itself actually helps a lot, just in terms of how the system responds, both on being able to succeed, but also when it fails: like, hey, I'm sorry I couldn't do this, but here is a set of things that I can do.

Demetrios [00:30:35]: So your first touch point being like, instead of ask me anything, it is ask me this, because I can do it.

Sean Kenny [00:30:43]: Yeah. So that's the second point. And that's more in the onboarding of users being like, okay, well, here's a checklist of things we want you to try because we know that these are generally valuable as capabilities. And so some of these might be really interesting for you, but it gives you a good ground of some successes. And that's more on the onboarding side of, let's say here's a list of 10 things and we can go over how we communicate with users in a second as well. But summarize an article and it's like, oh, well, it can do that. Okay, that's one win. And even if the article isn't relevant now, you as a user have one win.

Sean Kenny [00:31:18]: It's not that you ask the first question, it fails. You try again, it fails. And you're like, well, I give up. Right?

Demetrios [00:31:23]: Yes.

Sean Kenny [00:31:24]: So it's that. But then on the product side, really, or more on the technical side, making it much more reliable, catching a lot of the edge cases that could go wrong in order to help the user, especially when they're actually starting their own work, not immediately get frustrated.

Demetrios [00:31:39]: Now, as you're looking forward, I know that you have thought a lot about what to do next. What's the general consensus on how you're going to continue the progress?

Sean Kenny [00:31:51]: Yeah, so right now we're very much in the AI assistant category: a chat interface that allows you to kind of have conversations with AI, use tools, and complete tasks. What we're looking at over the next five, six months is a step change from our current paradigm to an AI workforce. So we can talk a bit about what that means. But the fundamental question we've started to ask ourselves, and there's hypotheses baked into this, is, as a platform in Prosus, because we are a horizontal solution, how do we allow every employee in Prosus to build AI employees that can complete their work? My fundamental belief, or rather an observation, and these terms are flying around a lot, is that AI employee, especially with current technology, is mostly a marketing term. It's something that you use to sell it to a stakeholder, or something that you use to sell it to your investors. AI employees as such don't exist, because we as employees are actually very complex, something that I think in an ideal world you wouldn't want to replicate.

Sean Kenny [00:32:59]: If I think of hiring an actual employee, then hiring is one of the most imperfect workflows there is. Right. Because I have a team of seven people. I need to wait until I have work for at least eight before I hire someone. And then I need to find the person with the right skills to do all of that work that's piled up. In my ideal world, I just look at that work as jobs to be done. Right. And very much in the idealistic product thinking of, if I hire an analyst, one of his jobs is cleaning data.

Sean Kenny [00:33:26]: One of his jobs is to generate weekly reports, and there's many, many more things he does in between. But there are some chunks of jobs to be done. And I believe that almost all jobs you can break down into at least many of those; there's going to be some fillers in between, but jobs to be done. And if we actually take a look at jobs to be done, and we try to do that categorically, then almost every job to be done starts and ends either with another job to be done, another employee, or in a system. And so a lot of what we are doing when we're completing our jobs is we create, transform, or move information around between systems. And just to make that more practical, we can look at something like, I might store my user research in Google Drive.

Sean Kenny [00:34:13]: So I'll jump on a Zoom call; there's a call, and of course that one I'm at the moment doing myself. From that interview, I get a transcript and a recording and notes. And so those are already kind of pieces of information that are stored in a system. If I now wanted to get insights from those notes, I take those notes, and I can either extract some insights manually myself or, currently, do it within some sort of AI system. And those insights either go directly on the product backlog or they go into another place, maybe in Confluence, somewhere else. And you can break this down for almost every job.

Sean Kenny [00:34:45]: Right? So marketing: you have your campaign data somewhere in, let's say, Google Analytics or Google AdWords, in any place that manages your paid-ads campaigns. You take that campaign data, you analyze it to see what campaigns are doing well, and then from that analysis you again create a file in Google Drive, probably Slides, or if you use SharePoint, it could be PowerPoint, maybe a Looker dashboard, something like that. Yeah, and you would use that. But I think the important part here is we're moving it into a report that we now give to someone else. Right. And that other person now looks at the report, comes up with a set of recommendations, which could be in an email, another system, that we then send back to the person that updates the campaigns in Google Ads.

Demetrios [00:35:36]: Yeah.

Sean Kenny [00:35:36]: Okay. And so in all of these processes, most of what we've done is we've stored or moved information around in systems, and of course we transform it along the way. And so the way we look at the work right now, or at the kind of AI workforce paradigm, is that in an ideal world, probably not the next five, six months, but an ideal world, we see that companies run drastically differently with an AI workforce, because there are kind of three core components. One is the systems that we use today, like your Salesforce, your Looker, your own code bases, GitHub, et cetera. Then the agents, or AI employees, that are able to complete tasks in between these systems. And so when we say AI employees, again, it's probably more on a jobs-to-be-done basis. They might grow in scale, but these are basically connections between the systems in order to fulfill tasks that need to happen. And then the third part, and that's the architect role, is you'll probably get a subset of extremely talented domain experts who, instead of doing the work, think of, how should my marketing department work? What are the different elements that need to be in place? And so they're going to be much more in the process of building these systems, building these employees, monitoring them, making sure that they're up to par and that they're completing the tasks properly. But it becomes much more of a machine that they're kind of designing and running, rather than doing the tasks themselves.

Paul van der Boor [00:37:05]: Hmm.

Demetrios [00:37:06]: So I have to understand my field deeply, but then I also have to understand how and where I can plug in the assistance. In a way it reminds me of like, how is this different than Zapier?

Sean Kenny [00:37:21]: Yeah, so I think Zapier is a difficult one. It's the right way of thinking about it, because it is connecting systems.

Demetrios [00:37:31]: And let's just assume Zapier is a great tool and it works. Which we can go down a whole different tangent on how shitty that is.

Sean Kenny [00:37:38]: But let's not make that the topic. Yes, I think you introduce a couple of new elements. Zapier integrations, even for technical people, are quite difficult, because you need to understand every connection part of the system on a hyper-nuanced level. And so these connections are very fragile. You need to deeply understand them in order to get them right. They're very prone to break if something changes. It's very difficult to build any sort of, almost, conditional logic. I don't want to say conditional logic, because you can write your own code in between, but if it's not straight moving information, or always take this number and multiply it by two, this cognitive aspect in between that currently employees are doing, that's borderline impossible to replicate with Zapier.

Sean Kenny [00:38:28]: And I think the way we're looking at the system now is, as a platform, we need to provide the architects with the ability to easily implement the connections. And in part it's easy because, if we abstract away connections with tools, then users should just be able to click write email. Right. And they don't have to understand all the endpoints that go into email creation. It's just write email as a tool in that agent, and update Looker dashboard. And some of these are more complex and some are easier, and it's going to take a while for us to build up all the relevant connections. But we as a platform should abstract that, and then users should not be too worried about the specific technical connections, but more about the flow of information and the actual completion of the task.
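The abstraction Sean describes, a user-facing tool like "write email" that hides the endpoint choreography underneath, can be sketched roughly like this. This is a hypothetical illustration, not Toqan's actual code; every function and name here (`_create_draft`, `_send_draft`, the `tool` registry) is invented for the example.

```python
# Hypothetical sketch: a user-facing "tool" that hides the raw
# integration endpoints behind a single named action.

TOOLS = {}  # the platform's registry of tools an agent can be given

def tool(name):
    """Register a function under a user-facing tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

def _create_draft(to, subject, body):
    # Stand-in for the real mail API calls (auth, MIME encoding, etc.)
    return {"to": to, "subject": subject, "body": body, "status": "draft"}

def _send_draft(draft):
    # Stand-in for the "send" endpoint.
    draft["status"] = "sent"
    return draft

@tool("write_email")
def write_email(to: str, subject: str, body: str) -> dict:
    # The architect only attaches "write_email" to an agent; the
    # multi-endpoint flow underneath is the platform's concern.
    return _send_draft(_create_draft(to, subject, body))

result = TOOLS["write_email"]("client@example.com", "Q3 update", "Hi there,")
```

The point of the sketch is the registry boundary: the agent (and the user configuring it) sees one verb, while the platform owns the fragile endpoint sequence behind it.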

Demetrios [00:39:14]: And do you see it as something where the user is going to have choices when they're working with certain systems or tools? Like, if it is Gmail, you have choices of writing emails, adding people to BCC. How do you think about that?

Sean Kenny [00:39:30]: We have that today, actually. We're about to go live with our first set of integrations. So we've tried integrations in the past. We can talk a bit about user discovery as well, but users right now scream, oh, we want Google Drive, or, let's stick with Gmail, we want Gmail. And we're like, okay, what does that mean? They're like, I don't know, but I want it. Right. It's very abstract in many cases, and users know that it might be interesting because that's where a lot of their work is, a lot of their pain is. But they don't know exactly what. And what we're building now is a user-level authentication and integration system, so that users can connect to their Gmail, and we now allow them to select the relevant tools.

Sean Kenny [00:40:13]: And so the tools themselves have parameters, because it's basically a layer on the integration, and so we're a bit dependent on what the integrations make available through endpoints. But for the most part, agents get that information, as in, this is what the tool can do, and so they can ask the user questions. We'll build a learning-over-time mechanism, for example: oh, you might always want your manager CC'd if you talk to this client, something like that.

Demetrios [00:40:43]: So the end user is going to have a select set of things that it can do with the different services. But on the engineering team on your side, if you wanted to, you could build out a tool or build out something that someone can do, right?

Sean Kenny [00:41:01]: Yeah, exactly. And the way we also communicate part of that to users right now is, let's say, GitHub. GitHub is probably a good example of an overload of endpoints. Right. So we're looking at 100-plus endpoints that the platform can have. We've selected what we think is the most relevant for now and built tools around that. And I think we are ending up at around 20. But even then, that isn't easy for users to navigate.

Sean Kenny [00:41:33]: Right. It's like, okay, well, you have list repos, you have get PRs, you have get comments on a PR, you have make a new comment, review a PR, and something else. Right. And so the way we're communicating at least part of that to users is we've started to group them into skill groups, because we say, in order for you to understand all of the nuances of these tools, it's going to be somewhat of an overload for most users. So we say, okay, if you want Toqan to be able to manage your PRs, click connect here, and it's going to automatically connect to six of these, because it needs to get access to your repo, it needs to be able to read the comments, it needs to be able to add a new comment, et cetera. And so we started to group these; we'll adjust them along the way, but this kind of framing around skills is quite helpful, and then it becomes a skill set. Right. This version of Toqan needs to be able to do these things in order to complete my task.
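The skill-group idea, one click on "manage my PRs" enabling a bundle of low-level tools, can be sketched as a simple mapping. The tool and skill names below are hypothetical stand-ins, not Toqan's actual tool list.

```python
# Hypothetical sketch: bundling endpoint-level tools into user-facing
# "skills", as described for the GitHub integration.

# The ~20 endpoint-level tools the platform exposes (abbreviated).
GITHUB_TOOLS = {
    "list_repos", "get_repo", "get_pr", "list_pr_comments",
    "create_pr_comment", "submit_pr_review", "merge_pr",
}

# One skill maps to the six tools that job actually needs.
SKILLS = {
    "manage_prs": ["get_repo", "get_pr", "list_pr_comments",
                   "create_pr_comment", "submit_pr_review", "merge_pr"],
    "browse_repos": ["list_repos"],
}

def enable_skill(agent_tools: set, skill: str) -> set:
    """Attach every tool a skill needs, so the user never reasons
    about individual endpoints."""
    missing = [t for t in SKILLS[skill] if t not in GITHUB_TOOLS]
    if missing:
        raise ValueError(f"unknown tools: {missing}")
    return agent_tools | set(SKILLS[skill])

tools = enable_skill(set(), "manage_prs")  # one click, six tools attached
```

The design choice is that the skill is the unit the user sees and the tool list is the unit the agent runs on; regrouping skills later (as Sean says they will) doesn't change any agent-facing contract.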

Demetrios [00:42:27]: So, just closing the loop on the idea of the system, and basically everyone needing, or folks being able, to think as an architect: you're now thinking, whatever your job is, you have certain repetitive tasks in your day-to-day, and let's figure out how we can throw an agent in there to make your life easier.

Sean Kenny [00:42:57]: That's the starting point. Yeah. And, sorry, what's next? What's next? Well, we first need to get this to work, right? So we're now in the loop of, can we get this to work at all? Can we make it something that users can easily set up? And then can we make it so intuitive that it's going to be a no-brainer? Right. And so we're still at the start of making it work. We're about to release the first wave of integrations, and we already know that we need some sort of memory or learning mechanism for this, because of even things like tool-call details. I'll give you a very, very simple example that I've been using to illustrate this. Right now, if I ask Toqan, please schedule a meeting with my manager, I'll get a, sure, who's your manager? Even if I've done the same thing 10 times. And then I'll be like, oh, it's Yanis.

Sean Kenny [00:43:39]: And then it's like, okay, well, what's Yanis's email? It's the same thing every time, right? And it's actually just a very simple example of something that is just one parameter in the tool call, but without this parameter we're not going to be able to do anything. And if the ambition is to do complex multi-step tasks between systems and we can't get simple parameters right, then we're not going to be able to do it. But I think this is an interesting point where we look at memory quite differently than I think many other platforms, especially if you look at how ChatGPT, et cetera, do memory. That's more personalization, and we're looking at how we can learn to do tasks more consistently and in a better way by figuring out what the right inputs and outputs for tool calls should be. Power users are screaming for multi-agent; they're screaming for observability and evaluations on these kinds of more complex systems, et cetera. And then, yeah, more beginner users are looking for marketplaces, right? They're like, I don't know how to start, I want to see someone else share it, and then I can maybe copy and update it.
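The "who's your manager?" example describes memory as learned defaults for tool-call parameters rather than chat personalization. A minimal sketch of that idea, with all names (`ParameterMemory`, `schedule_meeting`, the example email) invented for illustration:

```python
# Hypothetical sketch of task-level memory: remember resolved tool-call
# parameters per user, so the agent stops re-asking known values.

class ParameterMemory:
    def __init__(self):
        self._store = {}  # (user, tool, param) -> learned value

    def recall(self, user, tool, param):
        return self._store.get((user, tool, param))

    def learn(self, user, tool, param, value):
        self._store[(user, tool, param)] = value

def schedule_meeting(memory, user, attendee_email=None):
    """Fill a missing parameter from memory; ask only if unknown."""
    if attendee_email is None:
        attendee_email = memory.recall(user, "schedule_meeting", "manager_email")
    if attendee_email is None:
        # First time: the agent has to fall back to asking.
        return {"action": "ask_user", "question": "Who is your manager?"}
    return {"action": "create_event", "attendee": attendee_email}

mem = ParameterMemory()
first = schedule_meeting(mem, "sean")  # no memory yet: has to ask
mem.learn("sean", "schedule_meeting", "manager_email", "yanis@example.com")
second = schedule_meeting(mem, "sean")  # parameter recalled, no question
```

Keying memory on `(user, tool, parameter)` rather than on conversation text is what distinguishes this from ChatGPT-style personalization: the same mechanism scales to any learned tool-call input.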

Demetrios [00:44:41]: Which is what I thought. I'm obviously at the lowest common denominator because I thought, man, when I go to Toqan, I would love to see, hey, you're in XYZ job. Here's a few great flows that you might want to get started with. We can connect this system with this system and then do this small task for you.

Sean Kenny [00:44:58]: Yeah, and that's coming when we have those connections and we know they work. It's been something we've steered away from before, because it's very difficult to share something that's actually relevant to you. I mean, it's very easy to overload you with things that someone in your kind of role has done. But every employee's workflows can be so different, and that's also why we think that the agent creation needs to be bottom-up for the most part. It can be quite difficult, beyond examples, to share, hey, this is relevant for you, because someone might work in Tableau, you might work in Looker, they might connect to a specific database, and you get your data fed from finance through an Excel sheet. Like, your workflows, your ways of working, are going to be drastically different. And this is probably one of the more similar kinds of flows.

Sean Kenny [00:45:44]: And so sharing has been, here's an interesting example where we make trade-offs in the product. One of the bigger things we've built over the last quarter is a configurable version of Toqan. The assumption was, if we allow users to configure a Toqan for their specific task, not just the Toqan that they have available by default, and they can give it permanent knowledge, they can give it instructions, et cetera, then they're going to be able to build a more powerful version that, even without integrations, starts to be able to do some of those tasks. And while that's true, one of the things that we've not built out far is sharing. And so we thought, well, we will build out sharing when we know that this configurable version of Toqan is valuable. We know right now it's still hard to set up. There aren't a lot of them that are easy to use.

Sean Kenny [00:46:34]: There aren't a lot of them out there that are providing a lot of value yet. There's still a big gap in what users need. And so building sharing now has no value, because, yeah, sure, you have the ability to share, but there's nothing to share. Right. So make it work first. And so we have the simplest version of sharing right now, which is a horrible user experience: the admin clicks on share, they generate a link, and you can share the link with someone else, and now you can join that custom version of Toqan. User-flow-wise, it's far from ideal, but it works.

Demetrios [00:47:02]: And it's in its first iteration. I don't think you need to go that hard on yourself, because it's.

Sean Kenny [00:47:09]: Yeah, sure, but it can be much better for actually quite little effort. Right. But the concept of configurable versions of Toqan isn't where it needs to be yet. So building out sharing, just like the marketplace idea, sounds nice, but the reality of it is probably going to be quite underwhelming, in theory, because.

Demetrios [00:47:27]: There's so much customization. And that gets to this other question that was going on in my head on why you don't think about just verticalizing certain tools or Toqan products. I know that you thought, all right, well, with data analysts, we're going to verticalize that.

Sean Kenny [00:47:46]: Yeah. So it kind of just goes back to a bit of business reality, where, because of our target audience, we are somewhat limited in the addressable market, in a way. So we have 30-ish thousand employees. For user experience research, we have one big advantage in terms of our users, which is that, because they're all portfolio companies, it's incredible to.

Demetrios [00:48:09]: Get on the phone with you.

Sean Kenny [00:48:10]: It's incredibly easy to reach them. Well, they have to, they want to. There's a lot more trust immediately. There's a lot more willingness to spend time.

Demetrios [00:48:17]: I think you're not trying to sell them anything.

Sean Kenny [00:48:19]: Yeah, exactly. We're there to listen to their complaints. And when I started, there was a lot of us reaching out. At the moment, my first agent might be something to manage my inbox, because we get so many requests all the time from all kinds of different teams and it's hard to manage. But it's amazing in the sense of seeing how many users are actually interested in this. And the benefit we get from that is we get to take a step back and abstract a bit of, well, okay, lots of people are always asking for these things. I think this is kind of the product-management dream: okay, we have 200 users from different companies and different teams and different workflows and different departments asking for something that we could conceivably try to build, like this. Okay. And then one tricky part in the user-research balance is that we want to kind of build ahead for the future, right? So we want to build new things that are barely possible or not really easily conceivable.

Sean Kenny [00:49:12]: And so users are not really going to ask for that. At the same time, users see all of this AI technology and they're like, oh, I want an AI for my legal database. And you're like, I don't know what that means. But if you ask them, they also don't really know what it means.

Demetrios [00:49:26]: They just want to plug in AI because they know it's going to be useful.

Sean Kenny [00:49:29]: And then you go to them and you talk to them, and you're like, well, okay, when you say this, do you want it to just have access to the documents? Do you want it to generate something for you? And on generation, should that work on a template or not? They're like, I don't know. And you're like, well, okay. It's a very iterative process. And I think one of the things we've learned is that we speak to users a lot, and we have things like a Slack channel with 1,300 users in there, so that if we have an idea or we have a question, we just chuck it in there and we get lots of feedback right away. But what we've learned is, build a first version as quickly as possible, like the integrations. Our assumption is that the user-level authentication on top of a layer of APIs is good enough for many workflows. We'll see. So we want to give that to users as quickly as possible, see how they react, what they complain about, and then if we need more, we'll build more. If it's good enough, it's good enough.

Sean Kenny [00:50:25]: What we know is that we need to be able to talk to their systems so this is our first best guess at how we can do that in a way that we can actually scale.

Demetrios [00:50:34]: How are you using it? Break down the flow of how you are using Toqan in your own job.

Sean Kenny [00:50:43]: So I have a flow that might seem like a home-office flow, but I'm in the office every day. And I'm saying that particularly because, when I come to work in the morning, the first thing I do is I bunker down in a meeting room and I just use the voice input to get all the thoughts out of my head into a structured list. Feature-wise, it's not that impressive, but it's a very interesting flow, because I hate typing, and I think every user cuts corners in typing prompts. But if I can just get out all my thoughts in one go, then I start a lot of the things that I know I need to get done for the day. In the first 15 minutes of my day, I might start four, five, six different conversations. And all I'm doing is providing the relevant context by just babbling on for five minutes.

Sean Kenny [00:51:37]: That creates a massive prompt that I can then use as a base for tasks that I want to do down the road. And this depends on what I need to do, but it can be creating presentations and then kind of reports and structures like that. It can be preparing for webinars, doing some research. It can be doing a product update. And I still find myself, my head, being the best source for a lot of this data. I can't easily point it at a Google Doc, but it's very easy for me to just summarize what's in the Google Doc and provide the Google Doc, and then that's a great foundation.

Sean Kenny [00:52:14]: And then, for example, presentations. There isn't a single great tool to build presentations that I've seen. All of them are.

Demetrios [00:52:24]: I think my favorite so far has been Gamma.

Sean Kenny [00:52:26]: Yeah, I've seen that as well. But I think it falls into the repetitive consulting-style slides, where it creates three columns that have headers. And it's always so far from something that I would want to use. And especially if you want to explain something, it's not easily able to visualize and make connections. It's more there to break down content and put it in a slide that you could present, but it's not easily telling the story. So now I've just started having Toqan create HTML reports. I'm going to have a roadmap session next week with our users, and the plan is that I just have an HTML page that I click through and actually use as the presentation, which isn't a presentation, but it works, and it's a lot easier for Toqan to generate it.

Sean Kenny [00:53:13]: Then there's things like data analysis. I don't write a single line of code; I'm banned from all of our code bases and from making any technical comments. So any sort of review of data is a lot easier for me like this than if I have to go to Google Sheets or Excel.

Demetrios [00:53:31]: Thinking about how you were talking earlier on the architect, and what you would do when it comes to it: you give this brain dump through voice technology, it then transcribes it, you fire it off as prompts, you get that. How do you see that system working in the jobs to be done and all these tasks?

Sean Kenny [00:53:54]: Yeah. So I think there's two things missing for me to lean into it and eat our own dog food. One is just that you can't yet configure a version of Toqan that has integrations. We'll have to add that soon. And so right now there's just one Toqan, and I can't have, oh, this version of Toqan writes my product updates and has all the right sources, and this version of Toqan does my user research and has all the right access.

Demetrios [00:54:17]: And so you see it as different Toqan agents, like GPTs?

Sean Kenny [00:54:21]: I think so, yeah. GPTs, different agents essentially, because we know that completing tasks, especially, because we're not really talking about read an email and respond. Right. But in an ideal world, we talk about, do a product update by looking at all the relevant things that we've said on Slack, and look at all the things that have gone into GitHub and PRs, and look at the Jira board, and then write it up as an email, send it to two people from the team to see if there's anything else they want to add, and then put it together as a Slack block that we can send out through a messaging system. And that actually is still a somewhat simple task, because it's not simple, but it's very clear, step by step. There isn't a lot of thinking in between.

Sean Kenny [00:55:09]: If I talk more on strategic planning, for example, it's like, well, okay, sure. I want you to know where the OKRs are. I want you to find the latest product plan. I want you to be able to get a current status of the product. I want you to get the data for the product. I want you to do outside research. So these are much more complex tasks to combine. And we know that, at least with current technologies and model performance, that becomes possible, but only if you have an agent that is actually very tailor-made to do that specific thing. Otherwise it can go all over the place, and consistency goes down.

Sean Kenny [00:55:40]: So it might work two out of three times. And so configuring a version that is tailored for the job to be done is important and that actually allows you later on to share it. Right, because if I just share my Toqan, it's going to be harder because the likelihood of someone doing exactly the same jobs to be done that I'm doing is somewhat low. But managing a backlog, every product manager does. Doing user research, every product manager does. Right. And so that makes it also easier to scale.

Demetrios [00:56:08]: And then there's a version of this where you share yours with me and I can just copy it and swap out a few things because I want.

Sean Kenny [00:56:16]: To adapt it a little bit differently. You can. Well, there will probably be two ways. One is to just use it as well. So just join, use it. And that is probably more relevant within a company. So if I have a data analyst in a company, the likelihood that the next data analyst has very similar systems and access and rules is high. If we talk about sharing across companies, it's probably more like a template or a blueprint that you could update and change: change out systems, change instructions.

Demetrios [00:56:45]: When we talked to Floris last time, he mentioned there was a hackathon you all did, and the folks that won the hackathon built a Jira agent. Everything worked really nicely in the hackathon, but as soon as you plugged it into a real-world Jira scenario, the agent had no context; it didn't understand what was going on. How do you think about that with the integrations, so that it actually works?

Sean Kenny [00:57:13]: Yeah, I think that hackathon was the first time we did a hackathon around tool building, so there's a lot of things we learned around tool building. Number one is that if we build a tool assuming how a system works, then things are not going to turn out well. And so one of the problems is that if we build a Jira agent ourselves as a team, and it's another reason not to verticalize, we just assume everybody uses Jira in the same way, or all the boards are the same, or everybody has a clean backlog, or everybody uses labels, whatever Jira looks like to us, then this is not going to work. And so our thinking is, the core building block is access to Jira, and in the configuration you can tell your version of Toqan: okay, well, we just use Kanban, so there's one continuous sprint, and there's just To Do, Refined, In Progress, Done.

Sean Kenny [00:58:09]: And those are the only tickets that matter. And whenever you create a ticket, follow these instructions.
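The idea of scoping a generic Jira connection with per-team conventions, rather than shipping one opinionated "Jira agent", can be sketched as configuration plus a thin wrapper. Everything here is hypothetical (the board name, column names, and the bracket-prefix convention Sean mentions later are illustrative, not Toqan's actual schema):

```python
# Hypothetical sketch: the core building block is access to Jira;
# each team's conventions live in configuration, not in the agent.

JIRA_CONFIG = {
    "board": "TOQAN",
    "workflow": "kanban",                 # one continuous sprint
    "columns": ["To Do", "Refined", "In Progress", "Done"],
    "use_labels": False,                  # this team never uses labels
    "ticket_instructions": "Prefix infra work with [infra] in the title.",
}

def create_ticket(config, title, column="To Do", labels=None):
    """Apply the team's conventions before calling the (stand-in) API."""
    if column not in config["columns"]:
        raise ValueError(f"unknown column for this board: {column}")
    if labels and not config["use_labels"]:
        # Learned convention: fold would-be labels into the title instead.
        title = "".join(f"[{label}]" for label in labels) + " " + title
        labels = None
    return {"board": config["board"], "title": title,
            "column": column, "labels": labels}

ticket = create_ticket(JIRA_CONFIG, "Fix deploy pipeline", labels=["infra"])
```

Because the conventions are data, the same `create_ticket` tool serves a team with labels and sprints and a team with a bare Kanban board; nothing about Jira usage is baked into the agent itself.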

Demetrios [00:58:14]: So it's really about access to the integration, or really about the integration itself, but scoping that integration and shaping it in the way that you want to use it.

Sean Kenny [00:58:27]: Yeah. And then the main part there is, what are the actions that users want to take in systems? Right. So if we go back to the skills metaphor, then it's, if I have my version of Toqan, what's the particular skill that that employee should have? Right. It should be able to create the ticket. Okay. And now it's more contextual: how do you guide it to create the right ticket at the right moment in time? And that's another reason for this kind of knowledge learning. Right. It's parameters in a tool call, saying, this is the space we use in Jira, this is the board we use in Jira. And then you learn over time that, when I say, hey, for the Toqan team, can you please add an infrastructure ticket that does XYZ, it's like, okay, well, I know we don't use labels.

Sean Kenny [00:59:09]: We've learned that before; I've made that mistake before. I'm just gonna put a square-bracket infra in front of the ticket name and put it into To Do on the same board we use for everything else. But this is learning over time, and this is also our first draft; there are things we'll be able to make easier. If I think on more like a two-year horizon, and also at Prosus, more from an investor perspective, then the really interesting part comes when we can say: we can buy a company, and within a month we can start deploying an agent fleet across the organization. And that probably means some crawling and indexing and learning from existing data, not having to rely only on users.
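The knowledge-learning loop Sean describes, where a past correction ("we don't use labels") changes how future tickets are built, could look something like the sketch below. Everything here is invented for illustration: the function, the dictionary shape, and the convention values are assumptions, not Toqan's real API.

```python
# Hypothetical sketch: applying per-team conventions learned from earlier
# mistakes when assembling a ticket-creation tool call.

def build_ticket(title: str, team: str, learned: dict) -> dict:
    """Assemble a Jira ticket payload using a team's learned conventions."""
    conventions = learned.get(team, {})
    if not conventions.get("uses_labels", True):
        # This team doesn't use labels: encode the category in the title instead.
        prefix = conventions.get("title_prefix", "")
        title = f"{prefix} {title}".strip()
    return {
        "title": title,
        "board": conventions.get("board", "default"),
        "column": conventions.get("default_column", "To Do"),
    }


learned_knowledge = {
    "toqan": {
        "uses_labels": False,          # learned from an earlier correction
        "title_prefix": "[infra]",
        "board": "TOQAN",
        "default_column": "To Do",
    }
}

ticket = build_ticket("Upgrade GPU nodes", "toqan", learned_knowledge)
print(ticket["title"])   # [infra] Upgrade GPU nodes
print(ticket["column"])  # To Do
```

The design choice mirrored here is that the correction is stored as data, so the same generic ticket-building code serves every team without hardcoding any one team's habits.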

Demetrios [00:59:51]: So crawling and indexing, talk to me. That's an interesting piece too, and it's fascinating to think that you could do that: learn from the systems that are already in place, learn how they set up their Jira, whether they use tags or just use brackets.

Sean Kenny [01:00:08]: So it's something we've deliberately not touched, because looking at the current set of hypotheses, we say the first AI employees are going to be bottom up. And that inherently means very, very narrow in scope, and again micro. And for the AI employees that we've been looking at as examples and as use cases for users to build up, the relevant context is almost always hyper-focused on just completing that task. Let's say I'm doing a flow somewhere in product. It matters not at all that sales currently has an event going on somewhere else in the company, or how an operations team structures their dashboards. Absolutely no relevance to this specific job I'm doing. So context in that sense is overrated, as in company context. Maybe a hot take, but the specific context is the most important part for completing the job.

Sean Kenny [01:01:09]: Again, the current assumption is that that's tool-call specific. We currently think the best way to get that out is through user interactions first, because we wouldn't know where to start when crawling. Crawling is such a huge topic, and almost every system is, I don't want to say poorly managed, but if you look at any Google Drive, it's a mess; if you look at any Confluence, it's a mess; Jira and GitHub have their own issues. And I don't want to talk about Salesforce at all. So these are things that are very difficult. One of the things we're doing right now is we built a golden test set that says: well, we've been talking to this user for three months in order to complete this work, and now it's running completely autonomously, and these are the things we've learned.

Sean Kenny [01:01:57]: Let's scrap that and say what happens when we crawl and index the system? How can we learn something that gets us close enough in order to say, okay, this is a good starting point, but we don't have that test set, so it's not something we're going to start with.

Demetrios [01:02:12]: So basically crawling and indexing just creates so much noise.

Sean Kenny [01:02:16]: Yeah, and maybe crawling and indexing isn't the right way of looking at it. It's really about reading and extracting. You want to extract the relevant pieces of knowledge, and I think right now we don't know the rules for that.

Demetrios [01:02:31]: It goes back to: there are certain paths that are taken, there are certain signposts. And if you think about the 80/20 principle, there's probably 20% of the stuff you're doing in Jira that's very important, and you know what it is. But the other 80% is still there. And so if everything is weighted the same, it's going to create a horrible system for the agents.

Sean Kenny [01:02:56]: Yeah, and users might work in different ways. So then do we ignore it? Do we pick one? I think there's another really important point that I skipped earlier, which is that many of these agents will require the user to redefine how they currently work. Again on a very specific process level, my current way of writing product updates is incredibly chaotic. It's very messy. It's not a process at all. And the reason is I don't need a process. I know where I need to go when I do it. I go there, I look at things, I talk to the team and I get it done.

Demetrios [01:03:26]: But that's not agent friendly.

Sean Kenny [01:03:28]: It's not going to work for an agent, and it might not be the best way anyway. This is very much a human-nature thing. Some product managers would have built a process around it; I'm more non-process, so I wouldn't have done it in the first place, and I'm probably not going to until I have to hand it over. But when we talk about learning from existing knowledge, we basically just look at existing processes. We don't leave any space to redefine them. Right.

Sean Kenny [01:03:52]: And so very much the architect comes back into place and saying, well, maybe the right thing to do is of course we can look at what's there, but maybe what we should do is say, okay, this looks like what it is. This looks like it should be the jobs to be done rather than this is the rules to replicate exactly what you're doing right now because they're replicating exactly is I think more likely to not work than giving a suggestion of how you could refactor the process work. Right. I think that's maybe an interesting view on this. So the reading and learning and extracting knowledge inherently takes all the existing things as gold standard, or at least in a first version, which might also not be a good way to go.

Demetrios [01:04:33]: That's all we've got for today. But the good news is there are 10 other episodes in this series that I'm doing with Prosus, deep diving into how they are approaching building AI products. You can check it out in the show notes.

Sean Kenny [01:04:49]: I'll leave a link.
