MLOps Community

Knowledge as a Service // Prashanth Chandrasekar // Agents in Production

Posted Nov 15, 2024 | Views 690
Prashanth Chandrasekar
CEO @ Stack Overflow

CEO and Board Member of Stack Overflow, the world's largest and most trusted community and platform for developers and technologists.

Previously, served as Senior Vice President & General Manager of Rackspace’s Cloud & Infrastructure Services portfolio of businesses, including the Managed Public Clouds, Private Clouds, Colocation, and Managed Security businesses. Also held a range of senior leadership roles at Rackspace including Senior Vice President & General Manager of Rackspace’s high growth, global business focused on the world's leading Public Clouds including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP) and Alibaba Cloud, which became the fastest growing business in Rackspace’s history.

Prior to joining Rackspace, was a Vice President at Barclays Investment Bank, focused on providing strategic and Mergers & Acquisitions (M&A) advice for clients in the Technology, Media, and Telecom (TMT) industries. Was also a Manager at Capgemini Consulting, focused on leading operations transformation engagements and consulting teams across the US.

MBA from Harvard Business School, M.Eng in Engineering Management from Cornell University, and B.S. in Computer Engineering (summa cum laude) from the University of Maine.

SUMMARY

The internet and its business models are changing. For the last 16 years, Stack Overflow has been at the forefront, helping to shape the future of the web. In this time of disruption comes a necessary moment to reflect, change, and break norms. With the expansion of generative AI, many LLM providers are not allowing the world to operate as it has previously. Instead, a new paradigm has emerged: content created by thousands of creators across the open web is now being used to train models without respect or attribution for the original creator. Consequently, in the current transformation, human-centered sources of knowledge are obscured, yet remain a key component of the AI stack. Companies and organizations lucky enough to host these engines of knowledge production are at a decision point: how do they continue to grow and invest in their communities when the technological landscape has changed? A change in strategy will allow these companies, AI tools, and the communities and data sets that power them to thrive.

TRANSCRIPT

Paul van der Boor [00:00:06]: All right, welcome everybody to track three of the Agents in Production conference. I'm very excited because we are joined today by Prashanth, the CEO of Stack Overflow, and we've been talking a lot about AI in the last couple of years together. I know you've got a very exciting view on the role of Stack Overflow, on the role of AI for software developers, and the future of all those things coming together, which you've titled in this talk Knowledge as a Service. We have about 15 minutes, and folks, we also have some time for questions for Prashanth. So I'll be coming back in 15 minutes to join you and moderate the Q&A, and in the meantime the floor is yours. Prashanth, thanks a lot for joining us all the way. I know it's early where you are now.

Prashanth Chandrasekar [00:01:01]: All good, thank you, Paul. I really appreciate the opportunity to talk to this wonderful audience and share a little bit about our view on Stack Overflow, how we think GenAI is going to make a difference in the world, and the problems that we're primarily focused on solving. With that, let me get going. I suspect many of you, or most of you, know Stack Overflow. We're obviously the world's largest software developer community and platform. We've been around for 15 years. We have close to 60 million questions and answers on every possible technology topic. Questions are asked very frequently on our platform.

Prashanth Chandrasekar [00:01:40]: We have something like 69,000 tags of information that organize our content. So, very structured knowledge and data. And in the context of AI and ML, we have something like 40 billion tokens' worth of data on our public platform that's been accumulating over time, which is really exciting for us as we think about the AI world. Beyond Stack Overflow itself, we also have our Stack Exchange sites, close to 160 sites, covering all sorts of other technology-related topics and even non-technology topics, all in this really well-structured, dynamic, and diverse dataset that's been created over time. Our user base comes from literally 185 countries all around the world. It's just been phenomenal to see the impact that we have and the number of people from various regions that we help as part of our mission. In terms of our products, let me just make sure that all of you are aware of the current state of our products.

Prashanth Chandrasekar [00:02:54]: Obviously we have on the bottom here our public platform, or community knowledge base, and on top of that we have two primary product lines. One is the Enterprise products line, and the other one is our ads products. I'll go into a couple of these in a bit. Our Enterprise product is effectively Stack Overflow for Teams, with additional generative AI functionality, which we sell into companies. That is a private version of Stack Overflow that's used for knowledge sharing; effectively, it's a private knowledge store within your company for all your critical information, specifically focused on technologists within companies. That has been around since 2019, and we've enhanced it most recently with our AI functionality. The box in the middle is something that I'll spend quite a bit of time on today, which is our newest offering called Overflow API: our AI and data infrastructure products that we are now in the market with. It has received tremendous interest, and we've created great traction and partnerships, which I'll go into much deeper.

Prashanth Chandrasekar [00:03:59]: And then finally, of course, advertising, which is our oldest business line, because obviously we have a lot of folks who come to our website from around the world, so it's a great place to showcase your AI products as well as broader technology products. In terms of the problems that we are trying to solve in an AI context, they boil down to three things. One is the LLM brain drain, which is all about this question: if humans stop sharing original content, then how do LLMs keep training? Of course, there's some level of debate around things like synthetic data these days, but we are convinced that LLMs absolutely need new information, novel information, information that only humans can create, to continuously progress in terms of increasing accuracy and the effectiveness of the models. The second one is that answers are not knowledge. What we mean by that is, in the current landscape, it looks like a lot of the AI tools hit a complexity cliff. They obviously address a lot of the early, simpler questions, but at some point they tap out on being able to answer questions.

Prashanth Chandrasekar [00:05:05]: What happens when your users are trying to get more advanced answers to advanced questions? We definitely would love to solve that in the context of what we do. Then, finally, there's a fundamental trust issue with using AI tools. I think you're noticing this increasingly now as people make their way through the S curve, and there is still a slowdown in AI adoption, especially in large enterprises and companies. People really want to be able to trust that they can rely on these tools consistently when doing something, for example, production grade within a bank. This trust point really came to the surface when we looked at our developer survey, which we conduct every year; typically about 60 to 100,000 people respond from our community. It was very clear that both last year and this year, 70% of the folks currently plan on at least trying to use AI tools or are already using AI tools for software development workflows. However, only about 40% of them seem to trust the accuracy of these tools. And this percentage hasn't really changed over a couple of years.

Prashanth Chandrasekar [00:06:18]: So there's still some reticence, no doubt, about the level of trust that you can actually give to these tools. So for us, we've come up with five guiding beliefs as we think about the future of development of AI tools. Firstly, we really believe cost management will be a really important consideration for companies, because these tools are becoming expensive. So that's going to be important. Number two, we believe these foundational models will become a commodity. Every day, and I know all of you are in the deep end of this, you see all the developments, whether that's from Llama and the open source world, or a closed proprietary model from OpenAI; there's definitely a race on. And we ultimately believe these models will become a commodity as we move towards an agentic world. Thirdly, personalization, and especially your data within your company, is, I think, going to be absolutely critical for differentiation.

Prashanth Chandrasekar [00:07:13]: And I think that it's a source of differentiation which is really, really powerful as companies think about how to compete with in an AI context. Model evaluation is also all about roi and that is absolutely true. In the enterprise conversations that we have, they're able to companies and CIOs and CFOs are thinking about what return does this actually produce for me in the context of productivity as an example. And then finally, legal and ethical, those debates are going to absolutely continue. Who owns the data privacy and attribution back to the original stakeholders? We clearly have a point of view on that which I'll talk about and that will, I think, also grow in intensity. So we've nailed four different concepts in the context of having this community data in the center, which is obviously something we focus on and we really believe in this notion of socially responsible AI, which is to do the right thing in the context of the ecosystem, so that we ultimately want to progress with the technology community and leverage AI functionality, but it needs to be done the right way, in a responsible way to make sure that they contribute Back to the communities where they have been grabbing data as an example to train models, et cetera. Secondly, the race to accuracy I think is very, very important because it has created this very clear need for highly effective sources of data to increase the level of accuracy, to increase ultimately that trust, as I talked about. And that is a second focus area, the LLM wars.

Prashanth Chandrasekar [00:08:47]: So not only do you have the big companies with the large language models, you have all sorts of specialized small language models being created with proprietary data sets. So I think there's going to be an explosion in the number of companies looking to build AI-powered capabilities, especially at the foundational level, to progress their mission. And ultimately, of course, corporate customers have a very low threshold for anything that's somewhat risky, especially with things like hallucinations, et cetera. So having accurate data is supreme for companies. As for the ecosystem more generally, what we have done is respond by coming up with a licensing model: along with companies like Reddit, Stack is working with these big AI companies so they can officially leverage our human-generated data, creating a feedback loop that LLM companies can use for the ongoing development of their AI and LLM models. And so this is, I think, really a moment where the Internet, while it will continue to stay open for the vast community of members, has very much become a closed Internet as it relates to companies partnering with other companies. The intent for us: we're big believers in open source and in staying open as a community, and we certainly will do that for our community members. But as it relates to corporate companies leveraging data sets, it's now, I think, a very different world, and it has become somewhat of a closed Internet ecosystem. Let me also share just a couple of views on the GenAI tech stack.

Prashanth Chandrasekar [00:10:36]: It's very clear how the stack is layered: web data sources, along with things like chips and model APIs like the LLMs, and of course the cloud environments, and then all the applications and tools that sit on top of that. It's very clear that web data sources like Stack, one of the biggest data sources as I've described, are critical for AI development. You can look at it in a different context as well. On the left side you've got the NVIDIAs of the world, at the bottom you've got the cloud computing companies, in the bottom middle, and then you have the LLM providers. Those LLM providers are very dependent on data and AI infrastructure; in that context, data transformation and curation, as one example, is very important. We certainly will play in this space as a result of what we're doing. And our own testing has suggested that the quality of our data is extremely powerful. When we've tested our data, it shows close to a 20-percentage-point improvement on open source LLM models.

Prashanth Chandrasekar [00:11:37]: And this is actually something we did with the Prosus team. And you can see that just with Stack data being used to fine-tune, et cetera, it absolutely changes the accuracy level, and so it shows the efficacy of high-quality datasets like ours. We've also looked at other external research on this topic. For example, the Meta team, the Facebook team, published a paper about leveraging data sources like Stack Overflow, and you can see the HumanEval score went from something like 5 to 9, or 6.1 to 9.8. So, dramatic improvements in the accuracy level and in model performance, when you look at both these external benchmarks and our own work. And so it's very clear that data is absolutely a critical element. And so we came up with this thesis: how do we really bring AI and humans together on our community to ultimately drive highly trustworthy, attributed content that's highly accurate, personalized for the user, and has feedback loops?
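As a rough sketch of the kind of fine-tuning data preparation being described, accepted Q&A pairs can be flattened into prompt/completion records, the JSONL shape many open source fine-tuning tools accept. The field names and helper below are illustrative assumptions, not Stack Overflow's actual pipeline:

```python
import json

def qa_to_records(qa_pairs):
    """Flatten (question, accepted_answer) pairs into prompt/completion
    dicts, one per line when serialized as JSONL for fine-tuning."""
    return [
        {"prompt": q.strip(), "completion": a.strip()}
        for q, a in qa_pairs
    ]

pairs = [
    ("How do I reverse a list in Python?",
     "Use xs.reverse() in place, or xs[::-1] for a reversed copy."),
]
records = qa_to_records(pairs)
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

Restricting such a dataset to questions with accepted, upvoted answers is one plausible way the "quality" of community data shows up in the benchmark gains mentioned above.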

Prashanth Chandrasekar [00:12:41]: We've been in the upvoting, downvoting game for a long time, and then of course providing recognition back to the people that are ultimately generating all this content. And of course this is foundational on our data. We also went and incorporated generative AI into a whole bunch of our Stack Overflow for Teams offering, the private Stack Overflow enterprise I mentioned previously, with this functionality called Overflow AI. We have everything from semantic search capabilities and conversational search in our private interface, to the same capability in Slack and Microsoft Teams, and even now in the IDE within Visual Studio Code. The latest product that we launched, as I mentioned, is the Overflow API product. What that's all about is making sure that we provide partners and companies with access to our data; they came to us, by the way, when we said we're not going to stay open that way for companies, because they saw that this is a very critical aspect for them. We've now structured an approach where companies can leverage our data through a real-time API, where they get access to all this data that I just described to you, 60 million questions, et cetera.

Prashanth Chandrasekar [00:13:59]: And we've done it in three different forms. All the history of comments, the learning mechanisms; there's almost an iceberg underneath the water. I won't go through all the detail here, but you can see that there's a tremendous amount of information and knowledge that we're able to provide for our partners and customers so they can improve their LLM efficacy. Just to share a couple of use cases: everything from correct answers from the millions of questions, to use cases like RAG and indexing, to code gen and code improvement, to code context and similarities between multiple tags. There's all sorts of ways we can slice and dice this information to make it really powerful for either coding or non-coding use cases, because the format of Q&A and the depth and richness that we have collected over the past 16 years is very useful for model training and fine-tuning. This gives you another sense of all the metadata, et cetera, from the past many years and all the ways in which we collect information. Our ultimate vision now is to make sure that we are able to meet the user wherever they are. Previously, all of you were probably used to coming to Stack Overflow via Google Search; that was just a user interface.
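To make the RAG use case mentioned above concrete, here is a toy retriever over Q&A documents. The data shape and the keyword-overlap scoring are illustrative assumptions; a production system would pull documents through the Overflow API and rank with embeddings rather than word overlap:

```python
def build_documents(qa_items):
    """Turn tagged Q&A items into retrievable text documents."""
    return [
        {"text": "Q: " + it["question"] + "\nA: " + it["answer"],
         "tags": set(it["tags"])}
        for it in qa_items
    ]

def retrieve(docs, query, k=1):
    """Rank documents by naive keyword overlap with the query; the
    top-k texts would be stuffed into the LLM prompt as context."""
    terms = set(query.lower().split())
    return sorted(
        docs,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )[:k]

docs = build_documents([
    {"question": "how to retry a lambda timeout",
     "answer": "Increase the timeout or add exponential backoff.",
     "tags": ["aws-lambda"]},
    {"question": "reverse a list in python",
     "answer": "Use xs[::-1].",
     "tags": ["python"]},
])
top = retrieve(docs, "lambda timeout error")
print(top[0]["text"])
```

The tag taxonomy mentioned in the talk (69,000 tags) is what makes the `tags` field useful here: it gives a retriever a structured filter before any text scoring happens.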

Prashanth Chandrasekar [00:15:18]: Now we're saying, if the user is going to spend a lot of their time in Microsoft Teams and Slack and Google Gemini and ChatGPT and GitHub Copilot, let's be there. Let's be exactly where the user is, so that when they do in fact search for a question, they're able to get the answer from those places. As long as these partners attribute back to the original source on Stack Overflow, we are allowing the user to go back to their flow state, and the partner, of course, is engaging with us in a socially responsible way. If, however, the answer is not available in that chat interface, or through, let's say, ChatGPT, they're able to ask our human community for an answer to that question, and the community answers. New information is created; the user gets their answer through that integration; the contributors who provide those answers are recognized; and new knowledge is created and updated again, with the richness that I've been describing, with all the comment history, et cetera. And the world keeps going around, as you can now train your models again with this updated information. So we're creating what we're calling knowledge as a service. We want to be wherever the user is, and we want to make sure that they continue to stay in their flow state.

Prashanth Chandrasekar [00:16:33]: And we want to solve those problems I described earlier on, so that we don't lose the user if they're in any of these tools trying to get answers to their most critical questions. So our mission is basically the same; just the user interface has changed over time. And this of course means that we will partner in many ways with all the AI companies, like the ones listed on the left. And that's exactly what we have done over the past several months. For example, in February we partnered with Google Gemini, where we struck an Overflow API partnership with them. Similarly, we did one with OpenAI earlier in the year. Most recently we did one with GitHub Copilot, so now you can get Stack Overflow knowledge straight in GitHub Copilot as part of a plugin we have launched. And we very recently agreed another partnership in a similar fashion, not public yet, with a top cloud hyperscaler. So we're just going about effectively integrating Stack Overflow into every possible AI functionality, every AI tool where developers and users spend their time.

Prashanth Chandrasekar [00:17:38]: So that's, in summary, where we are. I think we're super excited that we play a critical role in the AI ecosystem and technology stack: through our vision of knowledge as a service, being wherever the developer is, and also of course being a critical component of the AI tech stack for the future. So with that I will conclude. You can always reach out to me anytime, either on LinkedIn or on Twitter. But with that, I'll hand it back to Paul for any questions.

Paul van der Boor [00:18:10]: Exciting. Thanks for sharing that. And it's clear, I mean, to anybody that's spending any time training models, that 40 billion tokens of the nature that you have at Stack Overflow are immensely valuable. Not all tokens are born equal, and in particular in the case of Q&A around technical questions, those are 40 billion highly valuable tokens. I think you've explained pretty well how they are becoming part of the broader GenAI system, and your vision around that. If I look at the questions here, some people are indeed recognizing that as well. So Fedor comes in with the question: it's clear that there's huge value from websites like Stack Overflow, but what is the future when everyone will be asking LLMs for the answers? You alluded to that a little bit, but maybe I'll give you some space to comment on Fedor's question.

Prashanth Chandrasekar [00:19:07]: Yeah. When people are asking questions in these LLMs, as I showed with that circular knowledge-as-a-service diagram, we want to be able to be one step behind. Previously, the interface was: go to Google, ask that question, and land on Stack Overflow because it was a top link. What we're able to do now is say: okay, ask your question in ChatGPT, and the answer is going to come straight in ChatGPT, which by the way has been trained on Stack Overflow data behind the scenes, which we have done through a formal partnership with OpenAI. And actually, if you go and ask a question right now, how do I do this on, let's say, AWS Lambda, you could click on Sources, and the answer from ChatGPT will most likely cite a Stack Overflow link, because most likely it's providing an answer based off of the original content from Stack. So effectively the user interface has changed, and the whole point is for us to be wherever the developer is. If they're spending time in ChatGPT or any other GenAI tools, we'll be there so we can serve them.

Paul van der Boor [00:20:07]: That's exciting. And maybe a related question, which I can extend because you partially addressed it, from Alexander here. He asks: I assume that LLM providers should be interested in the Stack Overflow community growing further, to mitigate brain drain. Do you have talks with some of the major LLM providers about setting up a collaboration? Will you address that? And maybe you can comment on how you think about this in the future. An extension of the question: how do you think about Stack Overflow learning from those partners? So, which are the questions that get accepted in ChatGPT, or upvoted? Have you guys thought about that feedback mechanism, to also continue to grow the value of the data for the Stack community?

Prashanth Chandrasekar [00:20:50]: Yeah, for sure. There are a couple of things. One is, we absolutely believe in these companies signing up for the socially responsible program that we set up. We say: look, you're actually doing the right thing by investing back in the community, but it's also for your own good as partners, because for you to be able to get new information from the world, this is a great place to continue to partner. You're able to get new information, new questions that people have on their minds, which are still in the thousands, obviously, as people adapt to new technologies and so on, and that is useful to them. So all these AI companies, and cloud companies, have an incentive. It became apparent when we said, look, we are not going to allow you to scrape or download our data dumps for any commercial purposes; that's absolutely something that we don't allow them to do.

Prashanth Chandrasekar [00:21:40]: We obviously allow that for all our community users, and we are trying to have this dual mandate of being open to our community but closed to every technology company that wants to monetize off the base. So I think we've come up with a symbiotic relationship with many of these companies, and we'll see more of these partnerships struck. For smaller companies, we also have an emerging offering for startups that are looking to leverage the data as well. So all the companies want access to the data, and I think they have an incentive to keep us going so that this brain-drain issue does not get worse. The second question is around how these AI companies contribute, how we curate, and what the nature is of the data that we're able to gather. One of the ways, and I'll answer that, is that many of our partners have asked for the ability not only to ask the question on Stack Overflow, but also potentially to provide an early AI-generated answer from their own model. So for example, if OpenAI's ChatGPT can answer a question that is actually tricky, let's say a human question that's been created, then imagine that you have multiple models that are basically trying to answer that question to the best of their ability. That creates, let's say, a 50, 60, 70%, or maybe over time 80%, relevant, high-fidelity answer, which then, by the way, is edited by humans to continue the process. And then it creates, let's call it, a perfect answer to that question.

Prashanth Chandrasekar [00:23:13]: So it'll be a great way to showcase the effectiveness of your LLM as a partner, or even as a startup, if you're saying, look, we have the best knowledge on this topic. And by the way, it's all somewhat circular, but it allows us to effectively play that neutral Switzerland role, where your multiple AIs are able to provide early answers, let's call it, or draft answers, that can then be completed by humans. One other place where we're doing this, by the way, is where people are asking questions. On Stack Overflow we've created something called Staging Ground, which is now completely AI-powered, where the GenAI is actually providing a kind of friendly response behind the scenes, in private, to the question asker. That way you don't have the somewhat negative experience of being slapped on the wrist when you ask, let's say, a basic question, as I did when I first started using Stack Overflow, when they said, look, there's a duplicate question, you're closed out. That's not going to happen going forward, because we're using GenAI to give you friendly feedback early on.

Paul van der Boor [00:24:15]: Very cool. Maybe one last question, given we're on time. I had one, but I'm going to go for Geo's question here in the chat, because I think it's closer to the theme of the conference, which is agents in production. He asks: what about access to the Overflow AI API for agents? Agents that can go into the Stack Overflow API to get the answers to the questions they are struggling with as they're writing and executing code.

Prashanth Chandrasekar [00:24:45]: Yeah, I love the question. It's similar to our API program: what I described was the strategic partners on Overflow API. Then we have something called the emerging offering, which is actually for smaller companies, let's say startups, who are trying to do the same thing but with more specialized use cases. And I can imagine a world where it becomes a self-serve capability, where agents are able to tap into the API, even for commercial reasons, and get this information, but do it through a transaction on our website. That's not a very fantastical world right now. The strategic ones are these big partnerships, humans to humans; I'm involved in many of these, where we talk to, let's say, Google or OpenAI, or my leadership team is. And over time you can imagine the long tail of companies, not only smaller companies but also agents, directly leveraging the data, but doing it in a more self-serve way.
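As a sketch of what such self-serve agent access could look like, the snippet below builds a request URL against the existing public Stack Exchange API. The `search/advanced` endpoint and its parameters are real, but using it as an agent tool here is an illustration of mine; the commercial Overflow API product discussed in the talk is a separate, partner-facing offering:

```python
from urllib.parse import urlencode

API_ROOT = "https://api.stackexchange.com/2.3"  # public Stack Exchange API

def build_search_url(query, tag=None, site="stackoverflow"):
    """Build the URL an agent could GET when its own model taps out:
    search questions by relevance, optionally filtered to one tag."""
    params = {"order": "desc", "sort": "relevance", "q": query, "site": site}
    if tag:
        params["tagged"] = tag
    return API_ROOT + "/search/advanced?" + urlencode(params)

url = build_search_url("lambda function timing out", tag="aws-lambda")
print(url)
```

An agent framework would register something like `build_search_url` behind a "search Stack Overflow" tool, issue the GET, and feed the returned question/answer bodies back into its context; the transactional, metered access Prashanth describes would sit in front of that call.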

Paul van der Boor [00:25:41]: Very exciting. I mean, as you probably see as well, the most mature agents seem to be in the software space, right? Using all sorts of tools, combining basically the powers of agentic AI to help software developers do their work better. So I think that's an exciting starting point. Well, unfortunately we're out of time, Prashanth, but thanks so much for sharing your view, and to the folks online for asking the questions. Just as a quick update, we have a couple of minutes of cushion time as you choose between the next talks in this track. So please take a minute, and I think we'll continue in five minutes.

Paul van der Boor [00:26:26]: So thanks again, Prashanth, and I look forward to speaking again soon.

Prashanth Chandrasekar [00:26:30]: Great. Thank you, Paul. Thank you, everybody.

Paul van der Boor [00:26:32]: All the best.

