MLOps Community

Building AI Products across Multiple Domains: Commonalities & Non-Commonalities

Posted Mar 04, 2024 | Views 558
# AI Products
# Applied AI
# Uber
SUMMARY

"I will walk through some of the key things that I have noticed in the space of Applied AI - what it takes to build AI into products and surfaces that do not contain it. How do you persuade partners of value? How do you get things done? What pitfalls might you run into, and how do you solve them?"

TRANSCRIPT

Building AI Products across Multiple Domains: Commonalities & Non-Commonalities

AI in Production

Slides: https://docs.google.com/presentation/d/1unUcwXSmqf70SxzQAY_zWKlQ9DMwL999m97tqQGvzc0/edit?usp=drive_link

Adam Becker [00:00:06]: Next coming up is Dhruv. Let's see, Dhruv, are you around?

Dhruv Ghulati [00:00:09]: Yes.

Adam Becker [00:00:10]: One, two. Okay, I think you're live.

Dhruv Ghulati [00:00:13]: Yes.

Adam Becker [00:00:15]: All right. And Dhruv, are you going to share? You're going to share your screen as well?

Dhruv Ghulati [00:00:19]: I will share my screen.

Adam Becker [00:00:21]: Awesome. So you're going to talk about what it takes to actually bring AI value to surface areas that normally don't have it. And I'm stoked to hear what you have to say about this. As soon as you share your screen, we'll add it to the stream. Here it is. Okay. Awesome. This is coming up.

Adam Becker [00:00:37]: I'll show up in ten minutes. Thank you very much, Dhruv. Take it away.

Dhruv Ghulati [00:00:43]: Cool. Hi, everyone. My name is Dhruv, and I'm going to talk about some of the things I've learned building AI products across lots of different problem spaces and domains: a general, 100-foot view of what the key things are that are common in building AI products, and, when you look at different AI problems, where I've noticed that things can change a lot, and the sort of adaptation you have to make for the different companies you work for and the problems you're looking at. So who am I? I started my career as an investment banker, and I've worked across the stack of different sizes of company. I started off at a company doing information extraction, using AI to extract data from web pages. I've been at a five-person AI startup that was building intelligent assistants. Then I took my machine learning MSc at UCL, and I also founded my own startup, where I went through the journey of building a team of 20-30 people doing automated fake news detection.

Dhruv Ghulati [00:01:53]: Then I was at a company of about 600 people, and I'm now at Uber, where I run all of our applied AI product management. So I've worked on lots of different problems, and I want to give this context to frame the next part of the talk. I've worked on things like extracting data from driver's licenses and grocery receipts, so text extraction. I've worked on recommender systems: I was at Bumble, where I was responsible for thinking about how we effectively build utility functions and relevance ranking for dating. I've built various forms of intelligent assistants, obviously at Uber as well, so you can imagine there's something relevant there. Harmful content detection.

Dhruv Ghulati [00:02:38]: That's understanding, in text, whether content is harmful, hyperpartisan, or biased; fraud detection on documents; fact checking; and even the ML tooling side of things: how do we build annotation tools, how do we build systems for declarative forms of MLOps, and so on. Then there's a bunch of other things, unrelated to classification, recommenders, and extraction, that are more optimization-function-based problems. For example, at Uber I work on things like building heat maps for drivers so they understand where to go to earn the most, and on the right way to ramp up pricing for couriers when they're given orders, to optimize an overall goal for our courier pricing. So a bunch of different things. I wanted to start with: how does an AI product manager, in my experience, differ from the traditional product person? I think this is really relevant to this conference. My experience, especially at Uber, is that a UX PM thinks about things like functional versus non-functional requirements.

Dhruv Ghulati [00:03:59]: There's different discussion about how to split up different types of requirements. There's much more focus on design sprints. There's this element of sort of how do you present the problem space? Is it kind of opportunity driven versus problem driven? There's a lot of experience and kind of know how that's built through the whole product world around agile scrum, all these different types of methodologies, user requirements, stories, jobs to be done, really painting these kind of like elaborate five year visions for things. And then there's a lot of focus on kind of a B testing and XP processes. I think if you look at AI product management, I think the things that I've noticed differ in terms of just buzwords that we all kind of probably have worked with as AI product managers. A lot of our terminology that comes up in our heads are things like training, data strategy, post launch monitoring. How do we build maybe orchestration layers so that users can kind of access our AI products and build and combine them in different ways. What components go into the AI product that we're trying to build? Maybe we need to split up the problem in different ways.

Dhruv Ghulati [00:05:12]: Do we combine them with ensembles or some sort of hybrid systems? What's our annotation strategy? The things that are common between product and AI product really sit across both of those lists. The most important thing I've noticed is that we still have to have the same deep level of rigor on success metrics for the product we're trying to build, and on how we actually test that it's working. So experimentation processes and A/B testing are still really relevant in AI product management. The key skill sets of AI product managers that I've had to learn through the years are, first, being able to break down problems into really detailed but simplified parts, componentization: looking at a high-level view and thinking, okay, how do I need to break this down? We can go through some examples. Clarity of expression is really, really important as an AI product manager, particularly because what you're trying to explain is extremely technical. The other big thing I've had to learn is that in this field, which is pretty new, a lot of the people building AI obviously started in research, and there's an inherent desire to invent new things and push the boundaries of research. As an AI product manager, a key skill is to work with the personalities there and explain the benefit of shipping things in production.

Dhruv Ghulati [00:06:45]: I think there's a real focus on dealing with uncertainty of outcomes, particularly with stakeholders that you have to deal with. How do you kind of think about sprint cycles, what methodologies, talk about agile scrum, what are the methodologies that work really well for an AI development process? And I think obviously keeping on track with all of the technical developments that are taking place, which probably like a UX product manager, focused product manager doesn't have to do as much. Now to the focus, commonalities and non commonalities I've learned in AI product management. I want to walk through some examples. So here are some things that I've had to deal with when thinking about AI problems. So, in the field of document transcription or fraud detection, one of the companies I was at had a team in place, for example, where they had fraud specialists who were able to kind of evaluate very clearly if a document had been tampered with, if a photo had been swapped out, if a corner had been kind of erased in a document, and that might be a form of document fraud. In another company that I worked at, there wasn't that kind of fraud expertise. So with that, how do you think about how do you build golden data sets for, how do you evaluate your systems? On the one hand, you have these experts that are available, on the other hand, you don't.

Dhruv Ghulati [00:08:12]: And so there's some interesting problems that you can come across around. How do you build evaluation systems based on the resources that you have? And how do you form intelligent strategies for kind of relabeling, passing things that you're not sure about, to different queues, to kind of get a sense of human accuracy. In one situation, our rollout process for the AI product was very much reach an average level of accuracy, and then have everything passed to humans for a hybrid check on things like fraud and transcription. On the other case, we don't roll anything out until we're beating human accuracy, which is a very, very high bar and sort of a one shot launch process. Document fraud. In one situation, you might be thinking about building in house systems. You have that fraud expertise. You also understand what different types of fraud exists, so you can actually build systems to synthetically generate fraud for your training data.

Dhruv Ghulati [00:09:09]: On the other hand, you don't have that. It's not a core part of the business. So how do you think about build versus buy? In the case of natural language understanding, I built a company that was trying to do things like detect stances of different statements towards given issues. So is this pro climate or an anti climate stance? And we had to build lots of training data handcrafted for these types of statements. Right now, with GPTs, you can effectively ask a question like, is this statement pro climate or is it not? And you get that answer out the box. And really your work is not hand coded labeling, but prompt engineering. Another way is just to craft the AI problem. Commonalities and non commonalities when you think about goal optimization, often with kind of Markov processes or bayesian kind of systems, you have to think, are you in a two sided marketplace or a one sided marketplace? And this is one of the things I've learned, particularly at Uber, just because we're optimizing a given problem and we're trying to build the best solution for that, we have to think about the marketplace in particular.

Dhruv Ghulati [00:10:16]: So, for example, if we're trying to drive earners and drivers to a particular location on a map so that they can earn more money, what is that going to do to the marketplace supply in that area and the demand? And we have to think about all sorts of different levers and factors that might be affected with our promotions and deals and offers and so on. In the case of labeling, right? In the case of some AI problems, for example, transcription, you know what the first name of a driver's license is. But in the case of a problem, like trying to detect if a bias detection in news articles, even your labels are uncertain. And finally, one thing that I've been finding really interesting in the case of this current paradigm that we're in with AI product management is we were trying to build a system. So I was involved in the product that was trying to predict you would have a prompt like let's go to the movie that we were talking about last week. You would have these hard coded responses that we would code up before building and launching the product. Now, with llms, you can actually build test sets of prompt and expected response. You can use an LLM to even generate the test data for you and even evaluate and optimize the prompts that you're testing on.

Dhruv Ghulati [00:11:31]: So llms are effectively now being actually built into the actual development process of the AI products internally, which I find really interesting. So I think this is a bit of a lightning talk on kind of some of the things that I've been learning through my journey in AI product management, but very open to questions as well.

Adam Becker [00:11:50]: Nice, Dhruv, thank you very much for all of this. I hope people can find you in the chat afterwards and on Slack as well. I have a feeling that AI product management is just going to be a field that continues to blow up. It is fascinating, and I think it's going to be relatively orthogonal to a lot of the traditional, as you put it, UX PM work. I think there's so much work to be done there and a lot of thinking. I thought it would make for a great podcast; I don't know if anybody's already done a podcast on AI product management, but I think the market is ripe for one. Dhruv, thank you very much for coming.
