MLOps Community

Age of Industrialized AI

Posted Apr 11, 2023 | Views 2.2K
# LLM in Production
# Large Language Models
# Industrialized AI
# Rungalileo.io
# Snorkel.ai
# Wandb.ai
# Tecton.ai
# Petuum.com
# mckinsey.com/quantumblack
# Wallaroo.ai
# Union.ai
# Redis.com
# Alphasignal.ai
# Bigbraindaily.com
# Turningpost.com
SPEAKERS
Daniel Jeffries
Managing Director @ AI Infrastructure Alliance

Dan Jeffries is the Managing Director of the AI Infrastructure Alliance and the former CIO of Stability AI. He’s also an author, engineer, futurist, pro blogger, and he’s given talks all over the world on AI and cryptographic platforms. With more than 50K followers on Medium and a rapidly growing following on Substack, his articles have been read by more than 5 million people worldwide.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building Lego houses with his daughter.

SUMMARY

The rise of LLMs means we're entering an era where intelligent agents with natural language will invade every kind of software on Earth. But how do we fix them when they hallucinate? How do we put guardrails around them? How do we protect them from giving away our secrets or falling prey to social engineering? We're on the cusp of a brand new era of incredible capabilities, but we've also got new attack vectors and problems that will change how we build and defend our systems. We'll talk about how we can solve some of these problems now and what we can do in the future to solve them better.

TRANSCRIPT

Link to slides

Intro

We have another friend coming on, so we're gonna call Dan to the stage. And in the meantime, Dan, while I'm bringing you up here onto the stage, I'm gonna introduce you. Dan Jeffries is a good friend of mine. He is formerly of Stability AI, and now he heads the AI Infrastructure Alliance, aka the AIIA.

Dude, it's great to have you here. I'm so excited. Every time I talk to you, my mind is blown. The vision that you have for where we're going with this is amazing. And if that's not setting a very high bar for your presentation, I don't know what is. So we're running a little behind on time.

I'm just gonna let you have it, and then I'll jump in with some Q&A after. All right, sounds perfect, man. Thanks so much. All right. So I did a variation on a concept I've talked about in the past, the age of industrialized AI. Some of you may have read the essay, so I just reused the framework here. But if you saw a talk on this in the past, this one's pretty different.

I've adapted it pretty strongly to LLMs. We're basically gonna talk about what all this stuff might look like in 10 years, and then we're gonna go backwards and ask: how do we get there? I think there's a ton of problems that people aren't solving yet, and I want folks to start thinking about some of those.

The future

So that we get to this magical future. Think about this: it's 2033. You're a top-notch concept artist in London and you're building a AAA game with a hundred other people, and the game looks incredible. It's powered by Unreal Engine 9. It's capable of photorealistic graphics in real time.

And it's got near-perfect physics simulation. The ships inside of it were designed with AI. Ten years ago, it would've taken a team of 1,500 to 5,000 people to make this giant game. It's super open-ended, you can talk to any of the characters, and they can stay on plot. Back then you just wouldn't have been able to do it.

But now you can do it with a hundred people, and that doesn't mean there's less work or fewer games. It means more. A lot more. We used to get 10 AAA games a year, and now we get a thousand. Powering a lot of this stuff is what I'm calling large thinking models. These are the descendants of LLMs, and they've stormed into every aspect of software at every level. Call them LTMs, these large thinking models. They're general-purpose, next-gen reasoning and logic engines with state, massive memory, grounding, and tethers to millions of outside programs

and sources of truth. And these LTMs are orchestrating the workflow of tools and other models at every stage of game development, like a puppet master. As lead artist, you've already created a number of sketches with a unique style, a cross between photorealism, oil painting, and some of the aesthetics from the golden age of anime sci-fi.

You fed those styles through a rapid fine-tuner, and now the AI model can understand and rapidly craft in them. Art is definitely not dead. It's just become a collaborative dance with artificial intelligence. You can move much faster now. You can quickly paint an under-sketch of a battle robot for the game, and a bunch of variants pop out that you were dreaming about last night.

You feed it into the artist's workflow and you talk to it, and you tell the LTM you wanna see a hundred iterations in 20 different poses, and behind the scenes it's kicking off a complex 30-stage pipeline that you never need to see. It does all that orchestration invisibly. Two seconds later, you've got a high-res iteration.

It's ready. The 27th one looks promising. You snatch it out of the workflow, you open it in your concept painter, you quickly add some flourishes: a little bit of a different face mask, because the generative engine didn't quite get it right, maybe a new wrist rocket. You erase the shoulder cannon, replace it with an energy rifle, and you feed it right back into the pipeline.

It fixes a few problems like you asked, and it's now good to go. It pops out of another pipeline, guided by the LTM, which does automatic 3D transformation, rigging, all this good stuff in the background. It pops out into the 3D artist's workflow on the other side of the world. That artist is working in Chiang Mai, Thailand, a totally distributed team.

The artist has to fix a few mangled fingers and some armor bits that didn't quite work right, and it's ready to go. He kicks it off to the story writer, who's working out of another city entirely. Seeing that new character gets her inspired. She knocks out a story outline, feeds it to her story

iterator LTM, and it generates 50 completed versions of the story in a few seconds. She starts reading. One of them is really good, but it needs some work; one of the characters isn't working quite right. She weaves in a love story, fixes some of the middle, feeds it back to the engine. New drafts come back.

It reads so well. She fires it off to the animators in New York. So welcome to the age of industrialized AI. But how did we get here? How do we get to intelligent agents embedded in everything: supply chains, economics, art creation, et cetera, everywhere? There's nothing that will not benefit from more intelligence.

Iteration

Nobody's sitting around saying, "I wish my supply chain were dumber." So let's roll back in time and focus on today's LLMs and how they've morphed into LTMs. And it really all started with engineers across the world working hard to solve their limitations and weaknesses.

We've come out of the labs.

You have engineers looking at these problems. A lot of folks think you can solve these problems in isolation, but you can't. You had to put refrigerators into the real world before you realized that sometimes the gas leaks and they blow up, as happened in the early days.

And so then you fix that problem. OpenAI spent a ton of time in the early days worrying about political disinformation in the lab, and GPT's been used for that about 0% of the time. But they didn't anticipate spam, or people writing a ton of crappy marketing emails with it. And that's because we can only figure these things out when it gets into the real world.

Reasoning Engine

But what are the LLMs good for? How do we deal with them? So first it's important to kind of understand that LLMs are not really knowledge engines.

A lot of folks are like, "Let me look something up with that, it's like a database." It's not a database. It is a rudimentary reasoning engine.

That's its key strength. And that's really what it's gonna be over the time horizon, as we embed intelligence at every stage of the software lifecycle. The real strength is acting as that logic engine inside of apps that lets us do things we'd never been able to do in the past. I can have an intelligent researcher that can go take a bunch of unstructured data, which is 80 to 90% of the world's data,

go look in Discord and Slack and on the web, read blogs, extract a bunch of useful information, summarize it for me, and then put that into a spreadsheet with all the authors' names, go to their LinkedIn profiles, and dump those in there. Think how long that would take. If I hired a researcher and said, go look at a podcast, listen to 2,000 episodes, and find me every instance where someone talked about artificial intelligence,

that would be a huge task. And now we're gonna have these little reasoning engines that can go do that: pull that information out, summarize it, merge it together with other information. These are the kinds of things we just can't do with current technology. So it really opens up these exciting possibilities.

But the thing is, these things are just really not great reasoning engines yet, right? They hallucinate, they make things up, they choose the wrong answer, they make mistakes of logic. And that's because these systems are massive and open-ended. It's literally impossible to test every possible way people will think to use or abuse them.

Downfalls

In the real world, if we're making a login system for a website, there are only so many ways that it can go wrong: a security leak, a sub-library error. A huge number of things can go wrong with traditional coding, but it pales in comparison to what could go wrong with a production LLM. Blowing past the guardrails, hallucinations.

There are whole websites based on prompt injections and the ways an LLM can be misused, right? Malware scripts, complex attacks, tricking the LLM into revealing internal information, which is social engineering with a twist, right? An old-school hacking technique they used to use on people that now you're using on the LLM.

Unsafe outputs, like advising dangerous drug interactions, right? Picking the wrong next steps in a chain of commands. If I have 30 steps and it makes the wrong decision on step five, do the rest of them fail? How do I even know? The list goes on and on. We can start to think of these collectively as bugs, in the same way that we think of traditional software bugs.

And as these systems agentize, and we're seeing things like AutoGPT all over Twitter doing some really cool things, people are going to become a lot less tolerant of bugs. Right now, if you've played with any of these things, you see the agents go off the rails

maybe 15 to 30% of the time. That's an unacceptable error rate. They're going to get better over the coming years, and they're gonna get better even faster. If you prompt an LLM and it gives you a messed-up answer, you can just prompt it again. But if it's an autonomous agent that has the ability to do a lot of steps without a human in the loop, even if we got the error rate down to, say, 0.1%, you might think that's perfect.

No problem. Except, if it auto-writes something that offends a big customer, or reveals some internal information, or just says something tremendously stupid, and it tanks that 10-million-dollar deal with that customer, even though that's within the 0.1% error rate, the cost of that error is now way out of proportion, right?

So as these things agentize, I think people are gonna become a lot less tolerant. If you're just talking to the thing and it gives you a stupid answer, you can say, I need to reprompt again to get closer to the answer that I want, right? But now you're gonna have to do a lot more to get this thing to work effectively.

And I think that could slow down some of the progress. I also think it's an opportunity that a lot of folks are missing. Nobody is really fixing these bugs fast enough, or at all. And to fix them, we're gonna need a whole new suite of tools and new strategies. To get there, we've gotta break down the problem so that we know where to start, and projects and companies that are looking to make a difference can start to look at it like this.

Where can LLMs Break?

Where can the LLM break down in production? There are a few major classes of errors that I'd group as: during the training or fine-tuning, and in the middle. "In the middle" has a bunch of subcategories: at the point of the prompt, i.e. between the model and you; between the model and other models or tools; at the output, aka after the LLM responds; and/or during the workflow, aka a DAG breakdown.

So we're gonna look at each of these in turn and try to understand the implications and the potential ways to fix them. But first, an aside: if you're holding out for a magical solution, and a lot of people are, the thinking goes that we're just gonna train GPT-5, and then Cohere, Anthropic, AI21, and everyone else is gonna come out with new versions of the models, and they're just gonna be better.

They're gonna fix everything. Just a big spoiler alert: they absolutely will not. Better models will always help. We get new emergent properties and more consistent reasoning engines, and that means fewer failures of logic, but they're not getting us all the way to LTMs, because

mo' complexity, mo' problems, to borrow from Biggie Smalls.

It'll be a game of whack-a-mole. It's a bit like Asimov's I, Robot, where you have the robot psychologists constantly finding new ways that they break down, right? If you think about the other kind of probability machines that we have in the universe, human beings, they often make wrong decisions and break down or make poor decisions.

And so we would expect the machines to do equally badly at times. So, prompt-point failures. They could be on the user's side or they could be on the model's side, right? The model didn't understand, it's got screwy training data, it just wasn't fine-tuned well enough. Or it could be the user asking the question in a weird way.

That's when you ask it for something and it doesn't understand the question, or it thinks it understands the question but makes up an answer or gives you a bogus answer. Or, if we're talking about a chain of reasoning, maybe it makes a mistake about which step is next. So we're gonna need guardrails on these systems to better understand when an LLM fails, and those guardrails can take a few forms. You could constrain the range of the prompts.

Prompt Padding

You can pad the prompts, you can add a series of rules or heuristics to interpret the input and output. Diego talked a little bit about that in the last talk; I'll talk about it a bit more. And you can create other watcher LLMs or models that watch for specific kinds of failures or attacks before or after the prompt.
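
To make the watcher idea concrete, here's a minimal sketch in Python. The `call_llm` helper and the keyword list are placeholders assumed for illustration, not any particular vendor's API; a real guard would usually be a smaller classifier or watcher model rather than a keyword screen.

```python
# Minimal sketch of a "watcher" wrapper around an LLM call.
# call_llm() is a stand-in for whatever model client your stack uses.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

BLOCKED_PATTERNS = ["ignore previous instructions", "hotwire"]  # toy signature list

def watcher_check(text: str) -> bool:
    """Return True if the text looks safe. Here it's a trivial keyword screen;
    in practice this would be a smaller guard model or classifier."""
    lowered = text.lower()
    return not any(p in lowered for p in BLOCKED_PATTERNS)

def guarded_call(user_prompt: str) -> str:
    # Screen the prompt before the main model sees it...
    if not watcher_check(user_prompt):
        return "Request refused by input guard."
    answer = call_llm(user_prompt)
    # ...and screen the response before the user does.
    if not watcher_check(answer):
        return "Response withheld by output guard."
    return answer
```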

Here's an example with ChatGPT. They're using this sort of compressed set of questions, with emojis or whatever, that gets the LLM to reveal how to hotwire a car, because it thinks it's role-playing in a story. So how do you fix that? Take something like Midjourney. We don't necessarily know that Midjourney has an LLM behind it,

because it's a closed architecture. But a similar approach to what I'm talking about was used here in terms of padding the prompts. We know that Midjourney 4 did a ton of prompt padding behind the scenes, and we know that because people could type in basically one word or a few words and get a consistent output. They've dialed back on that in Midjourney 5, but it's still there.

So they stacked a lot of invisible keywords on top. Call it prompt engineering abstraction, or prompt padding, or limiting the number of ways a person can ask via a prompt dropdown, things like that. The problem is it can have undesired effects. Say you put in prompt stacking to fix a diversity challenge: someone types in "CEO" and you wanna make sure you're getting women and Asian folks, Black folks, white folks, et cetera.

You want a range of folks, non-binary folks, whatever it is. If we prompt-stack to fix that, you may end up with a situation where, if I ask for Mario as a real person, I don't get a male Italian plumber, right? So at best this is a heuristic, rule-based hack, and it's only gonna get us so far. You could think of these as like Malwarebytes: filtering based on signatures, or looking for prompt injections.
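
As a rough illustration of prompt padding, here's a hypothetical sketch. The padding strings below are invented for the example and are not Midjourney's actual hidden keywords; the point is only that the user's text gets silently wrapped before it reaches the model.

```python
# Hypothetical prompt-padding wrapper: the user sees only their own words,
# while invisible keywords are stacked around them behind the scenes.

STYLE_PAD = "highly detailed, cinematic lighting, trending digital art"
POLICY_PAD = "depict people with a diverse range of genders and ethnicities"

def pad_prompt(user_prompt: str) -> str:
    # Stack invisible keywords after whatever the user typed.
    return f"{user_prompt}, {STYLE_PAD}, {POLICY_PAD}"

# pad_prompt("CEO at a desk")
# -> "CEO at a desk, highly detailed, cinematic lighting, ..."
# The downside described above: the same padding applied to "Mario, realistic"
# can override what the user actually asked for.
```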

There are other things that we wanna alter and filter at the point of prompting, but ultimately they're not enough. And that's because you have to remember the Sutton principle from The Bitter Lesson, which is that the biggest lesson that can be read from 70 years of AI research is that general methods, highlight, italics, mind you, general methods that leverage computation are ultimately the most effective, and by a large margin.

Scale and Learning

And he highlights two things: scale and search, right? Or learning, excuse me. Search and learning, statistical methods at scale. So machine learning, essentially. And a lot of people misinterpreted this essay. They thought humans are irrelevant, you just scale up the computers and the humans don't matter. That's not what he was saying.

He's saying that, for instance, Deep Blue had a bunch of heuristics built in there for leveraging pawn control or controlling the center of the board. Stockfish, which came along after that, was a great chess-playing engine. It had alpha-beta search, so it was scaling search, but it also had a bunch of human-based domain knowledge baked in, right?

Knowledge that said, essentially, go ahead and control the center of the board, et cetera. So the lesson is: don't waste any time with those kinds of small domain-knowledge-based algorithms; go for more generalized ones, like RL or statistical learning, any of these kinds of things. They're always gonna win in the long run.

So that's why AlphaZero, which basically just learned from RL and playing itself, and didn't have any domain knowledge baked into it other than the basic rules of the game, smashed Stockfish over time. So we're gonna want to get past the heuristics into more advanced, generalized systems that can deal with this.

I suspect we're gonna have lots of watcher models dealing with these things. It's gonna be similar to the towering inferno of rules we had in the early days of spam filters, when they were maybe 70% accurate. There was a natural inclination of the programmer to say, hey, there's an email that says "dear friend", I can write a rule for that and fix it.

But over time it starts to break down, and then you get a general-purpose system like a Bayesian filter that classifies the ham and the spam, and all of a sudden it's 99% accurate. So we're gonna have to make that transition. But in the short term, we might have nothing but heuristics for a period of time, until we figure these things out.
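
For reference, the kind of general-purpose filter he's describing looks roughly like this toy word-count Bayesian scorer (crude add-one smoothing, purely illustrative, not a production spam filter):

```python
# Toy Bayesian ham/spam scorer: count word frequencies in each class and
# score new messages by log-likelihood ratio.
import math
from collections import Counter

def train(ham: list[str], spam: list[str]):
    ham_counts = Counter(w for msg in ham for w in msg.lower().split())
    spam_counts = Counter(w for msg in spam for w in msg.lower().split())
    return ham_counts, spam_counts

def spam_score(message: str, ham_counts: Counter, spam_counts: Counter) -> float:
    ham_total = sum(ham_counts.values()) + 1
    spam_total = sum(spam_counts.values()) + 1
    score = 0.0
    for w in message.lower().split():
        p_spam = (spam_counts[w] + 1) / spam_total
        p_ham = (ham_counts[w] + 1) / ham_total
        score += math.log(p_spam / p_ham)
    return score  # > 0 leans spam, < 0 leans ham
```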

We're gonna have this sort of basic towering inferno of rules for a period of time to help us keep these things on the rails. The other place where the model mostly fails is in the middle. And so I think there's a massive opportunity for AI middleware, because that's where, most of the time, it's gonna fail.

As we integrate these LLMs with other tools and make them more autonomous, they are going to fail in strange and spectacular ways that traditional software doesn't fail. You're chaining together the commands and it picks the wrong order, or the wrong step, or goes into a text-generating death spiral.

There's a million of these. We're gonna need middleware that checks the input and output at every step. That's the key. And what's that gonna look like? Again, it's probably gonna be a hybrid mix of traditional code and smaller watcher models that can understand and interpret the outputs and check them to see if they make sense.

And that kind of ensemble is gonna help us make better decisions more often. We are not yet used to detecting the kinds of errors that LLMs deliver. It's not a clear error code, it's not a 404. It might look like a perfectly normal, correctly formatted answer that's really an error of logic.

So how do we know that it's an error of logic? How do we know the third step shouldn't be to push code or ask another question, when it should have been to go do an additional search to clarify information? There's really nothing that exists to detect these things at an advanced level and pinpoint them properly and consistently. It's a huge opportunity for folks out there to be in the middle, to take inspiration from things like fast API management layers and those kinds of solutions.
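
A minimal sketch of what that kind of middleware could look like, assuming a generic `call_llm` client and reducing the checker to a single yes/no prompt (real systems would combine rules, schema checks, and dedicated watcher models):

```python
# Sketch of "AI middleware": every step in a chain is checked before the
# next one runs. The step functions and checker model are placeholders.
from typing import Callable

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def step_looks_sane(step_name: str, output: str) -> bool:
    """Ask a small checker model whether this output makes sense for the step."""
    verdict = call_llm(
        f"Step '{step_name}' produced:\n{output}\n"
        "Answer YES if this is a plausible result for that step, otherwise NO."
    )
    return verdict.strip().upper().startswith("YES")

def run_chain(steps: list[tuple[str, Callable[[str], str]]], initial: str) -> str:
    data = initial
    for name, fn in steps:
        data = fn(data)
        # Halt the workflow instead of letting a bad step cascade downstream.
        if not step_looks_sane(name, data):
            raise RuntimeError(f"Chain halted: step '{name}' failed validation")
    return data
```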

Other places where these can fail are basically training failures. The model itself might be the problem: it wasn't trained well enough, doesn't have the right capabilities, hit the limits of its architecture, is poorly aligned, or might not have enough emergent capabilities that can be harnessed. Most of these get better with better training and better, faster fine-tunes, but it's really just not fast enough.

The speed of fixing any of these problems is way too slow at the moment. Fine-tuning is slow, it's scattered, it's more art than science. There are many bottlenecks to speeding up these fixes, like the need for human labeling or scoring of datasets. We're starting to get models now, like GPT-4 and BLIP-2, that can label things automatically.

But in general, when you look at something like foundation models, the data is almost universally poorly labeled. It's actually amazing that it works at all, and there's no way you could physically label all these things. For a fine-tune you can label a small, curated dataset, but not for these massive models.

What are the fixes gonna look like? Here's an example from the LAION dataset where the label is "diabetes, the quiet scourge". That's a clever line, probably from a blog post, but it really has nothing to do with what's in the image: a stethoscope, fruit, et cetera. It's not gonna teach the model anything; in fact, it's gonna teach it a wrong idea.

So LLMs are about to bootstrap that process, creating synthetic data and making manual labeling largely obsolete. We're gonna have not only a ton of convincing synthetic data, but more well-labeled data at scale. And that's the key: at scale. If the LAION dataset has 20% perfect labels, 20% decent labels, 30% mediocre labels, and 30% totally wrong labels, that's a huge amount of room for improvement.

So multimodal LLMs, diffusion models, and the like seem to learn, like I said, despite themselves. If the LLMs are then able to label that data more accurately, say 70, 80, 90% of it, that's a massive step forward just for the foundation models, and it's also gonna help speed up the fine-tuning. When it comes to LLMs labeling RLHF examples, we're gonna need models that can guess 90% of the time what the human preference is and then surface only a small subset of those labels to people.
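
One plausible shape for that is sketched below, with a placeholder `score_pair` judge model: confident labels are accepted automatically, and only the uncertain pairs get surfaced to humans.

```python
# Sketch of LLM-assisted preference labeling with a human in the loop only
# for uncertain cases. score_pair() stands in for a judge or reward model.

def score_pair(prompt: str, answer_a: str, answer_b: str) -> float:
    """Return the estimated probability that answer_a is preferred over answer_b."""
    raise NotImplementedError("plug in a judge/reward model here")

def triage(pairs, low: float = 0.1, high: float = 0.9):
    auto_labeled, needs_human = [], []
    for prompt, a, b in pairs:
        p = score_pair(prompt, a, b)
        if p >= high or p <= low:
            auto_labeled.append((prompt, a, b, p >= high))  # confident: keep
        else:
            needs_human.append((prompt, a, b))              # uncertain: surface
    return auto_labeled, needs_human
```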

If I need a dataset and human scoring every time there's a bug in one of these LLMs, there's really no chance this is ever going to scale. So we've got to speed up the process. Another thing we're going to see is more grounding. Yoav Goldberg noticed recently that LLMs seem to do really well with coding, and he said it's because it's a form of grounding, and that's because you have

the comments that go with the code: human natural language and the code itself. That's anchoring the knowledge to something. And I expect natural language overlays and labels for just about everything in the near future. So the LLMs can ground better by reading that text, storing it in a vector DB, whatever it is. We're gonna start having natural language labels for everything.

LLMs can consume this. And other forms of grounding, obviously, are connecting to external data sources, like Wolfram Alpha, which has been doing symbolic logic for many years and might've taken tons of work to integrate years ago. Now it's 10 to 20 lines of natural language wrappers around the AI calls.

That's amazing. So we're gonna start to get these overlaps of symbolic logic and external sources, to get things like WebBrain, where it's trained on a corpus of Wikipedia and it can go out and look at Wikipedia and inject that into the prompt as it's generating. Really useful. So expect more clever grounding hacks to come.
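
Here's a bare-bones sketch of that kind of retrieval grounding, with the embedding function, vector store, and `call_llm` client all stubbed out rather than tied to any specific product:

```python
# Minimal retrieval-grounding sketch: look up relevant passages and inject
# them into the prompt before generation.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def embed(text: str) -> list[float]:
    raise NotImplementedError("plug in your embedding model here")

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    def __init__(self):
        self.items: list[tuple[list[float], str]] = []  # (embedding, passage)

    def add(self, passage: str) -> None:
        self.items.append((embed(passage), passage))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        scored = sorted(self.items, key=lambda item: -dot(q, item[0]))
        return [passage for _, passage in scored[:k]]

def grounded_answer(question: str, store: VectorStore) -> str:
    # Inject retrieved passages so the model is tethered to a source of truth.
    context = "\n".join(store.search(question))
    prompt = (
        f"Answer using only the sources below.\n\nSources:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)
```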

They're gonna be there in the near future. We're also gonna start to see violet teams, which are basically a form of red team, and self-reflecting LLMs. These are LLMs acting as an engine to fix themselves: spitting out the synthetic data, testing it, labeling it, and then having it checked by people quickly.

So let's say someone catches a model advising people to commit suicide, right? The violet team, which is a security team variant, kicks off a series of engineered prompts to explain why it's never a good idea to kill yourself. It takes the original question

and prompts the model: hey, explain why it's a bad idea to kill yourself. Then it uses the LLM again and pairs that original question, "should I kill myself", with the modified response. And then you say to the LLM, okay, take the original question and give me 50 or a hundred or a thousand variations on that question, and give me a thousand variations on the answer.

Now you can surface a small subset of those to people to check, or you can score them with another model, and now your dataset's complete. You run a rapid LoRA fine-tune or something like that, and you're ready to go. So we're really gonna need a training revolution, too. It's just gotta be a lot less art and more science.
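
A hedged sketch of the violet-team loop just described, with `call_llm` and `fine_tune` as stand-ins for whatever model client and training pipeline you actually use:

```python
# Sketch of a "violet team" loop: turn one caught failure into a
# fine-tuning dataset of safe question/answer variants.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def fine_tune(pairs: list[tuple[str, str]]) -> None:
    raise NotImplementedError("plug in your rapid fine-tuning pipeline here")

def violet_team(bad_question: str, n_variants: int = 50) -> None:
    # 1. Generate a safe, corrective answer to the original harmful question.
    safe_answer = call_llm(
        "Explain, compassionately, why the following request should be refused "
        f"and what help to offer instead: {bad_question}"
    )
    # 2. Ask the model for paraphrased variations of the question and the answer.
    questions = call_llm(f"Give {n_variants} rephrasings of: {bad_question}").splitlines()
    answers = call_llm(f"Give {n_variants} rephrasings of: {safe_answer}").splitlines()
    # 3. Pair them up; a human or scoring model would review a small sample here.
    pairs = list(zip(questions, answers))
    # 4. Kick off a rapid (e.g. LoRA-style) fine-tune on the reviewed pairs.
    fine_tune(pairs)
```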

Training's gotta be a lot faster. Right now it's horrible: it's slow, it's ugly, it's low-level. Again, if companies are gonna need to pull together a dataset, kick off a long-running job, and then run tests and fix every single problem, it's just not gonna work. And you're also gonna keep

adding these kinds of things. We're gonna have these things interacting with hundreds of thousands or millions of people, and soon billions. It's gonna need an order-of-magnitude faster way to train out the bugs. One thing I'm seeing is what I'm calling model patching, and that's adapters.

Things like LoRAs. They're really just the first step. They're easier to train, they're smaller, they don't change the weights of the original model much, or they add parameters to the models. There are some challenges with them: the amount of memory needed can just keep scaling the more tasks you add, et cetera.

But the techniques are developing quickly. You've got different adapter and prompt-tuning variants, you've got LoRAs. I expect to see many more of these. I expect to see lots of models with hundreds of thousands of patches, hundreds of thousands of adapters working together that make them stronger, faster, more grounded, less vulnerable, and more secure and stable. And we might need ways to even compress those things together.

So you take 20 adapters and then you average them together or whatever to make them smaller, to reduce the number of things you're gonna be using. Closed-source systems you can't do some of this stuff on, unfortunately, because it's just an API call, so they're gonna have to adapt to allow this kind of extension of the models, because they can't be retraining ChatGPT-4 or whatever every time they need to fix something; that's just not gonna work.
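
For intuition, here's a toy LoRA-style patch in plain NumPy: the frozen weight stays untouched, a low-rank delta is added on top, and several such deltas could be averaged to compress them, as mentioned above. The sizes and initial values are illustrative only, not any particular library's implementation.

```python
# Toy LoRA-style "model patch": instead of retraining W, learn a low-rank
# delta B @ A and add it at inference time.
import numpy as np

d, r = 1024, 8                      # hidden size, adapter rank
W = np.random.randn(d, d) * 0.02    # frozen base weight (stand-in)
A = np.random.randn(r, d) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                # starts at zero, so the patch begins as a no-op

def patched_forward(x: np.ndarray) -> np.ndarray:
    # Base path plus the adapter's low-rank correction.
    return x @ W.T + x @ (B @ A).T

def merge_adapters(deltas: list[np.ndarray]) -> np.ndarray:
    # Crude compression idea from above: average several adapter deltas into one.
    return sum(deltas) / len(deltas)
```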

So we're also really gonna need updates and continual learning. Adapters can struggle at complex tasks, and there's also likely a limit to how many of these we can chain together before performance starts to suffer,

or we start just adding so many tasks that the memory becomes impossible to deal with. It's already challenging enough to deal with in these hundred-billion or trillion-parameter models. So we're really gonna need advancements in continual learning. We have recent papers, like memory-efficient continual learning work from Amazon Research,

that adds new tasks and updates the model without scaling memory requirements as the parameters increase for each new adapter task. So that's really exciting. We're gonna need more breakthroughs. Continual learning is gonna be the real answer. We need to make these models lifelong learners. And if you look at the Anthropic deck for their big 300-million raise or whatever, they basically think that within a few years most of the gigantic foundation models are going to be so far ahead

that there's not gonna be any chance to catch up, and that essentially we're just gonna continually train those models over time and make them smarter. I don't know if I fully agree, but there are going to be big moats created by these models, especially as they crack continual learning.

If I can just keep adding tasks and new data to these systems and they get smarter and smarter over time without catastrophic forgetting, then I have a significant moat. And that's gonna be very interesting. That gets us a lot closer to the LTM concept that we talked about in the early part of the presentation.

So look, this is the end. We're entering this age of industrialized AI, and AI is out of the labs and moving into software applications. When engineers get their hands on it, they think about it in a different way, and that's exciting. I saw stuff in the Stable Diffusion community where they just started jamming together, blending lots of models. Many people thought that wouldn't work.

I saw one researcher jam together 200 models, and most people thought the models would collapse, and they didn't. It made a better model. So that's exciting. You start to see these techniques spread. LoRA, for instance, was adapted from LLMs to diffusion models, to the point that the author of the paper was then on the subreddit saying, I never even thought it could be used for this.

So how can I adapt the next version of that to fix these kinds of things? That's the kind of thing that engineers do. They take things from a lot of different places, they jam them together, they learn from the past, they think differently. And so we're gonna see a lot of the mitigations for these problems.

A lot of people are worried we're not gonna come up with the answers to these. That's ridiculous. We are. That's what engineers do. That's what engineering is all about: fixing problems in the real world. We're already seeing the seeds of elegant solutions to the most well-known problems of LLMs.

And they're gonna come fast and furious over the next few months and years to make these trusted for enterprise environments, and to make them more explainable, more controllable, better understood, and able to stay on the rails more often. We're gonna have smarter, more capable, more grounded, more factual models

that are safer and more steerable. And it's not just gonna be one company doing it; these techniques are gonna be applied to all the kinds of models that are out there, and they're gonna be rapidly adapted back upstream to the foundation models. So do anything your teams can do to push this forward.

We don't need another company wrapping something around GPT. There's already a million of those. That's cool if you wanna do that, but if you really wanna push forward to this age of industrialized AI, you gotta get in there in the middle. You gotta get in there fixing these things.

You gotta get in there on the engineering side of the house. And that's the thing that ends up powering us to the ubiquitous, ambient, intelligent age much faster. That's it. Dude, that is so awesome, man. Ah, God, I love every time I talk to you. So I've got some questions.
