MLOps Community

Navigating Machine Learning Careers: Insights from Meta to Consulting

Posted Jan 27, 2025
# Meta
# Consulting
# Instructed Machines, LLC
SPEAKERS
Ilya Reznik
ML Engineering Thought Leader @ Instructed Machines

Ilya has navigated a diverse career path since 2011, transitioning from physicist to software engineer, data scientist, ML engineer, and now content creator. He is passionate about helping ML engineers advance their careers and making AI more impactful and beneficial for society.

Previously, Ilya was a technical lead at Meta, where he contributed to 12% of the company’s revenue and managed approximately 30 production ML models. He also worked at Twitter, overseeing offline model evaluation, and at Adobe, where his team was responsible for all intelligent services within Adobe Analytics.

Based in Salt Lake City, Ilya enjoys the outdoors, tinkering with Arduino electronics, and, most importantly, spending time with his family.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.

SUMMARY

Ilya Reznik shares his insights into machine learning and career development within the field. With over 13 years of experience at leading tech companies such as Meta, Adobe, and Twitter, Ilya emphasizes the limitations of traditional model fine-tuning methods. He advocates for alternatives like prompt engineering and knowledge retrieval, highlighting their potential to enhance AI performance without the drawbacks associated with fine-tuning.

Ilya's recent discussions at the NeurIPS conference reflect a shift towards practical applications of Transformer models and innovative strategies like curriculum learning. Additionally, he shares valuable perspectives on navigating career progression in tech, offering guidance for aspiring ML engineers aiming for senior roles. His narrative serves as a blend of technical expertise and practical career advice, making it a significant resource for professionals in the AI domain.

TRANSCRIPT

Ilya Reznik [00:00:01]: So yeah. I'm Ilya Reznik. I am the Instructed Machines janitor and the CEO at the same time, and I actually don't drink coffee.

Demetrios [00:00:12]: Welcome back to the one and only MLOps Community podcast, folks. Today I feel honored, because we get to talk with Ilya, and there are not many times that you can chat with a staff-level machine learning engineer from a company like Facebook, or Meta as it's called these days. Today is that day. Ilya recently resigned, I guess you could say, or quit, the way that he likes to say it; he was more blunt about it. He did some consulting afterwards, and now he's spending his time helping others recognize the best way to become a staff machine learning engineer. So we talked a little bit about that. We talked about his recent trip to NeurIPS and everything in between.

Demetrios [00:01:11]: We were talking about why fine tuning is not all it's cracked up to be. And you've said that the TLDR is don't do it.

Ilya Reznik [00:01:22]: Yeah, fine tuning is a great technique and there's a lot you can get out of it. Especially, you know, in the LLM world, if you're trying to get a particular shape of output, it may be worth it, right? Like if what you're really after is JSON, it may be worth fine tuning the model for that, though modern models understand JSON a little bit better. But the problem is, every time you do it, it's a lot of effort, and the first model that comes off the press is usually worse than the original one. And the whole idea of fine tuning is you're taking this huge LLM, or whatever other model, that can perform really well on many different tasks, and you're trying to get it to perform better on your task at the expense of performing worse on all the other tasks. And that works sometimes. But if you can prompt engineer your way out of it, you should do that first.
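
To make the "prompt engineer first" point concrete, here is a minimal sketch of steering an LLM toward JSON with prompting alone, before reaching for fine tuning. It assumes the `openai` Python client; the model name and schema are illustrative placeholders, not anything from the conversation.

```python
# Minimal sketch: try prompt engineering for JSON output before fine tuning.
# Assumes the `openai` Python client; model name and schema are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are an extraction service. Respond with ONLY valid JSON matching "
    '{"product": string, "sentiment": "pos" | "neg" | "neutral"} - no prose.'
)

def extract(review: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": review},
        ],
        response_format={"type": "json_object"},  # constrain output shape
    )
    return json.loads(resp.choices[0].message.content)

print(extract("The blender is loud but it crushes ice perfectly."))
```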

Demetrios [00:02:25]: Yeah. The other piece on it, which is hilarious to me, is that you fine tune it and it's a bit of a crapshoot because you don't know if what comes out the other side is going to be better or worse until it comes out. So you spend all this money and time and energy on fine tuning just to recognize that ah, it's actually not that good.

Ilya Reznik [00:02:48]: Yeah, yeah, you gotta be very careful with the data you fine tune on as well. You can fine tune a large model on a few thousand examples, but you gotta be pretty careful about what those few thousand examples are. People usually put in what they have rather than think through: okay, how imbalanced are my data? What's happening here? So yeah, fine tuning is a great technique. I liken it to training an embedding in the first place. You know how that's really more magic than anything else? Because I've trained so many embeddings where you're just pulling your hair out. You can see it's all gone now. I used to look like you before I started training embedding models. And then at the end you're like, oh, my objective function is wrong. I shouldn't be training for this, I should be training for something else.
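
A quick, hedged illustration of the data check he is describing: before fine tuning, look at how your few thousand examples are actually distributed. The JSONL layout and `label` field below are made-up assumptions, not a fixed standard.

```python
# Sketch: check label balance in a fine-tuning dataset before training.
# The JSONL format and "label" field are illustrative, not a fixed standard.
import json
from collections import Counter

def label_distribution(path: str) -> Counter:
    counts = Counter()
    with open(path) as f:
        for line in f:
            example = json.loads(line)
            counts[example["label"]] += 1
    return counts

dist = label_distribution("finetune_data.jsonl")
total = sum(dist.values())
for label, n in dist.most_common():
    print(f"{label}: {n} ({n / total:.1%})")  # a 95/5 split here is a red flag
```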

Ilya Reznik [00:03:40]: And you switch that and everything just works magically the first time, you know, the first time after months and months of work.

Demetrios [00:03:46]: Yeah. So, yeah, yeah. And you learn that the hard way. It's not like you can read a blog post and be like, oh, this is exactly what's wrong with mine. You have to just kind of trial and error it.

Ilya Reznik [00:03:57]: Yeah, well. And in fact, there is no blog post. Right. All of these problems are one-offs. And just because it worked this time doesn't mean it's going to work on the next one. There are best practices, so you start there, but at the end there's some voodoo. You got to make sure nobody's poking your doll with a needle.

Demetrios [00:04:18]: I mean, I've heard a lot of this fine tune for form. And one thing that I can't really get my creativity around is what are the other reasons you would fine tune for form if it's not JSON output?

Ilya Reznik [00:04:34]: Yeah, well, I mean, there's other structured output, right? Like HTML or markdown or whatever. But also, sometimes it's not just form. Sometimes it's if you're in an area where the language is really particular. Expert language is very different. I worked with a medical company and we had to do that. That's another place, right? Because the distribution of tokens is really different when basically every word that you say is Latin with an occasional English in the middle. So those kinds of scenarios where your output is quite different. People try to do that for knowledge, so if there's a very particular set of facts. But the problem is LLMs don't have knowledge. They don't store facts, they store probabilities.

Ilya Reznik [00:05:28]: And so like, in that case, I don't think it's useful at all to fine tune. I haven't found it useful yet.

Demetrios [00:05:35]: Yeah, that's where RAG will get you a lot further.

Ilya Reznik [00:05:41]: Yeah. And even RAG is no panacea. What RAG does is put the fact a lot closer in context. But whether it gets used depends on how strong the prior was for the LLM. Right. And if the prior was really strong, it still won't pick it up. So what people call hallucinations are not hallucinations. They're an artifact of the way the model works.
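
For readers who want the mechanics, here is a minimal, hedged sketch of the retrieval step he's describing: embed the documents, pull the ones closest to the query, and place them in the prompt so the fact sits near the question. The `embed` function is a toy stand-in for a real sentence-embedding model, and the corpus is invented.

```python
# Sketch: the core RAG move - retrieve relevant text and place it in context.
# `embed` is a stand-in for any sentence-embedding model; corpus is toy data.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: hash words into a fixed-size bag-of-words vector.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

corpus = [
    "Our refund window is 45 days, not the industry-standard 30.",
    "Support is available Monday through Friday, 9am to 5pm.",
    "Enterprise plans include a dedicated account manager.",
]
doc_vecs = np.stack([embed(d) for d in corpus])

def retrieve(query: str, k: int = 2) -> list[str]:
    sims = doc_vecs @ embed(query)          # cosine similarity (unit vectors)
    return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

question = "How long do customers have to request a refund?"
context = "\n".join(retrieve(question))
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt then goes to the LLM
```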

Ilya Reznik [00:06:07]: And so the term hallucination assumes that, like, oh, you're just out in left field. You're not. This is a reasonable prediction based on the distribution of training data. And so I think LLMs are extremely useful. I think they will be with us for a long time. I do not think that they're going to be the only technique going forward.

Demetrios [00:06:27]: And what do you mean by that?

Ilya Reznik [00:06:29]: I think there will be... So, on the non-ML side, right, you have databases, but nobody is like, oh, you know, MongoDB solves this problem 100%. No, Mongo is part of the solution that's there, that we rely on, and it's useful. Well, it depends on who you ask. Some of my friends are like, don't ever use Mongo. But, you know, you need to build a solution around it. And an LLM is similar.

Ilya Reznik [00:06:57]: Like you need to build some knowledge retrieval around it. Is it RAG? I don't know. Maybe it's a graph. And there's a lot of talk. I know you want to talk about this, so I don't know if I should be moving there yet, but I was just at NeurIPS last week, right? Yep. And there's a lot of talk about how we ran out of data and what we ought to do is reinforcement learning. And people are like, wait a minute, RLHF? And all the reinforcement learning experts are like, hahaha, that's not reinforcement learning.

Ilya Reznik [00:07:27]: Not at all. Reinforcement learning is one of those cool things that's always like the next big thing, but never has been the big thing, you know? Yeah.

Demetrios [00:07:35]: Like agents.

Ilya Reznik [00:07:36]: So. So I think there will be some reinforcement learning in that system as well. There will be some RAG in that system as well. There will be some other techniques around the LLM that make it actually useful.

Demetrios [00:07:52]: It's interesting to think about, like you're talking about: okay, databases are there, but then you have different flavors of databases, whether it's your Mongos or your Cockroach or Oracle or whatever. And you as the architect of the system get to decide what your trade-offs are and what you're really trying to optimize for. And so maybe an LLM is the center of that, or maybe it's not. And maybe it's graph RAG, maybe it is a fine tuned LLM, or a lot of fine tuned small language models or something like that that we will see. And it's cool to think about that, as you get the flexibility to choose. One thing that I do feel, though, is that you have some standard patterns that are coming up. And if you look at it like trails in the woods, or trails to the top of the mountain, you definitely see there are standard design patterns that folks are kind of taking, and one of them is not the fine tuning route. And so I think that that's pretty clear as to.

Demetrios [00:09:00]: All right, if we're going to fine tune, we better have a really good reason and we better really need to do it because everything else up until this point has failed us. So it's like fine tuning is our last option.

Ilya Reznik [00:09:13]: Yeah, fine tuning is expensive and, even when you're super successful, it gives you a limited benefit. And so sometimes that's what you need. I absolutely agree with you. It's all trade-offs. Right? Sometimes that last 2-3% is exactly what you needed. But it's not, you know... if the model is really bad at math, fine tuning is not magically going to make it better. But if the model is like, oh, you keep giving me really informal things when I ask you for a formal letter, or, what I really care about is LaTeX output and what you keep giving me is markdown.

Ilya Reznik [00:09:53]: Like, yeah, that's a great place for it. And you could probably get that kind of improvement from fine tuning. But it's not the end-all be-all. About the best practices, I do want to caution that you are on shifting sand a little bit. We're all on shifting sand a little bit. So best practices that emerge today, in two years we might be like, can you believe we used to do that?

Demetrios [00:10:19]: That's such a great point. I mean, even just with RAG, how now people are like, yeah, what we did a year ago is called naive RAG, because we were naive back then. Now there's this advanced RAG, or you've got to do agentic RAG. And even the, what is it, the ReAct agent architecture is now not even seen as really useful, because there are much better ways to do it. So it's true that this is going to be shifting very quickly. It's almost like month by month you're seeing new and better ways to do things. And maybe they seem new, you get a lot of energy and attention around them, and it feels like, wow, this is much better. And then after the community has played with it a bit, they recognize that it's not actually that valuable. So you are.

Demetrios [00:11:08]: Yeah, shifting sand is a great way of putting that, because you are not super clear on what is signal and what is noise.

Ilya Reznik [00:11:17]: Yeah, yeah. And you know, it's evolving. There's a lot of money going into this. Obviously there's a lot of benefit to getting this right. But it's not the kind of thing where you can learn everything now and then spend the next 20-year career on what you've learned. It's the kind of thing where you learn stuff now, and then you learn stuff for the next 20 years.

Demetrios [00:11:40]: Yeah, totally. Well, speaking of which, I mean, that's why the conference you were at last week, NeurIPS, is so cool, right? Because it brings all of the newest stuff out into the open. What were some of your favorite takeaways? Besides not being able to order an Uber, like you mentioned before we hit.

Ilya Reznik [00:12:02]: Record. I actually used buses most of all, more than Ubers, there, because Vancouver is a very, uh, good public transportation city. I don't know if people from there would say that, but where I stayed it was definitely really useful. Um, but yeah. NeurIPS is terrific. NeurIPS is the flagship conference in ML, has been for decades. The first NeurIPS came out the year that I was born, so it's been around for a long time. And it started out as a very academic conference between neurobiologists and machine learning people, when they were like, hey, we can learn stuff from each other.

Ilya Reznik [00:12:39]: And the main takeaway from this, really, is that I didn't see that many truly academic papers and posters and presentations. The majority are applied right now. And there's a huge push into: okay, we have Transformer models, how do we use them? How do we actually make them useful? Which is not usually the focus of NeurIPS. The focus of NeurIPS is usually, let's look for a new architecture, let's go off. And so there's definitely convergence on Transformers. And you see this from different industries. When I was coming up in this, it was like, if you did computer vision, you didn't do language. I was the weird one out where I was like, no, I can do either. But now everybody's working on the same model.

Ilya Reznik [00:13:26]: So one invited speaker put it really well. They were like, we used to have a lot of models, and then we kind of converged toward the transformer architecture, but now we got to diverge back into the applications. And how does this apply to everything? So that's probably my main takeaway is that there's a lot of thinking that's happening about the system that we talked about. Like, what does that have to have and how does that work? You know, like, and we are way more advanced than we used to be a few years back. We're talking about, like, left to right LLMs versus right to left LLMs, and, like, they do really well in different kinds of things. Right? Like, the left to right is English. There are, you know, there are languages that are read the other way. So.

Ilya Reznik [00:14:10]: Yes, but. But in English, the left to right does really well at language. The right to left actually does math a lot better. We're talking about not every token is as important as others. So some tokens are fuzzy and maybe you should be treating them as such. We're talking about reinforcement learning. There's a lot of excitement about using reinforcement learning as we're running out of data. Um, or it feels like we're running out of data.

Ilya Reznik [00:14:34]: I'm not even sure I buy that argument that we're running out of data. You know, Ilya Sutskever talked about that, and I'm like... his argument was that we only have one Internet. And I was like, dude, we're uploading a new Internet, like, every two weeks. This podcast did not exist. Right, exactly. So, like, now it does. But.

Ilya Reznik [00:14:56]: But the. There are rumblings on the side and the hallway track. This didn't make it into the main conference, but there are some rumblings about curriculum learning, which is a thing we kind of gave up on entirely. And what is that? Yeah, so. So first of all, the way that we learned today, it's like if, you know, when you were little, your mother came to you and said, Here's 80% of everything I know shuffled randomly. Now I'm going to test you on the other 20%. Right? Like, that's just not what happens with kids. What we do is we tell them, here's a sound, here's a word, here's a sentence, and they progress through that in a supervised manner, like a Teacher tells you this is the first thing you need to learn, this is the second thing.

Ilya Reznik [00:15:42]: But that's not what we do with models. What we do with models is, like, shovel in all the soup and say: you go figure it out. And so curriculum learning is about how we create that step by step, where a model can learn something basic first and then start building on top of that. And it involves a lot of data curation. It's very time-expensive, but it might be worth it. It might get us better outcomes. And so we're starting to talk about that again. That hasn't been a conversation for the last 20 years, honestly.

Ilya Reznik [00:16:17]: But it's coming up again and just.

Demetrios [00:16:20]: Because it's more structured, it will be more efficient. Is that the idea?

Ilya Reznik [00:16:25]: Well, the idea is that it respects the concepts in the real world. It respects that addition and subtraction are prerequisite to multiplication. And so the idea is that you can be quite a bit more data-efficient, and you can also start really understanding concepts and symbols that represent the real world. And then you can start avoiding things like hallucinations, because you're not reliant entirely on statistics. Those statistics came step by step, and you know that the early steps are fundamental. It's problematic, though, because it's a lot harder.
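
A minimal sketch of what curriculum learning can look like in code, assuming a PyTorch-style loop: score examples by some difficulty heuristic and train in stages from easy to hard. The model, data, and difficulty measure are all illustrative placeholders.

```python
# Sketch: curriculum learning - feed the model easy examples before hard ones.
# Assumes PyTorch; the difficulty heuristic and model are illustrative only.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def difficulty(x: torch.Tensor) -> float:
    # Placeholder heuristic: treat higher-variance inputs as "harder".
    return x.var().item()

# Toy data: 1,000 examples, 16 features, binary labels.
X, y = torch.randn(1000, 16), torch.randint(0, 2, (1000,)).float()
order = sorted(range(len(X)), key=lambda i: difficulty(X[i]))  # easy -> hard

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Three stages: train on the easiest third, then two thirds, then everything.
for stage in (len(X) // 3, 2 * len(X) // 3, len(X)):
    idx = torch.tensor(order[:stage])
    loader = DataLoader(TensorDataset(X[idx], y[idx]), batch_size=64, shuffle=True)
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb).squeeze(-1), yb)
        loss.backward()
        opt.step()
    print(f"stage with {stage} examples done, last loss {loss.item():.3f}")
```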

Ilya Reznik [00:17:08]: But also, you know, Neurips was always about being inspired by biology and this is very much inspired by biology. So yeah, it's an interesting concept. I don't know if it'll ever gain traction. Lots of things we talk about in Europe never really leave, but. But that was one that struck me. The other thing for me, with my background, so when I was at Twitter, I owned model evaluation, offline model evaluation for all of Twitter. And the. The interesting thing is there's been a lot of talks on evals and how bad they are.

Demetrios [00:17:42]: And even at NeurIPS, there were still talks.

Ilya Reznik [00:17:45]: Even at NeurIPS, I feel like the.

Demetrios [00:17:47]: Only talks I saw this year were rag talks and eval talks. And it was kind of, if you boil them down, it was like, rag is difficult and you gotta watch out for where it fails. And here's all the way this ways that it fails. And maybe here's some tricks you can do. And then eval talks were like, evals suck. We don't have any good. Like these leaderboards are steering you astray. It's all marketing.

Demetrios [00:18:10]: And don't listen to anybody who says that they have a SOTA model.

Ilya Reznik [00:18:13]: Yeah, yeah. So there was a talk that I went away from with my mind blown, and it was about something else, but this is the line that blew my mind. The speaker was like, yeah, we have this benchmark and we got 95% on it, or 90-something, I don't know, some high number. And the model can't do anything real in the real world. And I'm like, how? What is the disconnect that you can get 95% on your benchmark and still not be able to perform any useful work? Clearly your benchmark is not measuring what you think it is.

Demetrios [00:18:43]: Yeah, it's not valuable.

Ilya Reznik [00:18:45]: And then, like, you add... So this is like, evals are a mess early on. And we talked about LLM-as-a-judge a few years ago. I was fortunate enough to work with awesome people at Arize on explaining this and putting that concept together. But that's not even a panacea. And then you go into the agentic world, and then you go into the multi-agent world. How do you even evaluate a multi-agent system? The amount of complexity... And, you know, some of this is understandable, right? I think Ilya, in his talk, talked about how, as the intelligence grows, it's understandable that you will have a harder and harder time evaluating it, which is cool. But my job is to make sure I can evaluate it.
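
For context, the LLM-as-a-judge pattern he mentions usually looks something like this minimal sketch: ask a strong model to grade another model's answer against a rubric. It assumes the `openai` client; the rubric, scale, and judge model name are placeholders, not anything from the conversation.

```python
# Sketch: LLM-as-a-judge - ask a strong model to grade another model's answer.
# Assumes the `openai` client; rubric, scale, and model name are illustrative.
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """Rate the ANSWER to the QUESTION on a 1-5 scale for factual
accuracy and completeness. Respond with ONLY JSON: {{"score": int, "reason": str}}

QUESTION: {question}
ANSWER: {answer}"""

def judge(question: str, answer: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(question=question, answer=answer),
        }],
        response_format={"type": "json_object"},
        temperature=0,  # keep grading as deterministic as the API allows
    )
    return json.loads(resp.choices[0].message.content)

print(judge("What year was the transformer paper published?", "2017"))
```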

Ilya Reznik [00:19:39]: And so I don't know how to do that, honestly. There are a lot of ideas, there are a lot of concepts, but I don't know that we've converged yet. There was a talk by D. Sculley, the CEO of Kaggle.

Demetrios [00:19:52]: Love that guy.

Ilya Reznik [00:19:53]: I love.

Demetrios [00:19:54]: And for anybody that doesn't know, he also wrote the most incredible paper when it comes to MLOps, in like 2016, which is the Hidden Technical Debt in Machine Learning Systems paper. Everyone's probably seen it if you've ever seen a talk on MLOps, because it has that diagram of all these boxes of things you need to do to put machine learning into production. And the model box is just a small box out of all the other stuff that you gotta think about when you're productionizing machine learning. But anyway, I digress. Keep going.

Ilya Reznik [00:20:30]: No, that's terrific. Yeah. D is amazing and I think the world of him. But his talk basically made me paranoid. Right? Like, I was already paranoid, but he was like: at Kaggle, their main problem is that people find signals that aren't actual signals. They leaked into the data somehow. Somebody put all the positive examples in one folder and all the negative examples in another folder, and your model is not predicting anything real. It's just predicting what folder it was in. Right.

Ilya Reznik [00:21:03]: Which is easy, because you have the path. And his point was that researchers should look at it that same way too. Not that you're intentionally going to hack your metrics, but you're unintentionally going to hack yourself. And so he talked about: okay, so what can we even do? The problem is, when your data is Internet scale, everything is already in there, right? Everything that ever hit the Internet is leaked. So you can't hold out that data anymore. And so he talked about new benchmarks that are based around some sort of separation. Either a physical separation, because it's not on the Internet yet. So he talked about, for some math benchmark, having mathematicians, like top mathematicians in the world, go into the woods, sit in a room by themselves, come up with problems that have never been seen before, and then come back and evaluate models on those. And then he talked about time separation.

Ilya Reznik [00:22:04]: Right, that's another one. So if you're trying to predict, I don't know, the stock market or something, you can't fully evaluate your model until six months later. And unfortunately, that's not how it works with our stock market predictions; all of those models are going to be irrelevant tomorrow, so we need to retrain. But his point is that we need to be a little bit more patient and spend a little bit more time generating data for benchmarks. Because, you know, we do our best, but all the benchmarks for LLMs... all the data for them is leaked. All the data for them is on the Internet.

Ilya Reznik [00:22:41]: And so this is how you get to a scenario where you're 95% accurate on your benchmark, and worthless.
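
A minimal sketch of the time-separation idea Sculley raised: split evaluation data strictly by creation date, so nothing after the model's training cutoff can have leaked in. The record format and cutoff date below are invented for illustration.

```python
# Sketch: time-separated evaluation - test only on data created after a cutoff
# date, so it cannot have leaked into the training corpus. Toy records below.
from datetime import date

records = [
    {"text": "problem A", "label": 1, "created": date(2023, 5, 1)},
    {"text": "problem B", "label": 0, "created": date(2024, 8, 12)},
    {"text": "problem C", "label": 1, "created": date(2025, 1, 3)},
]

TRAINING_DATA_CUTOFF = date(2024, 6, 1)  # when the model's corpus was frozen

train_pool = [r for r in records if r["created"] < TRAINING_DATA_CUTOFF]
eval_set = [r for r in records if r["created"] >= TRAINING_DATA_CUTOFF]

print(f"{len(train_pool)} records eligible for training, "
      f"{len(eval_set)} held out by time separation")
```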

Demetrios [00:22:49]: Yeah. That's why I appreciate these leaderboards, or these benchmarks, like ProLLM. That's from the Prosus team. They put it together, and all of the actual data that is created for these benchmarks is closed source, so theoretically the models haven't seen it. I can imagine that is only good for the first couple of go-arounds, and then the models have seen them. So the Prosus team, I think, is constantly updating that data and constantly creating new ones. And so they have different types. You know, I know they have one which is from Stack Overflow, and that is public data.

Demetrios [00:23:34]: So I can imagine that most of the models have been trained on that in some way, shape, or form. Which is funny, because a lot of these models still do really badly on it. But take from that what you will. I don't really know how to understand it well.

Ilya Reznik [00:23:53]: And just to be clear, I know lots of folks who train the large language models that we use every day. And I think all of them are really decent, and they try really hard to avoid data that they know they're going to be tested on. But it's not an easy thing to catch, because you can't look at every data point that goes into this. And when you're just like, just give me the Internet... like, yeah, how do you even change that? I don't know. Then there's a lot of stuff that leaks in. And so I don't think it's intentional, you know, hackery. I think it just happens by virtue of the way that we train these.

Demetrios [00:24:32]: You're going to love this. I saw some kind of a nonprofit whose whole thing was: let's make the Internet mediocre again, so that this AI is not better than us at doing things, and we don't get, like, unreasonable expectations put on us because AI can help us do it. And so what they've been doing is going around and buying up domains in the Common Crawl. And instead of doing nefarious things like I've heard of other people trying to do, where they will put poisonous data on these Common Crawl websites, they're just putting a bunch of really mediocre content that isn't good at all on all of this Common Crawl. So it's letting the.

Demetrios [00:25:20]: Or what they're trying to do is flood the LLM training data. But at the end of the day, you have to buy. You have to have so many of these and put so much content out there to even make like a little drop in the ocean. Right.

Ilya Reznik [00:25:34]: I think the Internet does a pretty good job of making itself mediocre without extra help. Without those guys.

Demetrios [00:25:40]: That's a great point. Yeah. I mean, so the funny thing there is, then... I know you've spent a lot of time training models and thinking about that. How do you go about that data issue, and the data quality issue?

Ilya Reznik [00:25:58]: I mean, yeah, it's hard. And I'm not sure that I have a silver bullet. Right. Oftentimes, a lot of the things that I've trained are pre-LLM. I worked in this industry 10 years before LLMs ever hit, so most of my experience is still pre-LLM. And even since LLMs came out, you know, I was training Meta's ad prediction model, and that's not trained on the same data. You can be a lot more careful there. But I don't know that there are great ways to do this other than: you do it.

Ilya Reznik [00:26:35]: And then you look at your evaluations and you trust them, which we just talked about how you can't. And they guide you into the segments where you're maybe underperforming, and then you look at what's happening with those segments, and then you train again. Right. The beauty, and kind of the downside, of working in ML is that you can always try again. And you should always go in with an idea of: I'm probably not going to think through every single contingency. I'm probably not going to think through every single data issue. When you gain enough experience, you've been burnt by enough things that you start checking them. But even for me, 10 years in, I don't think that checklist is all that inclusive of everything that could happen. And so it's an iterative process, and it's a process that you learn from, and you do a little bit better next time. Hopefully the person next to you has been burnt on a couple of different things.

Ilya Reznik [00:27:35]: It really does help to work with different people. And, you know, you gain a lot of knowledge that you don't pay for by bringing down production, which I have done. I've done that. But battle scars, the battle scars. That's good knowledge. It's even better knowledge when somebody next to you is like, hey, I've brought down production this way before, let's not do it again. And so there's a lot of iteration. Back when I started in the industry, we used to do a lot of feature engineering, but the datasets were a lot smaller. And so on a small dataset, you can start understanding it really well.

Ilya Reznik [00:28:13]: But how do you even go about understanding language data? You're going to have an embedding and you're going to visualize it. But guess what, how are you going to train that model? You're going to train an embedding. And so when your tool for understanding the data and the tool that you're training are essentially one and the same, you just try it. You just try it and see what comes out of it. And I know that that's pretty wasteful of the compute. I know that there are environmental consequences. I don't tend to think of them when I train. I probably should.

Ilya Reznik [00:28:47]: But, you know, you're trying to be compute-efficient just because it's expensive, not because it's environmentally friendly. But even so, you kind of go into it with an expectation that you're not going to be right the first time you train it. You're going to need to do some work on it. And ideally, you know all of this. But you start with a small model. You start with overfitting on a small dataset, to see even directionally whether you're right. You do early stopping to make sure that you're not training longer than you absolutely have to. But even with all those hacks, the way that we do machine learning today is just a lot of data and a lot of compute, and you do some thinking, but the thinking is fairly limited compared to the amount of data and compute.
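
The two sanity checks he names, overfitting a tiny batch and early stopping, sketched in PyTorch below; the model and data are toy stand-ins, not his actual setup.

```python
# Sketch: two cheap training sanity checks - overfit one small batch, and
# early-stop on validation loss. PyTorch; model and data are toy stand-ins.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Check 1: a healthy setup should drive loss on ONE batch to nearly zero.
xb, yb = torch.randn(8, 16), torch.randn(8, 1)
for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()
assert loss.item() < 1e-2, "can't overfit one batch - something is broken"

# Check 2: early stopping - halt when validation loss stops improving.
x_val, y_val = torch.randn(64, 16), torch.randn(64, 1)
best, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(1000):
    opt.zero_grad()
    loss_fn(model(torch.randn(64, 16)), torch.randn(64, 1)).backward()
    opt.step()
    with torch.no_grad():
        val = loss_fn(model(x_val), y_val).item()
    if val < best - 1e-4:
        best, bad_epochs = val, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:
        print(f"early stop at epoch {epoch}, best val loss {best:.3f}")
        break
```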

Demetrios [00:29:37]: So you overestimate me when you say things like you know all this already.

Ilya Reznik [00:29:43]: Yeah, well, so what you do is you just nod and then you look way smarter than. Yeah, yeah, I know that type.

Demetrios [00:29:50]: I just doubted myself. I could have kept that lie going.

Ilya Reznik [00:29:54]: Exactly.

Demetrios [00:29:55]: What am I thinking?

Ilya Reznik [00:29:56]: That's what I do. This is how I look like I've done a lot of different things. It's like, oh, you've, I don't know, you've trained multimodal models. By the way, whoever named that... please. When you're talking about it, especially to your VP, you sound like an idiot. Multimodal models.

Demetrios [00:30:17]: The naming conventions in AI, I tend to enjoy. Like hallucinations. Or my favorite is catastrophic forgetting. That just sounds like, wow, that is one of the worst things that could ever happen to somebody. Catastrophic forgetting. And then you're like, oh, that's just when a model forgets.

Ilya Reznik [00:30:40]: Yeah.

Demetrios [00:30:40]: And it doesn't ever remember it. When you fine tune it and you're like, geez, no need to be so dramatic about it.

Ilya Reznik [00:30:48]: Yeah, gotta raise money. Right. Like that. That's what all of those namings are there for, like hallucinations.

Demetrios [00:30:55]: Right.

Ilya Reznik [00:30:55]: Like, it makes it sound like it's a human. So it's like, oh, we're almost there. But AGI... So I've been in the industry for 10 years, and AGI has been 20 years away. And people are now starting to be like, oh, maybe it's six years. It's still 20 years away.

Demetrios [00:31:10]: Yeah, exactly. I'm not super worried about that one, but maybe I'll be blindsided by it, you know, coming out of nowhere, and everybody will be like, I told you it was coming. But I don't think it's really coming. My favorite thing that I saw on this was in the State of AI Report, where Nathan wrote that all these companies that were so keen on AGI and regulation around AI, because AGI was right around the corner, have sobered up, and now they're trying to create enterprise tiers to go out and make money. So you're like, okay, so you changed your tune a little bit there, huh? AGI, we don't have to worry about that.

Ilya Reznik [00:31:51]: It's a ton of free marketing. I mean, I'm a skeptic here, right? But it's a ton of free marketing if you can come out and say: robots are going to take your job, and my company is doing it. If you look at other voices in this, like Gary Marcus, who no longer has a company that does this, it's like: no, guys, I don't know what you're all talking about.

Demetrios [00:32:14]: Yeah, you are way up.

Ilya Reznik [00:32:16]: But if you profit from it, it's different. And so, yeah, I actually don't love the Silicon Valley culture of, like, go sell AGI before you can, like, tie your shoelaces. I think it's more hurtful to us than it is useful. And there is also a fear. I do think we will reach really high levels of intelligence eventually. Levels of intelligence that I'm worried about.

Ilya Reznik [00:32:46]: I don't think it'll happen in my lifetime, but I think we will reach there. And in some ways, how do you define intelligence? Right? Like, maybe these models... I can't tell you every Roman emperor, but ChatGPT can, in under a minute. But I worry that when we get there, and when people like me start sounding the alarm, it's going to be like: you guys were telling us this for, like, a century. Why should I care now? It's never come to fruition. The boy who cried wolf: no, this time is very different. And it's like, you said this time is different. The last time, I remember, in 2020, you said this time is different. And so at some point we do have to worry about it. But, you know, maybe we'll just poison the Internet by then.

Demetrios [00:33:28]: Yeah. Remember all those congressional hearings we had, and nothing came out of it, and now you think it's different? What's going on here? So. Man, that's a very good point. You don't want to continuously be ringing the alarm bells, because then it's the boy who cried wolf.

Ilya Reznik [00:33:45]: Yeah.

Demetrios [00:33:46]: And the boy who cried AGI... the porridge. Yeah, the... Oh, man, I had a joke that I can't think of quick enough, but whatever.

Ilya Reznik [00:33:59]: Can you edit it? I think you can edit it after the fact.

Demetrios [00:34:01]: I can edit it, yeah. So I can give a big pause, formulate the joke in my mind and.

Ilya Reznik [00:34:06]: Then I can give you some video of me just sitting here like.

Demetrios [00:34:09]: Yeah. And then splice B-roll in between. Again, we are artificially making me seem like I am much smarter than I am on this podcast. That's why I brought you on here, really. You have great ideas, you have great experience, but it's to help me look good. Really.

Ilya Reznik [00:34:29]: So that's why people bring me on their podcasts: to look good by comparison. Right. Like, just next to Ilya, I look like a genius.

Demetrios [00:34:39]: Yeah. Well, we should talk about a lot of this stuff you're doing around ML careers, for people that are in the AI world. I think that you're doing a huge service to people because of the experience that you've had. I know that when we first talked, probably back in 2020, you were still at Twitter. Since then you've gone on to work at Meta and then the health tech company. But the interesting thing: when you were at Twitter, I think you had just gotten the job, and you were like, yeah, I'm thinking about trying to take that next step into becoming a staff machine learning engineer. And for me, I thought that was fascinating, just because it was like, oh, okay, and so what do you need to do there? And you were like, well, I think I need to give more talks and be out in the community more. Turns out that wasn't true.

Ilya Reznik [00:35:34]: That.

Demetrios [00:35:35]: Or was it directionally correct? Because first of all, you didn't give any talks. I kept bugging you to give a talk in the community and then you were like, yeah, yeah, later, later, later. And you ended up getting staff engineer anyway. So what happened?

Ilya Reznik [00:35:47]: Well, now I can give talks as a staff engineer. No, I think different companies do it differently. Right. And Twitter in particular, they put a lot of emphasis on, like, open sourcing stuff and publishing within Twitter. Unfortunately, you know... I'm sorry, it might be my fault Twitter died while I was there. I think it has something to do with the change in ownership, but.

Demetrios [00:36:17]: No, but you were there when Elon took over.

Ilya Reznik [00:36:19]: No, so I left a little bit before. I was there for the entire drama, and I left like a month before, because it was pretty clear where it was going. So, yeah, Twitter was a terrific place to work, honestly, and I don't think it is today. But different companies prize different things, right? And so there's more than one way to staff. And at staff, you really start seeing the archetypes, and different people get staff for different reasons. Some people get to staff because they're really broad. Like, you always kind of have to be T-shaped rather than broad; you have to be broad but really deep in something. But some people, like me, get to staff because we can understand a lot of different things across the ML model life cycle.

Ilya Reznik [00:37:13]: And other people get there because they're an ultra specialist on text embeddings after it rains in, you know, in some.

Demetrios [00:37:24]: Sort of Venn diagram, it's like there's a lot of different pieces that they're just in the center of.

Ilya Reznik [00:37:29]: And you're like, wow, yeah. And staff engineers are leaders. And so a lot of what you need to demonstrate is that you can produce at a high level over time. And so people get frustrated. They're like, oh, I've done this for a year, when am I going to get staff? And I'm like, you know, your time horizon on staff is like three years for anything you plan. And so just because you could perform at that level for a year... like, I'm not quite convinced yet sometimes. Wow. So.

Ilya Reznik [00:37:59]: So some of it was just timing, right, for me that I, I still needed to wait. And honestly, like, I didn't get staff at Twitter, I got staff when I went over to Meta and part of it was, you know, my interview performance and part of it was all the experiences that I've had before. But I know that like Google, for example, does prize talks and being visible in the community in some ways for their staff and especially senior staff. Everybody I know who's senior staff at Google has done at least some conferences, said at least something that's worthwhile. And so, yeah, I mean, I think there's. The problem with this is that there's more than one path. And in ML in particular, I think there are a lot of folks who do a really good job, like in ML ops community talking about ML ops, they're, there's a ton of good material for software engineers on how to like advance through your career, but there's not a ton of material for mles. And what material there is usually comes from really well meaning folks, but like folks who haven't seen the higher ends of the ladder and it changes by the time you're here.

Ilya Reznik [00:39:14]: But one of the things I'm working on for next year is I want to invite a lot of folks who are more senior than me and have a podcast with them, specifically about their careers: how did they get to where they are? And kind of ask them really targeted questions. Like, I'm trying to get a former director from Meta that I worked with to come and talk about what it's like to be a director. What are the challenges that happen there that you don't see until you get to that level? Right. And so, a lot of folks who are a lot smarter about it than me, to try to understand that area a lot better. In the meantime, I do have a small YouTube channel, which right now is about helping you get the next ML job. I really wanted it to be broader, and so I did a video on mentorship, and it tanked. Nobody watched it.

Ilya Reznik [00:40:11]: To this day it has like 200 views, and my other videos have like 14,000. Right. And what I understood was that nobody is talking about the ML career path at this level. And so people don't think about it until they have to look for a job.

Demetrios [00:40:27]: Yeah.

Ilya Reznik [00:40:28]: And then they have to look for a job and they're like, where do I go? And so I started the channel with content a little bit more heavily leaned toward: how do you interview, how do you get the next job? And it's kind of, in my mind, understood that at some point it'll be a bait and switch, where I'm going to say: okay, now that you've got the job, here's how you actually do it, and here's how you get promoted, and here's how you make sure you don't ruin the world while you're doing this. So that's kind of the direction of it. But hopefully it'll be gradual enough that people won't unsubscribe.

Demetrios [00:41:10]: Yeah, no, they'll go with you on the journey, because they probably are in the job and going, oh, now I gotta find a new YouTuber to watch videos of who will tell me how to do this. I might as well just listen to you.

Ilya Reznik [00:41:21]: Just keep watching me. You don't need a new YouTuber. But there are things about our profession that are different. For example, the shifting sands that we talked about, right? So my master's thesis was about computer vision. I started it in 2011, I finished it in 2013, and it was, like, Haar cascades, right? Because nobody had heard of a CNN by then. And by the time I got it out in 2013, it was like, dude, you should have done it with CNNs. And I'm like, I know.

Ilya Reznik [00:41:53]: I'm. I'm not. I'm not going to do another two years, though. Like, I'm done. But. But I know I should have done it with CNNs. And so it's the only field I know of where, like, your graduate thesis can be irrelevant by the time you're done writing it. And.

Ilya Reznik [00:42:09]: And, like, the new kid graduating tomorrow probably knows more about Transformer models than I do, with 10 years of experience in the field, because I have to maintain the models that are there today, and they had all the time in the world to go learn about the latest and greatest, starting from scratch. And so how do you even have a career in that, right? How do you stay multiple decades doing that kind of thing, where Transformers weren't a thing before 2017? Nobody really understood that this was useful. Like, a couple of people did, but they were crazy. And it wasn't until you could bring a lot of scale to this that you even see this.

Ilya Reznik [00:42:51]: Because I saw early papers on the attention mechanism, and I was like, yeah, it's interesting. We have LSTMs. I'm not sure how this is better than an LSTM. I literally said that. So please don't take, you know, future advice from me. But I also sold 500 bitcoins at $1 each. No.

Ilya Reznik [00:43:12]: Yeah.

Demetrios [00:43:12]: Oh, man. Well, I'm glad it still worked out for you. Yeah, you've had a pretty successful career. You didn't need those.

Ilya Reznik [00:43:22]: I did.

Demetrios [00:43:25]: Stupid taxes, they call it. You know, we all got to do that kind of stuff, and you're just continuing to kill it. So I love seeing that. And I love that what you're talking about, very few people in the world actually talk about. And I think you mentioned it to me last time we chatted, probably: the folks that are in these positions, they're so busy doing their jobs, they usually can't talk about what it means to be in these positions and how you get to these positions. Like you said, everybody's journey into them is different.

Demetrios [00:44:04]: And then the other thing is, nine times out of ten, when you're a staff MLE, you're at a large company, and you've probably got some or a lot of NDAs that you signed. So it's not like you can just go out there and preach from the rooftops.

Ilya Reznik [00:44:20]: Yeah, yeah. And I think, I don't know if it's intentional gatekeeping, but there is a little bit of gatekeeping happening as well, which is weird to me. Right. Because when I came up in this industry, part of the fun was that everybody makes everything open source, and you can go in and be like, oh, PyTorch, I can understand that. And Meta still operates similarly. That's why Llama is there. And it's like, okay, we still can get our benefit, but let's make sure that people can use these kinds of techniques and models and whatever.

Ilya Reznik [00:44:54]: But a lot of other companies have started being a lot more closed off, you know, notably OpenAI.

Demetrios [00:45:02]: I like the emphasis on the "open" part there.

Ilya Reznik [00:45:08]: I think both parts of their name are a lie at this point.

Demetrios [00:45:10]: Yeah.

Ilya Reznik [00:45:13]: But you know, closed ML is what they should be called.

Demetrios [00:45:19]: That should be their real name. The other piece that I think I wanted to talk about is: what do you deal with as a staff engineer, like a staff MLE, and how is that different? You mentioned how big of a gap it is from, what is it, L5 to L6. That's a really hard gap to jump. And first of all, it's probably worth just clarifying: staff titles are, nine times out of ten, at large companies, right? And, like, tech-forward companies. I don't see a lot of staff titles at either companies with 200 people or, like, enterprises.

Demetrios [00:46:07]: I don't really see like a bank. I don't think I see many staff titles there. So maybe you can demystify that part for me too.

Ilya Reznik [00:46:15]: I think in a bank, if you make that much money, you have to be a VP. I think there are, like, legal reasons why you have to be a VP. So that's why banks have very weird titles. And, you know, there's title inflation going on everywhere, too. So sometimes you look at a small company and you're like, oh, that's a principal engineer, and they would convert at, like, a FAANG-type company to, like, senior. Maybe mid-level. One of my peers at Meta was a VP before, like an actual VP with an org, at a bank.

Ilya Reznik [00:46:52]: And I'm like, we're at the same level. This seems wrong, but that's how it works. So when I talk about staff, I talk about specifically tech companies, right? And specifically bigger tech companies. Not necessarily just FAANG; like Uber, you know, Microsoft, all of those companies as well.

Demetrios [00:47:13]: But yeah, tech first companies.

Ilya Reznik [00:47:14]: Yeah, tech-first companies. I think that's a good way to put it. And I think, again, it depends a lot on the archetype. My archetype is a technical lead. So for me, and don't do what I did, what it was is: by day, you're a technical lead. You're basically helping everybody on your team to succeed. And at one point I had almost 30 people that I was a tech lead for.

Ilya Reznik [00:47:47]: And at those levels, that's closer to, like, a senior staff level. But at Meta, you're never at the level that you're at; you're usually performing at the next level. So everybody is one up from what they tell you. But so, by day you're doing that, and you're making sure that there's the right scope on the team, that the projects are going in the right direction. And it's weird, because you usually have more information. I've dealt with a lot of really sensitive information, like the way that we're approaching a lot of new regulations on privacy and ads. Like, I had more context on that than a lot of people on my team.

Ilya Reznik [00:48:28]: And so I had to guide them without doing things that would be illegal, you know, without telling them what they're actually working towards sometimes. And that takes... that's a full-time job. But in addition to that, at Meta, you're still expected to contribute. And so I had projects that were quite a bit more challenging, but maybe a little bit less time-sensitive. That's the thing that I learned at Adobe, because at Adobe I was working on a really big project and I was a technical lead on it, coordinating like 80 people around it. And I, being young and naive, was like, I'm going to take the biggest part of this and I'm going to implement it. And I became a bottleneck so quickly, because I needed to manage all the pieces and I needed to write the biggest piece. And so, as a staff engineer, once you get burned by this a couple of times, you start taking on pieces that are a little bit on the side. Or if somebody gets really stuck, you're there to unstick them. Oftentimes it's zero-to-one projects: you're like, I'm not sure how this is going to work, so let me take the first stab at it.

Ilya Reznik [00:49:42]: And then when I'm pretty sure that we're in the right direction, I'll hand it off. There are lots of handoffs. At Meta, you onboard onto a new project every couple of months, because you know you need to move it forward. Usually you come in because something is wrong, or, like, nobody knows what they're doing here yet. And so you go in and you move it. And once it starts moving, the idea is that your time is better spent on another project that's not moving right now. So you can hand it off to an E5 and say: I can tell you exactly what needs to happen here in the next couple of months, so you can go get started.

Ilya Reznik [00:50:19]: Uh, but I need to go off to a new project. Um, so there's some amount of stress here, you know. And my advice for folks, honestly: oftentimes people look at the levels, and it's numerical, so it's easy to measure your progress. It's like, oh, I'm a six, I want to be a seven. Right. But I'm like, do you? Do you really? Because at bigger tech companies that have gotten their compensation straight, which not everybody has, but at the companies that have understood what the compensation is, you can make more money as a good E5 than a crappy E6. Because you're.

Demetrios [00:50:55]: Wait, how's that work?

Ilya Reznik [00:50:56]: You're going to get better refreshers, you're going to get more bonus. And so, in the long term, a good E5 at Meta is going to make more than a crappy E6. So if you're going to go to E6 to be a good E6, then yes. But understand that that's significantly more work and significantly more responsibility. And so when people are just like, you know, I need the money or whatever, I'm like: so go ace your job. You will get rewarded for this. You will get a better rating, which will equate to more refreshers, which will equate to a bigger bonus, you know. But yeah, don't do it just for the numbers. Do it because whatever the responsibility of the next level is, is what you actually want to do. Like, for me, you know, I quit my job in tech. I'm currently unemployed... sorry, self-employed. And what I'm doing now with my time, full time, is guiding ML engineers through their careers.

Ilya Reznik [00:51:57]: Like, I do a lot of one-on-ones still, and I'm trying to scale that through the YouTube channel, the podcast, whatever. But that's the kind of work I would do if nobody was paying me, as evidenced by the fact that I'm currently doing it with very few people paying me for it. But so, to me, I had to get there. That was the conversation we had in 2020. I was like, no, you don't understand, this is what I want to do. Right. If that's not what you want to do...

Ilya Reznik [00:52:25]: I met a guy at NeurIPS. Really smart guy, has been in the industry since the 90s, knows the ins and outs, works on Gemma at Google, and he's an E5. And I'm like, do you want to move up? He's like, why? I've got everything I want. What am I going to get as an E6 or L6 at Google that I don't get as an L5? And I'm like, you're absolutely right; that is a very smart way to look at your career, because up to E5 you do have to move up. So E5 is a senior engineer at these companies. And a senior engineer is somebody who can work independently: you can give them a project and say, here are the first couple of things, but you go figure out the rest. And we're trying to work with colleagues that are like that, because otherwise we're going to get burnt out and die. So up to E5, at Meta and Google and all these companies, it is really up or out, right? If you can't get promoted from an E4 to an E5 in a certain amount of time, you're probably not going to stay in that company for very long. I think you get like five halves or something like that.

Ilya Reznik [00:53:34]: So in two and a half years, you've got to figure out how to be an E5. But E5 to E6 is completely optional. E6 to E7 is completely optional. We had two E8s in the Meta Ads org, which is a huge org that generates all the money. We did not have an E9. And by the way, the way that you interview for an E9 is Mark Zuckerberg picks up the phone and calls you and says, hey, will you come work for us? If you say yes, you've cleared the interview, you know, so.

Ilya Reznik [00:54:05]: So that's how you interview for an E9. So just understand that it gets exponentially harder with every level. E5 is the expectation. Everything beyond that is: what do you want to do with your career?

Demetrios [00:54:17]: What can you expect from E6 to E7? Because you mentioned how you're mentoring all these folks, or you're leading bigger projects and you're helping unstick projects, and that's, like, your sweet spot. I didn't understand if that was your specific archetype, or if that's what everyone is expected to do.

Ilya Reznik [00:54:39]: Yeah, that's my specific archetype. Okay. There was another E7 at Meta that I worked with quite closely who basically just owned all of the revenue. And so whenever there was any issue, any outage or whatever, RJ would come in and fix it, or coordinate a team to fix it in real time. And so that's the amount of responsibility, right? All of revenue, no biggie. Just a couple of billion dollars between friends. And we had E7s who were really deep on what we call signal loss, which is... there's a lot of regulation there. There are a lot of things coming out of Apple, for example, right, where they're like, okay, you can't use our data for ads anymore, or whatever. And so when something like that happens, it's probably an E8 that starts that.

Ilya Reznik [00:55:35]: They're like, okay, this is going to impact the entire company. Let's understand what this is going to be, and kind of start setting up initiatives for the next five years. And then E8s delegate to E7s: okay, this is a particular stream of work we need to get done. You go find the teams that you need for this, and you go coordinate them however you want. But it's basically less and less defined, the higher up you go, what you do. Where, like, an E9 just kind of walks around and says: oh, you know what, I bet I can get us 1% more revenue tomorrow if I do that. And I talked to the guy that I really want to have on my podcast, who hopefully will come, about the difference between an E6 and an E9. And I was like, as an E6, I got my teams to increase Meta revenue by 1%. In fact, that was Q2 or Q1 of '23, where our work basically showed up on the earnings report.

Ilya Reznik [00:56:35]: We exceeded by 1%, and my team had increased the revenue by 1%. And I was like, huh, that's interesting, that's us. And his response was: an E9 does that by themselves. So you do that with a team of people; an E9 does that by themselves. And I was like, oh, okay. That is different.

Demetrios [00:56:57]: So yeah, this is all fascinating to me, especially because I've lived and played in startup life for my whole career, and the large enterprise is so foreign, especially these tech-forward enterprises with very sought-after jobs. There are a lot of people that want to be in these jobs, I think mainly because of the earning potential.

Ilya Reznik [00:57:22]: Yeah, the money is incredible.

Demetrios [00:57:24]: Yeah, I don't think anybody makes as...

Ilya Reznik [00:57:25]: ...much money as I made as an E6 at Meta.

Demetrios [00:57:28]: Nobody's like, my life calling is to optimize the privacy ad spend or whatever the hell you were doing when you were there.

Ilya Reznik [00:57:36]: You see that, right? I lasted a little bit over a year. The average tenure is like two years; at Meta it is a little bit higher, but in Meta Ads it's a little bit lower. So yeah, it's a lot of churn and burn. Don't get me wrong, I'm glad that I had that experience, I think it was a good step in my career, and I'm glad that I did it. There are some people who stay there for a long time, but that's not the majority. A lot of us just churn through one of those companies, then go to another big tech company, then come back, you know. But yeah, it's really hard to stay in place.

Ilya Reznik [00:58:25]: But I imagine in startups it's pretty hard too. I've worked with startups a little bit, and it becomes like a different company when it grows. If it fails, then you have the problem of, I don't want to work for a continuously failing company, and if it succeeds, then it becomes a very different company. It goes from like 20 people to 600 people and suddenly everything is different. Yeah, you have HR.

Demetrios [00:58:49]: Yeah, yeah. You have to navigate politics and you have to navigate the company. Whereas before, you could just say, I want to do this, like you were saying with the E9, and you pretty much have carte blanche to do it.

Ilya Reznik [00:59:02]: Yeah, yeah. A lot of distinguished engineers actually do come from early startup roles, like the founding engineer or the CTO of a startup; lots of them make those distinguished positions that way. At Adobe, about the only way that I know to become distinguished was to be acquired, like you were the CTO of the company that was acquired. But they're vanishingly few. I think at Adobe we had 20 when I was there; it's under a hundred at those levels. And you know, I don't know if Guido is even... well, he probably is distinguished. But if you write Python, I'll let you slide, I think that's worth it.

Ilya Reznik [00:59:47]: But it's like, the guy at Amazon is the guy who developed Java or something. You really have to do huge things in order to get to those positions. You don't apply for them; there's no job opening on the Meta website that says, you know, come be a distinguished engineer. And also you don't need a resume. By then you're giving talks; by then there's really only one way forward that does that.

Demetrios [01:00:14]: No, you're getting courted to give talks. It's one of those type deals.

Ilya Reznik [01:00:18]: Also, if you are an E8 or E9 and want to come work at Meta, you basically just have to call them up. You don't have to wait for a position to be open. They will always hire E8s, E9s, anybody who can perform at that level.

Demetrios [01:00:34]: Well, sweet, man. We're going to put all your details, your YouTube channel and all the stuff that you're doing, into the description, so that anyone who is trying to go on this journey can follow along with you and learn a ton from you. The other thing that I want to mention is that you have been super kind in helping us put together this asset on what it takes to go from senior to staff. We're doing that in the community because, like you said, there's not a lot of information out there. So I'm glad that we can work together on that and hopefully have something see the light of day. It's still early days, so you never know if it's actually going to happen.

Ilya Reznik [01:01:18]: But I've got confidence we'll make it happen.

Demetrios [01:01:21]: We'll make it happen. That's what I love. And I also look forward to when you have the podcast with the ex-director that you worked with. Maybe we'll just play it on this stream too, so that the folks who are looking for part two, since you dangled the carrot, will have it too.

Ilya Reznik [01:01:41]: Excellent. Yeah, I'll definitely keep people informed. And I am in the MLOps Community Slack too, so if anybody needs to reach me directly, you can. I sometimes take a couple of days to reply, sorry about that.

Demetrios [01:01:55]: But what happens when you're unemployed? What did you call it? What was the frame? The change of frame?

Ilya Reznik [01:02:01]: Self employed, I guess. Self employed is what I was supposed to say. Right?

Demetrios [01:02:04]: I love that perspective shift. That is a great one. Yeah.

Ilya Reznik [01:02:08]: But I mean, it was a very intentional move. It's not like I had no options; when I was at NeurIPS, there were a bunch of companies who were like, oh yeah, with your experience, we really want you. But I'm like, I want to spend some time, maybe half a year, maybe a year, really focused on trying to help people. And maybe that's what I do for the rest of my time, maybe not. Maybe I do come back. But I do want to see how I can scale the things that I know work, right?

Ilya Reznik [01:02:38]: I've worked with people one-on-one for a long time. I've worked with my teams for a long time. And it was really... I was helping a principal engineer go through the interview process, and he was the first person who scheduled some insane number of sessions with me. Usually people do like three or four and then they basically move on. And he was like, no, no, no, I need to pick your brain about a lot of things. And then when we were done, he was like, wait, why don't you scale this? And I'm like, I don't know how. He's like, well, go start a channel or something.

Ilya Reznik [01:03:09]: And I was like, okay. That's the other thing you learn as staff: when a principal tells you to do something, you don't second-guess them too much. You understand how much smarter they are than you, so you just go do it. So that's where that came from: people have asked me to scale this and I'm trying to, you know, and it's still early days. You'll see if you go check out my YouTube channel that there are some bells that are a little bit loud, and I don't know much about audio editing yet. Demetrios is helping me, so I'll get there.

Ilya Reznik [01:03:42]: But yeah, definitely excited to be helping so many people. My biggest video right now has like 14,000 views, and a lot of them are from unique viewers, so let's say 10,000; I don't remember the exact number. I'm like, when have I ever talked to 10,000 people?

Demetrios [01:04:02]: Like, yeah, you're making an impact.

Ilya Reznik [01:04:04]: Yeah, 10,000 people is a concert. It's a small concert. I'm not Taylor Swift yet, but it's a small concert, but it's a concert. And so it really is a great opportunity in our time to be able to help so many people in such an efficient manner.
