Beyond AGI, Can AI Help Save the Planet?
Patrick is a machine learning engineer and scientist with a deep passion for leveraging artificial intelligence for social good. He currently leads the environmental AI team at the Allen Institute for Artificial Intelligence (AI2). His professional interests extend to enhancing scientific rigor in academia, where he is a strong advocate for the integration of professional software engineering practices to ensure reliability and reproducibility in academic research. Patrick holds a Ph.D. from the Center for Neuroscience at the University of Pittsburgh and the Center for the Neural Basis of Cognition at Carnegie Mellon University, where his research focused on neural plasticity and accelerated learning. He applied this expertise to develop state-of-the-art deep learning models for brain decoding of patient populations at a startup, later acquired by BlackRock. His earlier academic work spanned research on recurrent neural networks, causal inference, and ecology and biodiversity.
AI will play a central role in solving some of our greatest environmental challenges. The technology that we need to solve these problems is in a nascent stage -- we are just getting started. For example, the combination of remote sensing (satellites) and high-performance AI operating at global scale in real time unlocks unprecedented avenues for new intelligence.
MLOps is often overlooked on AI teams, and there is typically a lot of friction in integrating software engineering best practices into the ML/AI workflow. However, performant ML/AI depends on extremely tight feedback loops from the user back to the model that enable high iteration velocity and, ultimately, continual improvement.
We are making progress, but environmental causes need your help. Join us in the fight for sustainability and conservation.
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/
Demetrios [00:00:00]: Hold up.
Demetrios [00:00:00]: Wait a minute.
Demetrios [00:00:01]: We gotta talk real fast because I am so excited about the MLOps Community conference that is happening on June 25 in San Francisco. It is our first in-person conference ever. Honestly, I'm shaking in my boots because it's something that I've wanted to do for ages. We've been doing the online version of this. Hopefully I've gained enough of your trust for you to be able to say that I know when this guy has a conference, it's going to be quality. Funny enough, we are doing it. The whole theme is about AI quality. I teamed up with my buddy Mo at Kolena, who knows a thing or two about AI quality, and we are going to have some of the most impressive speakers that you could think of.
Demetrios [00:00:46]: I'm not going to list them all here because it would probably take the next two to five minutes, but just know we've got the CTO of Cruise coming to give a little keynote. We've got the CEO of you.com coming. We've got Chip, we've got Linus. We've got the whole crew that you would expect. And I am going to be doing all kinds of extracurricular activities that will be fun and maybe a little bit cringe. You may hear or see me playing the guitar. Just come. It's going to be an awesome time.
Demetrios [00:01:20]: Would love to have you there. And that is again June 25 in San Francisco.
Patrick Beukema [00:01:26]: See you all there. So, Patrick Beukema, and I'm the head of the environmental AI team at AI2, within applied science at AI2. And I like lattes from Lighthouse Coffee Roasters in the Fremont neighborhood of Seattle. And they are, I should say, incredibly good at latte art. And if you know, you know. But check it out if you're in Seattle.
Demetrios [00:01:58]: Lighthouse Coffee. Welcome back to the MLOps Community podcast. I am your host, Demetrios. We are rocking and rolling today with Patrick. And what a conversation this was. It was cool. We got to talk about environmental AI. I didn't know much about this topic. I didn't understand much about this topic.
Demetrios [00:02:20]: But Patrick was able to bring the engineering mindset and his feet on the ground to explain to me how they are doing it at AI2. And he was so open. It was so cool to see. It was also very nice to see that he is super pragmatic about how he goes about this. Some of my favorite stories are probably going to stay with me for a lot longer than the next week, week and a half that, you know, my goldfish-like attention span would normally allow. These stories about how they're using satellite imagery to detect when people are fishing in the wrong place, and the cost of getting a prediction wrong, are so fascinating. And what they do to ensure that they do not get predictions wrong, so that they do not have to waste tens of thousands of dollars to get the Coast Guard to travel miles and miles off of the coast to see if there is a boat actually fishing where it should not be fishing. Because if you're predicting that there is a boat out there doing some illegal fishing, and then you tell the Coast Guard, and the Coast Guard goes, and it's actually not a boat, it's just, as he put it, someone building a wind farm or these windmills, then you've got a little bit of a problem on your hands, because you ended up wasting a lot of resources and taxpayer money.
Demetrios [00:03:50]: That was just one little piece of this whole conversation. I really hope you enjoy it. And as always, if you do, leave a like, subscribe, all that fun stuff. I would love to hear in the comments what you thought of this conversation. And let's get right into it with Patrick. I know exactly where I want to start, which is about five years ago, I was very proud of myself because I took this course. It was like one of those MIT free learning courses, and it was called Learning How to Learn.
Demetrios [00:04:28]: And it feels like you did that same thing except on steroids. Can you break down what your learning how to learn journey was?
Patrick Beukema [00:04:40]: I studied logic, like, early on in undergrad, which I really enjoyed because it was this universal language for making sense of the world. And I remember one of my professors at the time, he told me that the great thing about logic is that no matter what language, even if you don't understand it, if you can basically encapsulate whatever's being said in logic, and you should be able to, like, boolean logic, predicate logic, you can make sense of it, and you could even understand an alien, is what he said. And that really stuck with me, because effectively, if you learn basic logic, I'm talking about basic predicate logic, you can make sense of a lot of the world. And that eventually led me to a master's at Carnegie Mellon, also in logic, like formal symbolic logic, very far from deep learning at this point. But during my master's, I started studying neural networks, and I did my thesis on recurrent neural networks. This was a long time ago, before PyTorch and TensorFlow. I wrote those nets in Java, if you can believe that. You had to work at it very hard.
Patrick Beukema [00:06:00]: It's many thousands of lines to do something that you could probably do in a few lines at this point. And so I got really into neural networks, and especially learning: how neural networks learn, how we can build neural networks that learn, and then how we as humans learn. And so I ended up doing a PhD in computational neuroscience, and I focused on plasticity, neuroplasticity, and how neural population patterns change over time. And to your point, how we can, like, accelerate that process. And we don't know a ton about how the brain works. It's kind of a new field, but we know a little bit about motor learning. And so I think it's a fascinating subject, and it's definitely anchored kind of how I approach artificial intelligence and deep learning even today.
Demetrios [00:07:04]: Okay, so you went deep down the rabbit hole with that. Then you got out of school, you said, hey, I like this deep learning thing, and it seems like it's getting easier. You're now doing a lot of stuff. You're heading up the environmental AI team within the applied sciences organization, right? You're the technical lead of that. I would love to dive into what you're doing now, like, what your main focus is, because I think there is a lot to be inspired by in talking to you about what you get to spend your day in, day out doing.
Patrick Beukema [00:07:44]: Yeah, it's very much a privilege, and I consider myself extremely fortunate to have this role, and it's also a responsibility. The fact that we are where we are, you know, we're at a nonprofit, but we're doing environmental artificial intelligence. The motto of AI2 is AI for good, AI for the common good. And we think about that a lot. So, like, the artificial intelligence that we're building needs to benefit humanity.
Patrick Beukema [00:08:19]: Right. It needs to, like, uplift humanity. That is our central focus, as opposed to, say, you know, creating the best models that can make the most money. Right. That's not really the goal. It's more like, how do we use artificial intelligence today to improve people's lives, basically.
Demetrios [00:08:37]: So can you break down for us, like, what is environmental AI?
Patrick Beukema [00:08:42]: It's a great question. It's also a young field, I guess even younger than neuroscience. There are many different, I should say there are many different challenges within environmental AI in general. We are primarily focused on conservation and sustainability. So there are four teams within the applied science organization within AI2. There's a climate modeling team. They're focused on, you know, very large scale simulations of how the planet is changing over time from a climate and atmospheric perspective.
Patrick Beukema [00:09:25]: There's a wildlife team called EarthRanger, and they are primarily focused on, first, how do we monitor wildlife? Unfortunately, there's a lot of poaching. And so how do we monitor wildlife to protect wildlife for future generations? And then there's a wildfire team, or a team focused on resource management from a wildfire perspective and a fire management perspective. So I don't know if you spent any time on the west coast in the last couple of years, but it's been pretty rough. And I live in Seattle, and we've gotten fairly major smoke clouds that just sit in Seattle for like a month at a time. And that never used to happen. Right. That's kind of a new thing. And it's pretty devastating.
Patrick Beukema [00:10:21]: And obviously, people are suffering. Many people are suffering as a result. And then there's a marine, a maritime intelligence platform called Skylight, and they're focused on oceanic health. So healthy oceans, basically. How do we create and preserve healthy oceans? So those are the four areas of focus within AI2. There are, of course, many more areas to focus on within environmental AI more broadly, but those are the primary areas. And so we use artificial intelligence in each of these areas.
Patrick Beukema [00:11:01]: We're using artificial intelligence to advance the mission, and so we use a variety of techniques.
Demetrios [00:11:06]: That was exactly where I was going to go with this. Like, what are some of the models? It seems like some of these prediction models are more of the quote unquote traditional ML. Maybe instead of a churn prediction, you're going with a weather prediction, that type of thing. And then, from knowing you for about the last half hour that we've been chatting, I imagine there's probably some really advanced stuff happening, too. I would imagine you're having some deep learning fun. So can you break down all the different AI flavors you're using?
Patrick Beukema [00:11:42]: Yeah, good question. So it's 100% deep learning. That's the only real technology that we're building, or that we find to be most relevant, state of the art. We have the luxury and privilege and fortune to work with extremely talented research scientists at AI2. So there's a computer vision team who we work really closely with called PRIOR. There are also natural language teams, NLP teams. Aristo is one of the teams that we work closely with. We're a very collaborative team.
Patrick Beukema [00:12:18]: And I should just emphasize that, because although we do create neural nets for bespoke use cases, it's a team effort. As for the technologies that are necessary for us: computer vision is critical, and for geospatial artificial intelligence, we collect a lot of data from satellites, NASA and the European Space Agency, and that's a lot of imagery. So computer vision, but computer vision running in real time at global scale against satellite imagery. And these are all deep learning based, state of the art models developed in house.
Patrick Beukema [00:12:56]: So everything we do has been developed in house. It's bespoke, custom. Occasionally, we need to invent or create, probably shouldn't use the word invent, but create new bespoke neural net architectures for specific tasks that haven't been well studied. You can imagine that in environmental AI, there may be a lot of problems that haven't gotten as much attention as, say, LLMs have, for example. And so the technologies aren't quite as advanced as they maybe need to be. And so occasionally, we need to develop new technologies, new deep nets. And like I said, everything is 100% neural network based.
Patrick Beukema [00:13:39]: Basically, going back to the neuroscience background that I have, that's really heavily anchored kind of how I approach neural net or technology development in general. Basically, what I attempt to do, or what I found is most useful when developing new neural nets for a given use case, is to figure out how the best humans in the world do it. The real experts. Identify who the experts are, study how they do it, and then emulate how they're doing whatever that task is in the form of the neural net. And it might be a collection of neural nets, you know, many different neural nets, or you might have a computer vision net and a more language-like net and then some bespoke thing. But try to figure out how the best humans do it, because that will give you a pretty good architecture for how you should, you know, be building your software.
Demetrios [00:14:37]: Are you ever inspired by nature? And I just asked this because I think I've seen things about how, oh, this drone was much more effective because one of the inventors watched a firefly, for example, or something like that. Or are you specifically looking at humans and saying, I'm going to try and emulate how this works, like, whoever it is the best? Or am I totally missing the point where you're saying, I'm just looking at the best researchers and how they're creating their scientific method?
Patrick Beukema [00:15:11]: So our goal is to solve problems, not necessarily create theory or write research papers. Our goal is to ship software. And so when I say we create these neural nets, it's because we're trying to solve a problem. We see a problem, we want to provide the best possible solution. The best model. Right? And generally, what I've found in my career is that the best models are usually neural nets that emulate what the best humans do. And in many cases, we might not know exactly how expert humans do a task, but that's generally a good strategy for building a strong foundational architecture for solving a given problem. And that's how computer vision originally developed.
Patrick Beukema [00:16:11]: The analogy is loose at this point, since we've kind of moved from CNNs into more like transformers in vision, broadly speaking. But originally, the idea was, let's emulate how the human brain, how visual cortex, understands and processes information. That might give us a reasonable starting architecture for how a computer might do it. Obviously, we've strayed quite far from neurobiology at this point when we think about how GPT-4 and LLMs work today in the modern day. But that was the original inspiration, right? The sort of neurobiological information processing, and how do we emulate that in neural net form? And so that's what I'm talking about. And when I say we're solving a problem, we don't necessarily need to create one single end to end neural net.
Patrick Beukema [00:17:08]: That is the One Ring or whatever. We can have a bunch of neural nets that are put together in very similar ways to how a human solves a task. Right. Because we're not just NLP machines, we're not just CV machines, right? We're not just vision machines or language machines. We do a lot. And so that's where I'm drawing inspiration from.
Demetrios [00:17:35]: So while some of us were contemplating the colors of the rainbow, Patrick was over here shipping. That is what you're telling me. So you're solving real problems, shipping real products and making sure that that gets out to the world for actual value.
Patrick Beukema [00:17:50]: I appreciate that. Yeah, yeah. We need to ship. We need to ship faster. Right. There's an urgency to the environmental threats that we face. Right. And so we need to move faster.
Patrick Beukema [00:18:02]: And so I think a lot about how can we speed up? How do we pick up the pace, right. We're a small team, right.
Demetrios [00:18:10]: And speaking along those lines, it feels like there's quite a bit of research that you're doing, but then there's a lot of engineering that you're also doing. And so what is, in your eyes, the percentage of one and the other that you're doing? And what is your ideal percentage? Maybe it's just 100% engineering, shipping constantly because you found the solutions, or is it like, yeah, we still want to always be dedicating X percent to R&D?
Patrick Beukema [00:18:38]: I found that R&D is best when it's fundamentally an engineering task. Even if you're just fine tuning an existing model, right, like an off the shelf architecture, if you're just fine tuning that, you still have to do some R&D. And what I found works best is when you make the research, that R&D, just another engineering discipline, right? So it's CI/CD. You take all the best practices that have been developed for decades by the software engineering community, and you just adopt and adapt those to the ML...
Patrick Beukema [00:19:23]: Sorry, to the machine learning lifecycle, to the machine learning research workflows. And it works great, right. At least what we do, this applied work, is, in my opinion, engineering. We certainly have to do research, or we work with research scientists who are pushing the absolute limit, the bleeding edge of computer vision. They're literally inventing new ways of doing computer vision for geospatial artificial intelligence, for example. But what we are doing is, I would say, much more engineering oriented. I find that research can and should be scaffolded, or basically anchored, by the best practices from engineering. So on our teams, or anyways, on my team, I should say, there's no productionization step, right? It's already been productionized.
Patrick Beukema [00:20:34]: When you start doing research, when you start doing experimentation, it's already CI/CD, there's integrated testing. You know, you generally want to get your models into a real world scenario as fast as possible, into the hands of users as fast as possible, so that you can start building out a feedback loop and really start iterating on the bottom line, whatever's going to actually move the needle. And so we do everything from the very beginning of the R&D process as if it's production. We're treating it like production, basically, if that makes sense.
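A minimal sketch of what "treating research like production" can look like in CI: a regression test that fails the build if a candidate model scores worse than the production baseline on a frozen evaluation set. The file names and the metrics schema here are illustrative assumptions, not AI2's actual tooling.

```python
# test_model_regression.py -- a sketch of a CI quality gate (run by pytest).
# The candidate model must match or beat the production baseline on a frozen
# evaluation set before a merge is allowed. Paths and schema are assumptions.
import json

BASELINE_METRICS = "metrics/production.json"  # checked in with the repo
CANDIDATE_METRICS = "metrics/candidate.json"  # written by the training job
TOLERANCE = 0.005  # absorb evaluation noise, still catch real regressions


def load_metrics(path: str) -> dict:
    with open(path) as f:
        return json.load(f)


def test_no_precision_regression():
    baseline = load_metrics(BASELINE_METRICS)
    candidate = load_metrics(CANDIDATE_METRICS)
    assert candidate["precision"] >= baseline["precision"] - TOLERANCE, (
        f"candidate precision {candidate['precision']:.3f} fell below "
        f"production baseline {baseline['precision']:.3f}"
    )
```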
Demetrios [00:21:21]: Not only makes sense, but it is fascinating to hear, because then you don't get slowed down by this idea of, okay, we've got something here. Now what do we do with it? You're starting with the end in mind.
Patrick Beukema [00:21:36]: And how do you even know if it's good, right? Like, how many research papers actually translate? How many are even reproducible? Right? I mean, in neuroscience, replication and reproducibility rates are abysmal. Right? They're horrible. And that's also true for machine learning, where you might expect the rates to be pretty good. I remember Joelle Pineau's keynote from NeurIPS in 2014. I think it was Montreal. She looked at literally all of the submissions in, I think it was RL, reinforcement learning, to the NeurIPS conference, one of the premier ML conferences in the world. And her team just attempted to replicate or reproduce those experiments, and they looked at 60 papers. The results were very bad. And this is NeurIPS and RL. That's the state of academia and research, right? So if you want something to translate, if you want something to actually work in the real world, right? That's what we're doing.
Patrick Beukema [00:22:54]: That's not the entirety of research, nor should it be. Obviously there needs to be pure research, along the lines of Bohr, but what we're doing is more use-inspired research, if you've heard of Pasteur's quadrant. So yes, there's a quest for fundamental understanding, but in the service of actual humans. Right? So I think if that's your goal, then it's important to have an extremely tight feedback loop with your users, and get that feedback loop as fast as possible. Get the models you're building into the hands of users, whoever's going to use that technology and benefit from it, so that they can help you and tell you, like, make them part of the process. And this is especially important in the context of conservation and sustainability, where there's obviously an extreme, unequal distribution of wealth and resources in the world. And, you know, the stakeholders of these models need to be central to this conversation, to the development of the technology, as opposed to an afterthought. I don't know if you've heard the term colonial science or helicopter science, where basically we come in from our ivory tower and we drop these incredible models wherever, right, and then we go away and we've saved the world, right? That's not how it works.
Demetrios [00:24:40]: Or at least we think we did. Yeah, right?
Patrick Beukema [00:24:42]: We think we did. And maybe we said we did in the paper, but it turns out that, you know, the F1 scores were 95% in the paper and, you know, 1% once they were actually tested on real world, out of sample, out of distribution data.
Demetrios [00:24:58]: Well, not only that, but I think it's fascinating too, where you're like, okay, we're not going to go and hide away for X amount of time working on our stuff until we think it's perfect to show to the world. We're going to get it out there as soon as possible, because part of our process is getting people's hands on it and letting them inform us if this is valuable. I think that's crucial to what you're saying: if it's valuable, if it's reproducible, like, if it's actually working. And a lot of times we've heard on this podcast, it doesn't need to be necessarily down the vein of the research, but how many times people have built things, they've gone and they've tried to put it out, and it turns out that they didn't understand the briefing correctly. They didn't actually know what the business needed, and so they built something that was completely useless. And the more time that you're spending building, the more time you've wasted when you get it out and you realize, oh, this is actually not as valuable as we thought it was going to be.
Patrick Beukema [00:26:09]: I think the best AI is, generally speaking, a team effort where not only do you have the stakeholders of that technology as part of the development process, the recipients, but it must be a team effort. Truly great AI cannot be built in a vacuum. You need product and engineering and research and the ML folks and the users and feedback and speed.
Demetrios [00:26:37]: All right, real quick, let's talk for a minute about our sponsor of this episode, making it all happen: LatticeFlow AI. Are you grappling with stagnant model performance? Gartner reveals a staggering statistic: 85% of models never make it into production. Why? Well, reasons can include poor data quality, labeling issues, overfitting, underfitting, and more. But the real challenge lies in uncovering blind spots that lurk around until models hit production. Even with an impressive aggregate performance of 90%, models can plateau. Sadly, many companies optimize for model performance in perfect scenarios while leaving safety as an afterthought. Introducing LatticeFlow AI.
Demetrios [00:27:24]: The pioneer in delivering robust and reliable AI models at scale, they are here to help you mitigate these risks head on during the AI development stage, preventing any unwanted surprises in the real world. Their platform empowers your data scientists and ML engineers to systematically pinpoint and rectify data and model errors, enhancing predictive performance at scale. With LatticeFlow AI, you can accelerate time to production with reliable and trustworthy models at scale. Don't let your models stall. Visit latticeflow.ai and book a call with the folks over there right now. Let them know you heard about it from the MLOps Community podcast. Let's get back into the show.
Patrick Beukema [00:28:08]: It's a team effort, right. And it's very deeply collaborative, and you need all the pieces to come together, you know, and everyone to be really invested in shipping the highest quality artificial intelligence that's possible, right, that's state of the art, in order to actually deliver, I think, what users need. And again, coming back to the urgency of some of our major challenges in the environment, related to conservation or sustainability or climate change, you name it, we need to move fast. We really need to pick up the pace. And so if you want to develop and iterate at breakneck speed, then your engineering must be razor sharp. Right.
Patrick Beukema [00:29:08]: And so we want to iterate really quickly, but we want to iterate on the thing that matters. Let's ensure that whatever those innovations are, they are ultimately of service to the community. Right. And being open, being transparent. So we open source all our stuff, of course, including the data and the training and the weights and the model, et cetera, et cetera. And that helps. That goes a long way, because people can criticize it, people can improve upon it, people can find bugs, right. And the users of your technology can look at exactly what you did and perhaps offer feedback.
Patrick Beukema [00:29:55]: They don't need to be a machine learning engineer, an AI engineer, or whatever, to look at what you did and teach you or help you do a better job. I think we need to be quite humble, honestly, about what the technology is that we're building, and perhaps steer the conversation a little bit away from, say, AGI, or how close we are to AGI, and more towards, like, how is the technology actually benefiting humanity today, and how do we ensure that there's an equitable distribution of this incredible innovation?
Demetrios [00:30:43]: Now you're speaking my language. There is one thing that I think is fascinating with what you all are doing. In general, when we talk to people on this podcast, because it is so industry focused, and mostly everybody that we've spoken to before is working at companies, it's very clear, well, usually it's very clear, that the metric you're looking at is: how much money are we saving? How much money did we make? For you, I feel like there's a whole different question that is being asked. So I'm fascinated to know, what are the metrics that you're looking at? Is it how long people are using these models? Is it, like, lives saved? How do you even calculate that? I mean, I want to go deep down into what are the metrics that you keep a close eye on.
Patrick Beukema [00:31:38]: One of the advantages of, say, being at a startup. My wife is the chief AI scientist at Amira Learning, an ed tech startup. So we talk about this a lot, sort of how startups work versus nonprofits versus academia, et cetera. And I will say that it's not easy, right. But it's really critical also. And so there's a couple of, I would say, simple strategies that we can use. Like, ultimately, we want to know if our actions are causally improving the bottom line, whether it's conservation or sustainability. It's not so trivial to measure those things.
Patrick Beukema [00:32:22]: So there are, I would say, not necessarily proxies, but good indicators that you're on the right track. Right. For example, if other people are funding your work, if you're getting a lot of interest in your technology, if people are using it, those are all really good signs. And of course, external money, just in the sense that, being at a nonprofit, we don't charge people for the technology. Right. We give everything away for free. And there are ways to compare our models against others in the industry.
Patrick Beukema [00:33:11]: So we anchor on that quite a bit, and we try to be open and really transparent about our models. Like I said, we open source our work. And one reason for that is that we want to be very clear that this is where we're at. We are not perfect. We need to do a better job. Here are the areas in which we are not doing a great job. Are there other people who have solved some of the challenges that we have yet to solve? There are a lot of advantages to being open, which you might not have if, for example, you're at a startup where you can directly tie, sort of, you know, the money to the model outputs.
Demetrios [00:33:58]: I see that. So it's almost like you, by being open, get all different kinds of metrics that you can track, one being how excited is the greater community about what we just put out, another being, hey, did this create more funding? Are people seeing this as a trigger? I imagine there's got to be like a little bit of a tightrope because you don't want to just do it for funding. Right. But if that is something that happens off of the back of what you've created, then you're like, oh, okay, that's another signal that we can look at, and you can tally up all these different signals, which is not necessarily like, how much money did we save the company or how much money did we make the company. It's more like a holistic view.
Patrick Beukema [00:34:45]: It's not simple, I should say. And I am constantly evolving sort of my perspective on this, because I've worked on many different teams, and this is the first nonprofit that I've worked at. And really, at the end of the day, we're just trying to ship high quality, best in class artificial intelligence. And we do have external signals, because although there are not conventional leaderboards in the sense that there are for LLMs right now, we do hear from other companies, other organizations, that our computer vision, in some cases, is better at lower, poorer quality resolution than their models are at much higher resolution. So that's a good indicator. So, for example, we do a lot of work with satellite imagery, and so we have computer vision models that are taking a bunch of satellite imagery from public datasets, so NASA and European Space Agency data.
Patrick Beukema [00:35:46]: And these are not super high res images. These are like 10 to 20 meters a pixel. And we have these computer vision models that are literally running constantly, doing object detection, doing different tasks. And what we've heard at conferences and such, because we do our best, right, to speak and share with the community what we're doing, we've heard from competitors, or I shouldn't say they're competitors, but private companies who are building these same models with far better data, that they think we're doing a better job. So that's a good signal. It's not a perfect signal, and it's effectively anecdotal. Right.
Patrick Beukema [00:36:32]: But it is a signal. So there's a lot of signals like that. The other thing I'll say is that I love this work. And I was just thinking about when I first joined, I remember they told me one of the problems that we were going to work on: with this low resolution data, we're going to infer that some object, some vessel, say, is in a marine protected area, not where it's supposed to be. Right. But we have this really low resolution data, because we get the entire planet at low resolution. If you want high resolution, like James Bond level resolution, then you need to send a satellite, literally task a satellite, a commercial satellite, to that location and take a high resolution picture. So we do that as well, and we have a budget to do so. But just imagine if you're building a computer vision model and shipping it, and it's in production, and it says there's a ship out here, and then, based on this evidence, you're going to literally send a satellite, and that's going to be your method of validation for that particular model. You need to be confident. Right.
Patrick Beukema [00:37:54]: And so that's why I was kind of speaking about how critical it is to get into production as quickly as possible. Right. Get into the real world as quickly as possible. You don't want to be wrong. Well, you don't want to be wrong that often, right. If you're having to task a satellite to a specific location, which is not cheap.
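As a rough sketch of that confidence bar, the expensive tasking decision can be gated on the detector's score. The threshold below is a made-up illustrative number, not Skylight's actual policy.

```python
# A sketch of gating an expensive action (tasking a commercial satellite)
# on detector confidence. The threshold is an illustrative assumption; in
# practice it would be tuned against the cost of a wasted satellite pass.
TASKING_THRESHOLD = 0.98


def should_task_satellite(confidence: float, in_protected_area: bool) -> bool:
    """Request a high-resolution image only for high-confidence detections
    inside a marine protected area."""
    return in_protected_area and confidence >= TASKING_THRESHOLD
```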
Demetrios [00:38:16]: So I kind of have a picture of how this works in my head, but I would love for you to explain the blank parts to me. You have data streams, which are satellite imagery. You're getting those from different sources, disparate sources, which are all public datasets. They're low resolution. And you're trying to make sure that some fisherman or some boat is not somewhere it shouldn't be. Maybe it's fishing where it shouldn't be fishing, or maybe it's off the path that it's supposed to be taking. And you have this computer vision model that is saying yes or no. It's kind of like identifying objects and boats.
Demetrios [00:39:00]: And then you can say, well, this one, we have X percent confidence that there's a boat in this area. So we're going to send a satellite and get a high resolution photo. Now, you get that ground truth, it then goes back. I imagine you have some way to reincorporate that data and then potentially retrain the model. How does that look? What happens then? Once you've got that and you say, okay, this was a hit or this was a miss, we want to update the model to make sure that we don't send out satellites randomly to random places when they don't need to be sent, or we can get a higher confidence score that this is actually what it's saying it is. How are you basically continuously training that model? What does that pipeline look like?
Patrick Beukema [00:39:55]: Yeah, I'm a big believer in continuous iteration and continual improvement. And pretty much no matter where you start on the performance axis of a machine learning model, if you're continuously improving through whatever means, eventually you will be doing a pretty good job. At least what I've seen is that a lot of teams, a lot of ML researchers and engineers, underestimate that feedback loop. A lot of times it's extremely expensive. Right. So clearly, GPT-4 or whatever is not being retrained with every thumbs down in the app. What are we talking about, like 100 million-ish, something along those lines? We're not retraining it on the fly.
Patrick Beukema [00:40:48]: Our models are also not as large as GPT-4, but we are running some fairly large computer vision models, which cannot be retrained as part of CI/CD. It just wouldn't make sense, right? It would be far too costly. And yet we want to be able to respond. We want to not make the same mistake twice, because our users tend to be experts in this space, right? They're, generally speaking, maritime intelligence experts in this specific context of Skylight. And so if they say this was wrong, as you said, we want to respond to that. What I found very useful, to avoid having to retrain the big, expensive deep net feature extractor, whatever you're using as the base computer vision model, is to just put a little lightweight teaching model, a little lightweight teaching neural net, on top of that model. Very small, just a simple CNN, say.
Patrick Beukema [00:41:49]: And for context, for a lot of our base feature extractors, we're using Swin Transformers now; the field has moved to transformers, and it makes sense. But anyways, you can put a very lightweight CNN on top of these big models that's trained by supervised learning from just, literally, correct or incorrect. And in our platform, in the Skylight platform, there's just a simple thumbs up, thumbs down, right? And so that is enough signal to train a lightweight CNN that can be put on top of your big model. That simple model can easily be trained as part of CI/CD. I mean, it's super cheap. And we've open sourced one of them. It's online, you can check it out, but it's literally just a big net and then a simple CNN on top of it.
Patrick Beukema [00:42:50]: And that part is continuously retrained. And we're only talking about like 100,000 parameters or something, right? It's super tiny, but it's enough of a teaching signal to refine the outputs, so that we don't make the same mistake again. Or maybe there's an oil platform, or, in a lot of cases, there's new infrastructure being built on the open ocean and the high seas, like wind turbines or whatever. And the model just literally hasn't seen in-construction wind turbines as part of its training data. Right. It just didn't happen to catch that when the satellite was passing overhead. And we're seeing a lot more of this, right. Because wind turbines are coming up more and more.
Patrick Beukema [00:43:33]: And so we can just put these little lightweight models on top of the big models, retrain those as part of CI/CD, and make sure we have regression testing and integration testing so that we're not actually getting worse over time. And so that's what we've done in the case of the computer vision models. It's not applicable to all use cases, but I found it extremely cost effective and fairly straightforward.
Demetrios [00:43:58]: Right.
Patrick Beukema [00:43:58]: These are simple nets, and they're so light that they literally run on CPU, like a GitHub runner. Right. You don't even need a GPU. I mean, it's cheap.
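Here is a minimal PyTorch sketch of the pattern Patrick describes: a big frozen feature extractor with a tiny trainable head, trained on binary thumbs-up/thumbs-down labels. The stand-in backbone, shapes, and names are all illustrative assumptions, not the open-sourced model.

```python
# A sketch of a frozen backbone plus a tiny "teaching" head trained on
# thumbs-up/thumbs-down feedback. All shapes and names are illustrative.
import torch
import torch.nn as nn


class FeedbackHead(nn.Module):
    """Tiny CNN (~75K parameters here) scoring detections correct/incorrect."""

    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 1),  # one logit: thumbs up vs. thumbs down
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)


# Stand-in for a big pretrained feature extractor (e.g. a Swin backbone);
# it stays frozen, so only the cheap head is retrained in CI/CD.
backbone = nn.Conv2d(3, 256, kernel_size=7, stride=4)
for p in backbone.parameters():
    p.requires_grad = False

head = FeedbackHead(in_channels=256)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One training step on a batch of user feedback.
crops = torch.randn(8, 3, 128, 128)           # image crops around detections
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = thumbs up, 0 = thumbs down
with torch.no_grad():
    features = backbone(crops)                # backbone weights never change
optimizer.zero_grad()
loss = loss_fn(head(features), labels)
loss.backward()
optimizer.step()
```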
Demetrios [00:44:10]: Do you only have one of these models, or are there a whole suite of them testing for different things?
Patrick Beukema [00:44:18]: In general, we have many different models, and not just for object detection in satellite imagery. We have many different kinds of data sources. Satellite imagery is a big use case, so I've talked a fair amount about it, but we have tons of different data sources, and not just public data, but commercial data, too. We get whatever data is necessary to help our users. And our users, by the way, in the case of Skylight, are under resourced coastal state governments. And so they rely on this technology to protect their oceans worldwide.
Patrick Beukema [00:45:01]: And as I said, they're experts, generally speaking, in what they're looking at. And so the other very useful feature that I've found to help improve the models and continuously improve is to just literally show the user what the model was seeing. Right. Just a simple crop, for example. It seems very basic, and it's harder to do for transformers and for NLP, but just literally show it. Just be transparent. Like, here's what the model saw. Do you think it's
Patrick Beukema [00:45:35]: correct? And if not, just give us a thumbs down, and we'll try not to make the same mistake again. So it's very simple, right. It's very basic, but, you know, allowing users to be part of the improvement process, I think, is really critical, and it goes a long way.
Demetrios [00:45:56]: And so are you saying that you've got that one gigantic model and then these smaller, lightweight models, and you have various lightweight models that sit on top of it? So whenever this gigantic model outputs something, you've got different edge cases that these smaller models are looking for, like in the case of, okay, there's a wind turbine that's being built. This smaller model knows that and a few other things which it's been briefed on by the experts. Do you have, like, various different models that are catching that output, or is it one that is good enough, because there's only a few ways that it deviates?
Patrick Beukema [00:46:39]: I see what you're saying. And no, we have many different models, many different big base models that are doing these tasks. And in general, you know, I understand, and I'm sensitive to the fact, that folks, especially in the research and academic community, like having the one model, like the one big model that can do everything. Right. And I totally get that. And it makes sense. After all, we have one brain, right? But from what I've experienced, and for our use cases, it makes much more sense to develop bespoke models for each data source or specific use case.
Patrick Beukema [00:47:21]: So each satellite, for example. We have computer vision models for many different satellites, right? And the data sources can be completely different. So you might be thinking of optical data, where it's just like RGB or whatever, but there are tons of different satellites. I mean, there are NASA and the European Space Agency and then all of the commercial satellites. There are so many different ways of taking pictures and getting data from the Earth, from the atmosphere, from...
Demetrios [00:47:50]: Oh, my God.
Patrick Beukema [00:47:51]: Yeah. So there's tons of different types of data. And I'll just give one simple example. So, one of the models that we have that's open source, it's called VIIRS vessel detection. So it's nighttime, it's night lights, basically. It turns out that you can pretty easily identify lights from public data on the high seas in real time. Literally, this data exists. I didn't know that before I joined this team, but it turns out that that's the case.
Patrick Beukema [00:48:23]: And it's literally that sensitive. The sensor aboard a VIIRS satellite was specifically designed for atmospheric use cases, for clouds, basically to look at clouds, and moonlit clouds specifically. That's why it was launched by NASA and NOAA. It was for clouds and weather prediction and those kinds of things. But it turns out that you can repurpose it pretty effectively, actually very effectively, for lights. And, I mean, you can see, it's crazy. It's so exquisitely sensitive. You can see single streetlights on highways.
Patrick Beukema [00:49:04]: It's unbelievably sensitive, but it's low resolution because we're covering the entire planet. So in general, there's a trade off between spatial coverage and latency and resolution. So we get the whole planet multiple times a day and night, actually from multiple satellites. And NOAA just launched another one, actually. So there are now three satellites. These are constantly circling the globe. They're polar orbiting, they're just taking pictures all the time of lights everywhere, not just the ocean, but land too. And so Google has, like Earth at night, and that's where this image comes from.
Patrick Beukema [00:49:40]: And Apple has it too, actually. I recently noticed on my phone that they added it to Apple Maps or whatever; you can see, like, the planet at night. These satellites that I'm talking about produce the data behind these pretty pictures, just for context. And so these are night lights, basically, so very different from RGB. And I found, or we found, that they really benefited from custom bespoke architectures, as opposed to using some kind of conventional grayscale or RGB computer vision model.
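To make the bespoke-architecture point concrete, here is a tiny sketch of a model stem built for single-channel night-lights radiance rather than RGB. The layer sizes and the log compression of the dynamic range are illustrative assumptions, not the actual VIIRS vessel detection architecture.

```python
# A sketch of a detector stem for 1-channel nighttime-lights imagery.
# Night-lights frames are single-channel radiance, not RGB, and the values
# span many orders of magnitude (single streetlights up to whole cities),
# so the stem takes one input channel and compresses the dynamic range.
import torch
import torch.nn as nn


class NightLightsStem(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2),  # 1 channel in
            nn.BatchNorm2d(32),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.log1p(x.clamp(min=0))  # tame the huge radiance range
        return self.stem(x)


tile = torch.randn(1, 1, 512, 512).abs()  # one low-resolution radiance tile
print(NightLightsStem()(tile).shape)      # torch.Size([1, 32, 256, 256])
```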
Demetrios [00:50:21]: Well, so I understand now much better all these different types of data sources that you have. You're creating models that are hyper custom for that data source because they're going to perform way better than some gigantic model that understands all the data sources. That makes total sense now. And I really like that.
Patrick Beukema [00:50:49]: I mean, we certainly could approach this from a foundational model perspective. And as I mentioned, we work with PRIOR, which is the computer vision research team at AI2, and they build extremely powerful, very performant, state of the art foundational models for geospatial CV, for geospatial computer vision. And we use those as backbones in some of the models. But in general, with every new satellite or use case, if the foundational model works best for that use case, great, right? But our bar, our metric, is the performance of the model, not whether we're using a single model for every single satellite. So if we can get even a percent better performance out of a custom model, we will take that. Right. We really strive for at or exceeding human level performance on these tasks.
Patrick Beukema [00:51:48]: And it's really important in the context of what I was talking about with maritime intelligence, because if you're an under resourced coastal state government with a pretty tiny budget for, say, conservation and these types of problems, and we're saying that there's a vessel somewhere out at sea which is going to cost tens of thousands of dollars in fuel to even get to, we better be right. Yeah, we better be right. And so we really value precision, and we weight precision more heavily than recall, at least for these specific use cases. And we need to be damn near perfect, right, to be effective, to be used. And so if we need a custom model for some use case, then we will build it. Or, as I said, it's a team effort. We will collaborate together and build a model that's appropriate for that use case.
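One standard way to encode weighting precision above recall is an F-beta score with beta below 1. This small sketch uses scikit-learn with made-up labels; it illustrates the metric choice, not Skylight's actual evaluation code.

```python
# A sketch of scoring a detector with precision weighted above recall.
# beta < 1 biases F-beta toward precision; beta = 0.5 counts precision
# roughly twice as heavily as recall. The labels below are made-up data.
from sklearn.metrics import fbeta_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground truth: vessel present or not
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]  # model output after thresholding

print(fbeta_score(y_true, y_pred, beta=0.5))
```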
Demetrios [00:52:58]: This, I feel, is only going to be part one of what I would love to have as an infinite part series because it feels like we just, like, hit the tip of the iceberg, especially because we had a full conversation before we hit record, and we didn't even get into any of that stuff that we talked about before we hit record, which I wanted to talk about. So I'm going to have to have you back on here. I really appreciate you, Patrick, for coming on here, for doing what you're doing and helping out humanity in this way, helping out the environment and good old mother Earth. And I want to just say thanks.
Patrick Beukema [00:53:38]: Thank you so much. I'm very grateful for this opportunity, and I appreciate this conversation. And, yeah, we only have one planet, right? So, yep, let's treat it well. Take care of her.
Demetrios [00:53:52]: All right?