Rethinking Notebooks Powered by AI
Speakers

Vincent is a senior data professional who has worked as an engineer, researcher, team lead, and educator. You might know him from tech talks where he attempts to defend common sense over hype in the data space. He is especially interested in understanding algorithmic systems so that one may prevent failure. As such, he has always had a preference to keep calm and check the dataset before flowing tonnes of tensors. He currently works at marimo, where he spends his time rethinking everything related to Python notebooks.

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.
SUMMARY
Vincent Warmerdam joins Demetrios fresh off marimo’s acquisition by Weights & Biases—and makes a bold claim: notebooks as we know them are outdated.
They talk Molab (GPU-backed, cloud-hosted notebooks), LLMs that don’t just chat but actually fix your SQL and debug your code, and why most data folks are consuming tools instead of experimenting. Vincent argues we should stop treating notebooks like static scratchpads and start treating them like dynamic apps powered by AI.
It’s a conversation about rethinking workflows, reclaiming creativity, and not outsourcing your brain to the model.
TRANSCRIPT
Vincent Warmerdam [00:00:00]: Oh, but how do you add the context? Do you describe the DataFrame? Is the LLM supposed to look at all of your code and infer what the column names are and what the types are? The way I thought about notebooks 5 years ago needs to change. If you want to use Colab, that's fine, but you can rethink that. And that should— like, I care a lot about intellectual freedom. If you're a Python person, widgets are probably the most fun way to learn JavaScript at this point.
Demetrios Brinkmann [00:00:25]: Okay, so you told me something that is very interesting. You guys just got acquired.
Vincent Warmerdam [00:00:30]: Yeah, a few weeks ago. True.
Demetrios Brinkmann [00:00:33]: So in the span of me seeing you a month ago and now, what's changed?
Vincent Warmerdam [00:00:40]: So I saw you at PyData the week after I was in San Francisco. And as one does, you get into a few Waymos and do that fun theme park.
Demetrios Brinkmann [00:00:48]: Take some video.
Vincent Warmerdam [00:00:49]: Yeah, and then like up a hill, down a hill. But yeah, like during that week, I also got word that we were getting acquired. The CEO and CTO were really good at keeping that sort of quiet within the company, but it was sort of a fun, pleasant surprise. But the short story of it is that we are still one group. Our roadmap hasn't really changed. We are part of Weights & Biases. If you look at the org chart, you've got CoreWeave, which is officially the company that acquired us. Then we have Weights & Biases, and then we are part of the— or under the Weights & Biases umbrella, I guess you could say.
Vincent Warmerdam [00:01:23]: But I mean, again, the team is still the same, our roadmap is definitely still the same, it's just that we are now part of a bigger group. That's the long and short of it.
Demetrios Brinkmann [00:01:34]: It makes a ton of sense why Weights and Biases would want you after you're breaking down all this notebook stuff and knowing how they think about helping data scientists. And what are some things that you're thinking about now? Has anything changed in that regard, like what you want to build, where you're going?
Vincent Warmerdam [00:01:53]: I mean, honestly, on a day-to-day basis, the main thing that's changed is I'm part of a larger Slack group now. So you learn to sort of ignore more channels. Because before, with 8 people, almost every channel is relevant. And now, you know, with a couple of thousand people working there, that's not the case. So in a lot of ways, that's the main thing. And one thing that's a little bit funny is I actually knew a bunch of people that already worked at Weights & Biases. So one thing you do see is that obviously our CEO and our CTO talk to some people over on the Weights and Biases side, kind of like very much up the umbrella. And I know some people that aren't exactly up the umbrella, but we're like Twitter buddies.
Vincent Warmerdam [00:02:29]: So then, okay, we are now each other's gateways. I'm this guy's gateway into Marimo, and he's my gateway into Weights & Biases stuff. And a lot of that is just very organic and natural, but that is the main new thing, 'cause we still do the YouTube stuff to do the education thing. We still do integrations with other open source packages. All of that is basically still the same. The one thing is that we did— kind of a funny story. So you know Colab from Google? The number one ticket on the Colab GitHub issue list was that people wanted Marimo support in Google Colab. So we built a thing called Molab, and our intern wrote on the issue tracker, like, yeah, we built it instead.
Vincent Warmerdam [00:03:15]: It's kind of a funny little internal joke there. But we made this thing called Molab, which is basically just hosted Marimo in the cloud. And one thing you can imagine is that, with the whole CoreWeave backing us, we can do more elaborate things with cloud compute for Molab. That's definitely a thing we can start dreaming about a little there.
Demetrios Brinkmann [00:03:36]: How so? I didn't quite follow that. Just basically hosted notebooks now?
Vincent Warmerdam [00:03:41]: Yeah, so we've got free credits with our notebooks; that was always CPU-based. But you can imagine we might do bigger machines and maybe a GPU or two, because CoreWeave has a bunch of them. That's stuff that we, you know, have in the pipeline. But the thing about the setup that I want to emphasize, that's really neat, is that one way to make Molab better is to make Marimo better. So you can definitely imagine that there's this incentive structure where if we want Molab to be better, we definitely have to keep on investing in the open source thing. And that is the main thing— like, that was also in the back of our minds during the acquisition. We can do a lot of cool stuff, but only if we can find a nice way to keep that open source thing very much alive, because we don't want to change that license. We just want that thing to keep on existing.
Vincent Warmerdam [00:04:30]: And by having this thing where Molab is, you know, a thing that we can work on, but the way you make that thing better is by investing in the open source thing, that is a very healthy incentive to have as well. So that's the main thing I would like to emphasize.
Demetrios Brinkmann [00:04:42]: Okay, real quick, if you happen to find yourself in the South Bay on March 3rd, we're going to be taking over the Computer History Museum for our Coding Agents Conference. That's right. We're organizing another conference despite me telling myself so many times after we did our AI quality conference that I would never do another conference because of the stress that it caused. Somehow I seem to have broken my vows because the pull was too strong. Some of the notable speakers that we have already announced are Sid, friend of the pod and co-creator of Claude Code, Harrison Chase, the founder of LangChain, Good old Dex, the founder of Human Layer and the dude who popularized the term context engineering and harness engineering. And last but not least, someone I consider a very close friend, Michael Eric, is going to be doing a workshop. He's a Stanford lecturer and actually he was a past co-host of this here podcast that you're listening to, you know, before he got all famous and stuff. Come join us.
Demetrios Brinkmann [00:05:46]: It's intimate by design. There's only 450 people that we're going to let into the room, to try and keep the signal as high as possible. So I'll see you there. Real quick, let me talk to you about Hyperbolic's GPU cloud. It delivers NVIDIA H100s at $1.69 per hour and H200s at $1.99 per hour. And this is with no sales calls, no long-term commitments, or hidden fees. You can spin up one GPU or scale to thousands in minutes with VMs, bare metal clusters, and high-speed networking.
Demetrios Brinkmann [00:06:23]: You've also got attachable storage, and you only pay for what you use. Save up to 75% compared to legacy providers. And oh yeah, by the way, you need steady production-grade inference? Well, you can choose dedicated model hosting with single-tenant GPUs and predictable performance, without running your own hardware. Try now at app.hyperbolic.ai. Let's get back into this show.
Demetrios Brinkmann [00:06:54]: And what are the synergies with Weights and Biases, the product?
Vincent Warmerdam [00:06:59]: I mean, that's stuff that we are exploring right now, right? But it's definitely— Weights and Biases has historically always been fairly friendly to the academic community as well. I guess it's a little bit less in Europe, but my impression is definitely that a lot of companies in California just use Weights and Biases. It's kind of a somewhat normal thing. So you can definitely imagine some integrations. One of the first things that we've added— so Weights and Biases also has an LLM inference engine. The day after the acquisition was public, we just added support for that inside of Marimo notebooks. We also have very elaborate LLM support. So that's a click away now.
Demetrios Brinkmann [00:07:40]: LLM training support or LLM within the notebook?
Vincent Warmerdam [00:07:43]: So Weights and Biases has a service where you can call Llama and a couple of those open— I think they're all open source models if I'm not mistaken. So that was like an easy integration we could just add on day one basically. I cannot talk about the main specifics just yet, right? Because people are working on stuff. But you can definitely imagine integrations happen. Yeah.
Demetrios Brinkmann [00:08:04]: Yeah. Okay. Awesome. You're like a treasure trove of cool stuff.
Vincent Warmerdam [00:08:10]: I guess one thing that does make Marimo unique is the reactivity, right? But then along came these LLMs. So people were sort of banging on the door like, oh, can you do LLM stuff too? And there is interesting stuff happening there right now. So we, for the longest time, have had agent support. So you can open up a little window and you can give your API key for Claude and then you can do your Claude stuff. What you could do is you could say, oh, there's a DataFrame there. Hey, I want you to write SQL for that DataFrame. Oh, but how do you add the context? Do you describe the DataFrame? Is the LLM supposed to look at all of your code and infer what the column names are and what the types are? So that was always a bit of a conundrum. So what we do is you're able to type @ and then some sort of variable, and then we do our best to add any context we can to the prompt.
Vincent Warmerdam [00:08:59]: So if you point to a DataFrame, we can automatically say things like, oh, that DataFrame has these columns. It has this many rows. These are the types of each column. Nice. And wouldn't you know it, that makes it a whole lot simpler for the LLM to actually write the SQL or the DataFrame code you want. So that's definitely interesting stuff that we can do. But then I think Zed, the editor, started this thing with ACP. So can we have an agent protocol as well, such that you can just say, oh, I want my editor to talk to an agent, have that be a protocol, and then that just works with Claude Code or Codex or Gemini.
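The @-mention idea Vincent describes, pulling a DataFrame's shape, columns, and types into the prompt, can be sketched in a few lines. This is an illustration of the concept, not marimo's actual implementation, and the function name `describe_dataframe` is made up:

```python
import pandas as pd

def describe_dataframe(name: str, df: pd.DataFrame, sample_rows: int = 5) -> str:
    """Build a compact, prompt-friendly description of a DataFrame:
    column names, dtypes, row count, and a few sample rows."""
    schema = ", ".join(f"{col} ({dtype})" for col, dtype in df.dtypes.items())
    return (
        f"DataFrame `{name}`: {len(df)} rows.\n"
        f"Columns: {schema}\n"
        f"First {sample_rows} rows:\n{df.head(sample_rows).to_string()}"
    )

df = pd.DataFrame({"city": ["Utrecht", "Delft"], "population": [361699, 104468]})
print(describe_dataframe("df", df))
```

The returned string is what would get prepended to the user's prompt, so the model never has to guess column names or types from surrounding code.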
Vincent Warmerdam [00:09:31]: I think they've also added support for it. So there's an experimental feature in Marimo now that also lets you just run Claude Code within Marimo itself. So you can just use native Claude Code with your keys and your subscription within Marimo too, via the ACP protocol.
Demetrios Brinkmann [00:09:43]: But it will— wait, ACP or MCP?
Vincent Warmerdam [00:09:46]: So we also support MCP, but ACP is a relatively new thing.
Demetrios Brinkmann [00:09:49]: Okay. It's a new one. Great. All right. But that just means that now Claude Code is generating code inside of your notebook.
Vincent Warmerdam [00:09:57]: Yes.
Vincent Warmerdam [00:09:58]: And you don't have to do the API key thing where every time you make a request, you have to pay for it. You can now use your Claude subscription. We also have a linter that's specific to Claude, because there are some errors that are really specific to Marimo. You can have the linter trigger every time Claude does something, and then we do the linting. One thing you can't do in Marimo, for example, is you cannot define variables twice. So if there's one cell that says a = 1 and another cell that says a = 2, we'd have to make assumptions about the actual value of a, so we just don't allow for that. And there are errors like that that we can then again send off to Claude, so it can fix them itself.
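The no-redefinition rule is what keeps a reactive dependency graph unambiguous, and the basic check is easy to picture. Here is a toy version using Python's `ast` module; this is a sketch of the idea, not marimo's linter, and the function name is invented:

```python
import ast
from collections import defaultdict

def find_duplicate_definitions(cells: list[str]) -> dict[str, list[int]]:
    """Return variables assigned at the top level of more than one cell,
    mapped to the indices of the cells that define them."""
    owners = defaultdict(list)
    for i, src in enumerate(cells):
        for node in ast.parse(src).body:  # top-level statements only
            if isinstance(node, ast.Assign):
                for target in node.targets:
                    if isinstance(target, ast.Name):
                        owners[target.id].append(i)
    return {name: idxs for name, idxs in owners.items() if len(idxs) > 1}

cells = ["a = 1", "b = a + 1", "a = 2"]
print(find_duplicate_definitions(cells))  # → {'a': [0, 2]}
```

With `a` defined in two cells, the value of `b` would depend on execution order, which is exactly the ambiguity the rule forbids.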
Vincent Warmerdam [00:10:33]: So there's a lot of LLM innovation happening in the notebook too, is the main thing I would also like to stress.
Demetrios Brinkmann [00:10:38]: Yeah, tell me more.
Vincent Warmerdam [00:10:40]: I mean, so my dream would be that eventually we might be able to— we're definitely not there yet. That's the one thing I want to mention first. But the one thing I would love is that you could say things like, oh, actually, I want to have a copy-to-clipboard button. And that button doesn't exist yet, but we can just generate one, as a widget that we just generate on the spot.
Demetrios Brinkmann [00:10:59]: So that dynamic UI type thing.
Vincent Warmerdam [00:11:01]: That's the thing I would love to do more of. And you can also imagine— and I need to find time for this. One thing I would also love is that we get these LLMs in such a place where, oh boy, this is a super interesting article. Look at this really cool diagram in it. Hey, LLM in the notebook, can you reproduce this in some way? But add sliders so I can play around. And we're definitely not there yet, right? But that will be a really cool learning experience. Like, okay, this is the latest paper. Oh, that mechanism looks kind of cool.
Vincent Warmerdam [00:11:28]: I want to have a better intuition. Can I have sliders and interactive matrices, and the plot just updates based on the thing that's in the paper? And like, what's the quickest path to maybe get there? And another—
Demetrios Brinkmann [00:11:39]: On your own data or on the data from the paper? It would either go out there and get that data or—
Vincent Warmerdam [00:11:44]: Well, I mean, the data has to be public. And if the dataset is huge, like petabytes, then—
Demetrios Brinkmann [00:11:50]: It's going to be a little bit more complicated.
Vincent Warmerdam [00:11:51]: It's going to be a little bit harder. But if you're dealing with the intuition, then you probably want to— then it's more about generating the PyTorch code, and can we get the phenomenon at least visualized, and things like that. Another thing about Marimo notebooks that's good to mention is that these days a lot of them do work in WebAssembly. So you don't need a Python backend anymore. So if you want to have a blog where the sliders and all that jazz are generated with Python, you can actually do that. The downside is not every Python library supports WASM via Pyodide just yet. But that's also something I would love for people to maybe also have a good caching mechanism, such that you write your notebook fully in Marimo, and then you export it statically to HTML, but you maintain— The dynamics. Yeah, we're not quite there yet.
Vincent Warmerdam [00:12:32]: But via Pyodide, we do see a path towards it.
Demetrios Brinkmann [00:12:34]: Going back to that dynamic UI generation, what do you feel would be happening behind the scenes in order for that to be a reality?
Vincent Warmerdam [00:12:45]: In my mind, the biggest hurdle right now is that some Python developers are a little bit arrogant and refuse to learn JavaScript. And if you refuse to learn JavaScript, it's going to be hard for you to debug JavaScript.
Demetrios Brinkmann [00:12:58]: We just heard last night in one of the talks from our Agents in Production conference. They were saying, okay, what do you think the hardest code to generate is for Claude Code or Cursor? And it had JavaScript, Python, Rust, and people are saying, like, oh, well, the best is Python by far. The best is going to be Python. The worst, I don't know.
Vincent Warmerdam [00:13:23]: I think, okay, my guess is the best is probably JavaScript. The worst is probably Rust.
Demetrios Brinkmann [00:13:27]: No, the worst is JavaScript. Because there's so much JavaScript on the internet, and a lot of it is really bad.
Vincent Warmerdam [00:13:35]: Interesting.
Demetrios Brinkmann [00:13:37]: So it just makes it much harder to generate quality JavaScript.
Vincent Warmerdam [00:13:42]: Yeah, so the thing is, I can do JavaScript, but I know that I'm not an expert at it. So the bias that I have is that I do spend more time, like, getting the prompt just right, planning it out really carefully. And with the Python, I kind of YOLO it because I know it's easier to fix. So I've not really had that problem. But again, yeah, I'm— it depends on what you're building too. Like if you're building a web app, that's also a different story than the stuff that I'm doing.
Demetrios Brinkmann [00:14:04]: But getting back to this, so developers, Python developers, reluctantly use JavaScript.
Vincent Warmerdam [00:14:11]: Yes.
Demetrios Brinkmann [00:14:12]: And you feel like that would be useful to dynamically create these UI elements?
Vincent Warmerdam [00:14:18]: Or even not dynamically, just in general. Weird bit of feedback that I've gotten is like, Vincent, you make all these widgets, why? And the honest answer is because for some reason no one else is making them. And it feels such a natural thing. Like, if you love— I mean, you don't even have to love it. If you work a lot in notebooks, you want to have widgets. They just make your experience so much better. So why am I the only one? I think it's because I, at a very early part of my career, was just falling in love with D3. So I didn't care.
Vincent Warmerdam [00:14:46]: I'll just learn JavaScript because I want to do D3. And I'll also learn Python because I want to do scikit-learn. But I think a lot of people just are very reluctant to learn JavaScript just because of the Node stuff. That's my impression.
Demetrios Brinkmann [00:14:59]: But it does level you up to be able to go and execute so much more when you know that frontend.
Vincent Warmerdam [00:15:04]: I think so, yeah. Again, it's a mystery to me why someone would not teach himself JavaScript in this day and age where everything is a web app.
Demetrios Brinkmann [00:15:11]: Yeah, and it's like— 90% generated for you. So a little goes a long way in this case.
Vincent Warmerdam [00:15:18]: Yeah, I would definitely agree. Although, there's also that aspect of it of, OK, should people still learn how to program? And that's a theme that I hear people talk about.
Demetrios Brinkmann [00:15:31]: That's a little crazy statement in my mind.
Vincent Warmerdam [00:15:34]: Yeah, so I'm actually kind of of that opinion. So I like to do flashcards and things like that, just to sort of— I think if you train your memory, that also makes it easy for you to remember, and at least helps with the iteration speed, and also makes it a little bit less intimidating. Like, okay, you know the steps to start a new project, you know, like, okay, what can we expect from the document in the browser and what are good methods? But I think also if you just really go a little bit deep, that is also less intimidating, because otherwise you get into this learned helplessness of, oh, I can only build stuff if I have an LLM around. Like yesterday, Cloudflare went down for a bit, and like, I'm hoping people were still productive.
Demetrios Brinkmann [00:16:10]: The least productive day of the year.
Vincent Warmerdam [00:16:12]: Yeah, but does the national intelligence go down whenever Cloudflare is down, because no one can reach the LLM? What kind of weird-ass sort of anti-utopia are we living in?
Demetrios Brinkmann [00:16:22]: But you know that graph, right? That is like when you start a project, you think, oh, it's going to be so easy. And then you go into the trough of disillusionment. You basically got to get past that trough. Or troff, I think is how it's pronounced.
Vincent Warmerdam [00:16:36]: Or you have to train yourself to be good at project setup. Like, I remember very early on, I was sort of just playing around with these things. I was kind of thinking, like, okay, let's see if I can fully YOLO a web app, without looking at the code at all, and have it send emails and set up Postgres and all that stuff. And it did. And then I noticed that whenever I was running unit tests, I had instructed it to start with an empty database. Apparently—
Demetrios Brinkmann [00:17:01]: So it would just delete the database?
Vincent Warmerdam [00:17:03]: It would go to prod, delete the whole thing, and run all the unit tests there. And if you just set up the project yourself manually, or if you just have a good template or something like that, that kind of crap just doesn't happen. I think you're already in a much better spot. And my impression is that all these— what's a polite term? Teenage tech bros?
Demetrios Brinkmann [00:17:23]: Vibe coders?
Vincent Warmerdam [00:17:24]: Yeah, whatever we want to call them. But if you missed that first bit, like how do you actually set up a proper project, That's where the flaw is, not necessarily in the LLM, it's the first bit.
Demetrios Brinkmann [00:17:33]: That foundation.
Vincent Warmerdam [00:17:34]: I think so, yeah.
Demetrios Brinkmann [00:17:36]: It is interesting too to see that now, since it is easier to put a front end on things with vibe coding, the expectation has kind of been raised. So now, whereas before you would probably be doing a lot of stuff with Streamlit, now you can be like, well, let's see if we can do this in React.
Vincent Warmerdam [00:18:01]: Yeah, but I would still— especially if it's experimental, I would still be in favor of first doing a widget. If you're a Python person, widgets are probably the most fun way to learn JavaScript at this point. Because you might not even need the whole Node pipeline, and it might still be something you can use right away together with your Python code that you love to use, right? So that would be the way I would personally think about it. One other thing: I have a flashcard app that I'm making, and let's say you could do that with something like HTMX, such that what we send over the wire is not JSON but full HTML. And every time I see a new flashcard, we send HTML back. I see a new card.
Demetrios Brinkmann [00:18:39]: Next.
Vincent Warmerdam [00:18:39]: It makes a request. I see a card. That works, but it's not the ideal user experience. The ideal user experience would be, okay, I want to practice 20 cards. Let's send one JSON payload back. I only make the request once. That's in memory now. I do all of my card stuff, and then at the end, I send the request back. Because the other way, it's subtle, but there's a noticeable time lag, because it has to make a request every single time it fetches a new card.
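The trade-off Vincent is describing, one round trip per card versus one batched payload, can be illustrated without any web framework. The fake server below just counts round trips in place of real network latency; the endpoint names and card data are made up:

```python
import json

REQUEST_LOG = []  # each entry is one simulated network round trip

def server(endpoint: str, params: dict) -> str:
    """Fake flashcard server: every call counts as one round trip."""
    REQUEST_LOG.append(endpoint)
    cards = [{"id": i, "front": f"question {i}"} for i in range(100)]
    if endpoint == "/card":          # HTMX-style: one card per request
        return json.dumps(cards[params["id"]])
    if endpoint == "/cards/batch":   # batched: one payload, many cards
        return json.dumps(cards[: params["n"]])
    raise ValueError(endpoint)

# Per-card flow: 20 round trips, one lag per flashcard shown.
per_card = [json.loads(server("/card", {"id": i})) for i in range(20)]

# Batched flow: a single round trip, then everything is in memory.
batch = json.loads(server("/cards/batch", {"n": 20}))

print(len(REQUEST_LOG))  # 21 total: 20 per-card requests + 1 batch request
```

Each entry in `REQUEST_LOG` is one user-visible pause in the per-card design; the batched design pays that cost exactly once per practice session.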
Vincent Warmerdam [00:19:03]: In the end, if you care about user experience, you have to care about JavaScript on the frontend. And it's fine if you say, I don't care about it, but if you are product-minded, you have to. That's something that does bother me in the Python community sometimes. Like, there is a bit of an arrogance toward JavaScript, which I plainly find unfounded.
Demetrios Brinkmann [00:19:24]: Yeah. Well, talk to me more about the LLMs in Marimo, because it feels like you've got other ideas.
Vincent Warmerdam [00:19:32]: Some of this depends a lot on what standards become available. A big reason that we were able to push for the LLMs the way that we have was because at some point MCP was a thing, and because at some point ACP was a thing. So in a lot of ways, we are dependent on that. So I can dream, but if we start planning, we have to sort of keep that in mind. A thing that we are experimenting with more now, though, is not so much, okay, people know how to use an LLM inside of a notebook, but can we bootstrap notebooks from the start? So we have this marimo new command, and you can give it a prompt. And what you are able to do is say, like, oh, there's a CSV file right there. Make the first notebook version. It has to do these analyses, and we'll just bootstrap one from the start. And the benefit of it being a notebook is that you can look at all the intermediate results, and it's very visual, you can add charts.
Vincent Warmerdam [00:20:21]: So it's also a lot easier to spot the mistake. And I think that's kind of the most interesting thing. If you plop everything in an IDE, sure, you can add some unit tests, but it's hard to see the intermediate results, and it's a lot easier to poke around in a notebook. So that's a little bit more the thing that I'm personally very interested in.
Demetrios Brinkmann [00:20:39]: What that just sparked in my mind is how notebooks are very much made for humans.
Vincent Warmerdam [00:20:46]: Yeah.
Demetrios Brinkmann [00:20:47]: And agents, I was trying to figure out, like, well, is there a world where an agent would want to fire up a notebook and try and see things through the eyes of a notebook, which I don't think really makes sense. But there is a world where you're interacting with the agent, and when it's giving you results, you want to see it as a notebook, or you want to be able to change different contexts that it has through a notebook.
Vincent Warmerdam [00:21:15]: So one thing that I mentioned before, right? So you have the @DataFrame thing. You can point to a variable. You can also do @Errors, and any errors that appear in the notebook go to the LLM. You can also do that with a cell output.
Demetrios Brinkmann [00:21:26]: Yeah.
Vincent Warmerdam [00:21:27]: So if the cell output is some sort of weird widget that we don't necessarily have a good representation for, if the model in the back is multimodal, we just make a screenshot and send that. So when you say LLMs can't see the notebook, well, they kind of can. Yeah. It's just that they're not always as good at interpreting charts yet. Like, that's more the thing.
Demetrios Brinkmann [00:21:44]: But for data, just the raw data, right?
Vincent Warmerdam [00:21:47]: Well, if you have a gigabyte of data locally, you're not going to send that to the context anyway. But what you can do is say, okay, let's send the first 5 rows. And the schema. That should already make it a whole lot simpler for the LLM to write the right query, right? So there's definitely this interaction. It's just that we're only scratching the surface, because we're just starting out with this. One thing I will say: under the hood, a Marimo notebook is a Python file. So we don't have that weird JSON structure thing that explodes in Git. Like, that's a thing we don't have.
Vincent Warmerdam [00:22:14]: It's a Python file. So one thing that I've been experimenting with is you could build your Python package in Marimo. 'Cause it's a Python file, so kind of like what Jeremy Howard's doing with nbdev, there's no reason why you couldn't do that inside of Marimo. So, an example: I like to write my command line apps in Marimo nowadays, because with Typer you can do like a decorator on a normal Python function. Marimo also supports pytest, so if you have a cell and there are functions in there that all start with test_, then we detect those are probably pytest tests. So you can add your unit tests to the notebook as well. And we can detect when the notebook is running in the browser or when it's running from the terminal. So I can define functions in Python that you can run from the command line. And with Typer, I can reuse the same function inside of pytest, and it's all one Python file.
Vincent Warmerdam [00:23:04]: Oh, and we support uv, so you can add your dependencies at the top of the file, and then uv will take care of all the dependencies and the right versions and all that for you too. Oh, and then we add LLMs to the mix. So again, I'm pretty comfortable saying that there's so much stuff still left to uncover, but the main thing that the notebook is really good at is you can stare at it, you can fiddle around with it, you can play around with it. And something about that is very good for the human mind to also understand what's happening under the hood.
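The single-file pattern described here, one Python file that is a CLI, carries its own tests, and declares its own dependencies, can be sketched roughly like this. The `# /// script` header is the inline metadata format (PEP 723) that uv reads; argparse stands in for Typer so the sketch stays dependency-free, and the function names are invented:

```python
# /// script
# requires-python = ">=3.11"
# dependencies = []   # uv reads this PEP 723 block and installs what's listed
# ///
import argparse

def shout(text: str) -> str:
    """The reusable function: importable from a Flask app, a notebook, anywhere."""
    return text.upper() + "!"

def test_shout():  # pytest picks this up via the test_ prefix
    assert shout("marimo") == "MARIMO!"

if __name__ == "__main__":  # only runs from the terminal, not on import
    parser = argparse.ArgumentParser(description="Tiny single-file CLI.")
    parser.add_argument("text", nargs="?", default="marimo")
    args, _ = parser.parse_known_args()
    print(shout(args.text))
```

One file, three roles: `python app.py hello` runs the CLI, `pytest app.py` runs the test, and `from app import shout` reuses the function elsewhere.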
Demetrios Brinkmann [00:23:33]: Yeah, that exploration.
Demetrios Brinkmann [00:23:34]: Yeah.
Vincent Warmerdam [00:23:35]: And that doesn't just mean data stuff, it also means command line apps. There's the flashcard thing that I mentioned. I need charts to debug the timing between flashcards, the learning aspect. There's a name for it, repetitive something something. But for that aspect of the app, I need to have charts to debug. And Marimo gives me a development environment that's really good at giving me charts as I'm developing. And it's a Python file, so I can export that one function that I really want in the notebook to my Flask app, and it just works. Oh, and we have LLMs.
Demetrios Brinkmann [00:24:09]: Actually, a common use case that I've seen for debugging agents is creating heat maps to figure out if they called the right tools or if they did execute the right tasks.
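That heat-map idea can be prototyped before reaching for a plotting library: count (step, tool) pairs from an agent trace and print the grid. The trace and tool names below are made-up example data:

```python
from collections import Counter

# Hypothetical agent trace: (step, tool_called) pairs from one run.
trace = [
    (0, "search"), (1, "search"), (1, "sql"), (2, "sql"),
    (3, "python"), (3, "sql"), (4, "python"), (4, "python"),
]

counts = Counter(trace)
steps = sorted({s for s, _ in trace})
tools = sorted({t for _, t in trace})

# Text "heatmap": rows are tools, columns are agent steps,
# cells are how often that tool was called at that step.
print("tool    " + " ".join(f"s{s}" for s in steps))
for tool in tools:
    row = " ".join(f"{counts[(s, tool)]:>2}" for s in steps)
    print(f"{tool:<8}{row}")
```

In a notebook you would feed the same `counts` into a real heat-map chart, but even this text grid makes a wrong tool call at the wrong step stand out.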
Vincent Warmerdam [00:24:20]: Now is the time to start dreaming about those things, I guess, is the main thing I want to say. Because the way I thought about notebooks 5 years ago needs to change. Because Python notebooks are better now.
Demetrios Brinkmann [00:24:33]: No more Colab, being restricted by this.
Vincent Warmerdam [00:24:36]: If you want to use Colab, that's fine, but you can rethink that. And that should— like, I care a lot about intellectual freedom. My number one fear is that people think that if you want to succeed in this field, the only thing you've got to do is watch YouTube videos and read books by other people, and that they stop thinking of new ideas on their own. What I really like about Marimo is it does invite play. So you're able to explore your own ideas a bit more, and it's that intellectual freedom that I find— yeah, we need more of that. We need more people making interesting stuff. You need to produce and not just consume content. We need more good ideas in the ecosystem.
Vincent Warmerdam [00:25:07]: That's how we move the ball forward. And yeah, in my mind, when I first saw Marimo, the reactive nature of it actually corrected me a few times, which is neat. But I also just noticed, like, oh, if I can mix UI with Python code, that's super amazing for rapid prototyping. And oh, I found a mistake in my data analysis there because a slider shouldn't work that way. Okay, great. Like, I'm aware of the fact that I'm a guy who works at Marimo who's making a massive pitch for Marimo at this point. So, dear listener, try to sympathize. But I do want to emphasize that the dreaming aspect of it is the cool bit.
Vincent Warmerdam [00:25:39]: That's the one thing I want to like double down on.
Host B [00:25:41]: I think that's a perfect place to end it. So that is awesome.
Vincent Warmerdam [00:25:44]: Cool.
Host B [00:25:44]: Yeah.
Vincent Warmerdam [00:25:45]: Thanks for having me in this fancy studio.
Host B [00:25:49]: It looks fancy from this side. Trust me. Yeah, not so fancy.
Vincent Warmerdam [00:25:53]: There's all cables.
Host B [00:25:54]: Yeah.
Vincent Warmerdam [00:25:55]: And cameras.
Host B [00:25:55]: What the viewer cannot This is no problem.
Vincent Warmerdam [00:26:01]: Okay, final pitch then. We have a YouTube channel, there's videos that explain everything, check it out. Check out Molab, we have a bunch of free compute as well, feel free to use it. And we have a cool Discord, we have a great memes channel.
