AI, Marketing, and Human Decision Making
SPEAKERS

Fausto Albers is a relentless explorer of the unconventional—a techno-optimist with a foundation in sociology and behavioral economics, always connecting seemingly absurd ideas that, upon closer inspection, turn out to be the missing pieces of a bigger puzzle. He thrives in paradox: he overcomplicates the simple, oversimplifies the complex, and yet somehow lands on solutions that feel inevitable in hindsight. He believes that true innovation exists in the tension between chaos and structure—too much of either, and you’re stuck.
His career has been anything but linear. He’s owned and operated successful restaurants, served high-stakes cocktails while juggling bottles on London’s bar tops, and later traded spirits for code—designing digital waiters, recommender systems, and AI-driven accounting tools. Now, he leads the AI Builders Club Amsterdam, a fast-growing community where AI engineers, researchers, and founders push the boundaries of intelligent systems.
Ask him about RAG, and he’ll insist on specificity—because, as he puts it, discussing retrieval-augmented generation without clear definitions is as useful as declaring that “AI will have an impact on the world.” An engaging communicator, a sharp systems thinker, and a builder of both technology and communities, Fausto is here to challenge perspectives, deconstruct assumptions, and remix the future of AI.

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building Lego houses with his daughter.
SUMMARY
Demetrios and Fausto Albers explore how generative AI transforms creative work, decision-making, and human connection, highlighting both the promise of automation and the risks of losing critical thinking and social nuance.
TRANSCRIPT
Fausto Albers [00:00:00]: So my name is Fausto. I'm an absolute enthusiast in live innovation and technology. I like to connect seemingly disparate ideas, and most of the time I'm dead wrong. But sometimes I'm not, and at least we can have an interesting conversation about it. And that is the very human quality that you'll see in this podcast that really counts.
Demetrios [00:00:27]: We are back for another MLOps Community Podcast. I'm your host, Demetrios, and today we're doing a session that I like to call Fridays with Fausto. He has become one of my good friends, and every time I talk to him, I thoroughly enjoy it. I love talking to him because he is very transparent about how he's building, how he is trying to use all the new stuff that's coming out in the world. We get into some of the topics like the new image generation models and, of course, everybody's favorite, MCP. Let's jump into the conversation now, and as always, give us a review. Leave some stars, do what you can to help the algorithm know this is awesome.
Demetrios [00:01:18]: Wild. The image gen stuff is absolutely wild. And we wanted to talk about this before OpenAI even dropped theirs. And then I found Rev AI, or if you put it all together, it kind of looks like it says "reveal," which I have no idea how that's working. So the first thing on my mind with the image use case is that I've been seeing a lot of folks in the marketing department take Facebook ads, and their prompt is basically: use this photo, but put my product, which is another photo, in the ad, and add this text. And it will do it so well. And that is highly valuable in itself, but it's quite manual if you think about it. So I'm almost extrapolating it a few steps forward and thinking, you know what, what about when you set up a pipeline that will just continuously look for the best performing Facebook ads, or the best performing ads on different platforms?
Fausto Albers [00:02:32]: Exactly. I was just thinking about that, man.
Demetrios [00:02:35]: And then you're taking those, you're just swapping out your product, and then you are throwing them in and testing them on different platforms and seeing how well they work for your specific product.
Fausto Albers [00:02:48]: Yeah, it's like real-time generation. I mean, with anything that we see in gen AI, I think the interesting thing is not just the thing on its own, but how, you know, to combine it with existing technologies. And in this case, A/B testing at scale. Right? Yeah. Before, we kind of had to make all the content ourselves and then test it. And that was still sort of aiming at user groups. This makes it possible to aim it at the individual. But one step back, maybe from a bit of a meta perspective: what generative AI lets us do is create the worlds that we are envisioning.
Fausto Albers [00:03:36]: This was literally the first thing that I thought when I was introduced to GPTs back in 2022. I was taking a walk with my girlfriend in the forest and I couldn't stop talking about this idea that we're entering a world where anything that you can envision can be created in the digital world and therefore have an impact on the real world. And that is still hard, you know, even when you know what you're doing, because you have to translate your creativity into something useful. I mean, your concept is as strong as your ability to explain it, right? To explain it to someone else or to an AI. Now, before we touch on, I'm sure, a lot of interesting topics: one of those things in the AI ecosystem that can really help there is Superwhisper and Wispr Flow and all kinds of tools that are essentially helping you give words to your concept, right? So that's one. I mean, you could do that iteratively with any ChatGPT kind of tool as well. But yeah, I think that's very interesting, because sometimes you sort of have this vision, almost this nudging urge, like, oh, this is the app or the thing that I want to create, but you don't exactly know how to put it into words. And it gets more and more important to do that, whether it is to prompt an image generation model or in a more complex workflow where this gets integrated, right? Because making images is nice, that's one thing. But to be able to integrate it into a workflow, say you're building an app with a sick front end in Cursor, then first of all you should have your idea in an intricate project description, which is where AI can help you.
Fausto Albers [00:05:44]: But then it's extremely important, because AIs are absolutely fallible, right, to feed the right context information, what to do, what not to do, what libraries to use, at the right time. So at inference, and with coding IDEs like Cursor, like Windsurf and many others, they let you granularly adjust this, and then image generation becomes one part of that workflow. I mean, I didn't check, but the last MCP I was using for image gen was Stability AI. But most likely there's already an MCP out there, right, for the new OpenAI 4o native image generation. So yeah, to make a jump to how I was using it: I was building this website and I wanted to have a chat box between two elements, and instead of going to Excalidraw, drawing it, pointing at it, and feeding it back to the AI, I actually, you know, made a screenshot of the current state, fed it to GPT, and said, I want a chat box in between, make it, you know, design it nicely. And it came back and all the text, everything in that whole wireframe was exactly correct, with the chat box added.
Fausto Albers [00:07:01]: Then I fed that back to Cursor. Then I called my 21st.dev Magic frontend MCP tool, which draws the element, chooses by itself, injects the code into the workflow, and voila.
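For anyone who wants to try that screenshot-to-mockup step, a minimal sketch might look like the following. It assumes the OpenAI Python SDK and an image-edit-capable model; the model name, file names, and prompt are illustrative, not the exact setup described above.

```python
# Minimal sketch: send a screenshot of the current UI plus an instruction to an
# image model and save the edited mock, which can then be fed back to the coding
# agent as context. Model name "gpt-image-1" and base64 response handling are
# assumptions about the image-edit endpoint, not a confirmed recipe from the talk.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("current_state.png", "rb") as screenshot:
    result = client.images.edit(
        model="gpt-image-1",  # illustrative model choice
        image=screenshot,
        prompt=(
            "Add a chat box between the two existing elements. "
            "Keep all existing text and layout exactly as they are; "
            "style the new chat box to match the current design."
        ),
    )

# Save the returned mock (assumed to come back base64-encoded) to disk.
with open("mockup_with_chatbox.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```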
Demetrios [00:07:14]: Well, let me tell you how I'm thinking about this, because you're doing it, and again, it's incredible how you're seeing lift in your workflow. But I think about it as a platform engineer or an ML engineer that is trying to create business value for their company, and they're looking for use cases that can add significant value. Right. And so I can imagine if you are a good ML engineer, or engineer, or even product, it's probably more of the product folks that are going out there and talking with the company and saying, hey, how can we plug AI into your workflow? What are the things that you do on a daily basis that are time consuming? And somebody in the marketing department says, well, I'm constantly doing research on ads, and then I'm asking our designer to create different variations of ads, and then I take those and add them to Facebook, or I add them to X, Y, Z. And with Facebook it's incredible, because the creative, the actual image, is your targeting in a way these days. It used to be that you would have to try and figure out the exact persona, down to their birthday, their neighborhood, that type of thing. But now it's not as much that, because the ads algorithm is so good that it can look at the creative and know, all right, let's show this to these people, and then let's see what kind of signals we get, and then we'll show it to similar people.
Demetrios [00:09:04]: And so the marketers say the creative is the targeting. And because of that, like we were saying, you can just A/B test so much more and you can create so many more variations. Whereas before you were bottlenecked, saying, I can only create 30 pieces of creative per day and we're paying quite a lot in time for that whole workflow. Now it's like, I can just have the human looking at all of these different creatives that are being churned out continuously and making sure that the text and the words are correct, or there's nothing funky in it. And then you can imagine the workflow is just clicking a box and pushing it right to Facebook.
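A rough sketch of the kind of pipeline being described here, pulling top-performing ads, generating product variants, and queuing them for human review, could look like this. Every helper below is a hypothetical stub rather than a real ads or image-generation API.

```python
# Sketch of the "creative is the targeting" loop: fetch reference ads, generate
# variants with our product swapped in, and queue them for a human in the loop.
# fetch_top_ads and generate_variant are placeholders for real API calls.
from dataclasses import dataclass

@dataclass
class AdCreative:
    reference_ad: str        # id of the high-performing ad used as the template
    image_path: str          # generated image with our product swapped in
    approved: bool = False   # flipped by a human reviewer before publishing

def fetch_top_ads(platform: str, limit: int) -> list[str]:
    """Stub: return ids of the best-performing ads on a platform."""
    return [f"{platform}-ad-{i}" for i in range(limit)]

def generate_variant(reference_ad: str, product_image: str) -> str:
    """Stub: call an image model to place our product into the reference ad."""
    return f"variants/{reference_ad}-with-product.png"

def build_review_queue(product_image: str, n_per_ad: int = 3) -> list[AdCreative]:
    queue = []
    for ad_id in fetch_top_ads(platform="facebook", limit=10):
        for _ in range(n_per_ad):
            img = generate_variant(ad_id, product_image)
            queue.append(AdCreative(reference_ad=ad_id, image_path=img))
    return queue  # a human approves items, then they go out for A/B testing

if __name__ == "__main__":
    print(len(build_review_queue("assets/our_product.png")), "creatives queued for review")
```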
Fausto Albers [00:09:53]: So there is still the human in the loop, right, in this paradigm. And I think what is interesting to me is that with every new innovation there is, as now, the talk about how this is going to be implemented: is there going to be a human in the loop, is the quality going to be good enough, and how is it going to affect industry XYZ? Now, those are valid discussions to have, but I think that it's also interesting to look at how it's not going to influence the current state of the world. Like, for example, take marketing content. This post of this dude went viral, I saw it on LinkedIn, where he created sort of Don Draper style advertisements and said, marketing ads, it's so over. Right. And then I thought, yeah, maybe in the short run, but in a sense marketing is about influencing humans on an emotional level in order to manipulate decision making. Right? Yeah.
Fausto Albers [00:11:06]: Another interesting development in AI: Cass Sunstein, one of the authors of the influential book Nudge, published an interesting paper last year on choice engines and paternalistic AI. It was picked up by Nature; Google it, it's interesting. And there he argued that AI agents are efficient decision makers. Right. And we're going into a world where AIs are making decisions on behalf of their users. Now, I think we can see some early-stage examples of this already happening. But that essentially means that marketing traditionally tries to influence the end user, the buyer. But what if the decision maker, maybe not the final decision maker, but in large part, is an AI or a set of AI agents? Then how is traditional marketing still useful if, you know, people make their decisions based on what their AI agent is suggesting?
Fausto Albers [00:12:18]: Right. Because we know that we're manipulated, we know that we do buy things that we should not. Right. So I think when marketing becomes so much more effective, you're also going to see a backlash, especially within the younger crowd of Gen Z, who are saying, all right, I'm going to sort of offload my decision making, because I'm making too many decisions and I don't want to be manipulated, especially now with, you know, more and more capable AI, and people understand that stuff as well. Right. So then I think that the whole sort of Don Draper style of advertisement may be dead, yes.
Fausto Albers [00:12:58]: But in a different way than they are suspecting now. Not because the images are getting generated, but because it's going to be a whole other way. Like, they probably need to find a way to influence the descriptions and all the information that a buyer's AI would spider, yeah, to sort of influence that, if that makes sense.
Demetrios [00:13:23]: It's almost like the tactics aren't going out of style anytime soon. They're still going to be useful, but the way that we implement those tactics is going to be different. Those are fascinating things to touch on, but I also wanted to steer the conversation in the direction of MCP, just because I've heard some folks in the community talk about how useful it is, but also how it's like, huh, I don't know if I really get it. And then most recently I have been reading a thread in the community from Meb, who started this, and shout out to Meb, who's talking about using the Anthropic API and just having the worst time ever. So I know they're having some uptime issues over there at Anthropic. Godspeed to them. Hopefully they get it sorted out. But it's like, you want to try and build a service, and there's this cool thing that now has a lot of attention, like MCP, and next thing you know, you're trying to integrate it and there are these bottlenecks in your stack where you're recognizing, I can't have this as a service for my company if we have specific SLAs. And that's not a specific MCP thing.
Demetrios [00:14:51]: It's more just a general system thing, where MCP might not be the point of failure, but then you're recognizing, oh man, I'm relying on Claude 3.7 and it's just not working.
Fausto Albers [00:15:07]: Now, of course, Anthropic came up with the Model Context Protocol as an open thing to use. Right. I've even heard that OpenAI is looking to adopt it as well. So those are two separate things, right? You're talking about the uptime of the model. I think that is something, I mean, as developers, people that use AI for coding assistance, we've been relying on Sonnet since that 3.5 release upgrade, I think in autumn 2024. It's just, by some margin, the best.
Fausto Albers [00:15:47]: So 3.7 was much anticipated. 3.7 Thinking, 3.7 Max, now in Cursor as well. So I feel it might just be sort of overload from use. And honestly, since I upgraded to Cursor 0.48.1, I think it was the last update, still a little buggy, but they introduced an enhanced and improved model routing system, which is in itself something to be seen in a bigger scheme as well. Because again, here, making choices is a hard thing to do. Making informed choices is a harder thing to do. And using the most efficient, most capable model for whatever your use case is, that's a hard thing to do. So we probably, as a whole, tend to use too much force.
Fausto Albers [00:16:53]: And I'm sure that Cursor will improve this capability. It also sort of works around the problem of models having downtime, because there are a lot of models that we can use. I mean, DeepSeek V3, the new one, Gemini 2.5 Pro, which I just saw pop up in my model list, and it's in all the benchmarks.
Demetrios [00:17:18]: Right.
Fausto Albers [00:17:18]: But, yeah. And the context window. I mean, because, you know, in the beginning we were just using it for scripts, then a few scripts in your project. But especially when you're integrating back end and front end, there can be hundreds of different files and all sorts of dependencies that you're relying on. So the context window becomes really important.
Fausto Albers [00:17:41]: And I'm curious to see how Gemini is going to change that, if anything.
Demetrios [00:17:48]: Yeah. Because now we know that it might do so well on all these different benchmarks, but that doesn't mean much. It is a cool thing to use in your marketing material when you come out with it, and it's great for people to share online. But until you actually get your use case in front of it, or even your workflow, and see how it integrates with what you're trying to ask it, you don't have the best idea of what it's actually going to do.
Fausto Albers [00:18:22]: Nope. You know that I used to own a restaurant, right?
Demetrios [00:18:27]: Yeah.
Fausto Albers [00:18:27]: And it was inspired by New Orleans, Louisiana, because I thought that history was amazing: creativity was born out of scarcity. Right. The kitchen, the music, it was all from very little, and scarcity is the mother of innovation. Right. And at the moment, with AI, we're in an age of abundance, and we're really waiting for another crazy good model to solve our problems instead of being really creative and seeing what we can do. For example, in what intricate ways you could use MCPs and rules, in Cursor I'm referring to, but in general it's more about making all these different calls and making sure that the right information is being sent at the right moment. And MCPs are one way you could orchestrate this.
Fausto Albers [00:19:31]: Rules are another way. But it feels like we're sort of spoiled, and it takes some focus away from our ability to be creative and really think hard, deeply, and longer about a certain problem with a certain model.
Demetrios [00:19:51]: And because there's always something else, it's like the grass is always greener. So you're like, well, maybe if this model can't do it, then we'll try another one, we'll swap it out.
Fausto Albers [00:19:59]: Yeah. And I think there are a lot of voices that say, oh, people are getting lazy and it's going to kill creativity. I don't think so. People have always responded differently to innovation, and, you know, the naturally curious will explore and will do so, I think. But is the ever-expanding model capability ever going to come to a halt? Well, there is a way to look at it that, you know, scaling laws, as in pure pre-training, yeah, at some point it's going to stop.
Fausto Albers [00:20:41]: Right. There's only so much energy and money in the world. But maybe, I'm actually curious to see, maybe that's a good thing, because that will force us, both the innovation labs, DeepSeek, OpenAI, et cetera, to do all these different things, like latent reasoning without tokens, inference-time compute, different algorithms like the ones DeepSeek released, and at the application layer as well: how do we make sure that my AI gets, in all these calls, the right information at the right time, et cetera. So while there's no real reason to think that the raw capability of AIs is going to go down quickly, it is interesting to see if there's maybe a little rest. Maybe this is me just sort of wishing for a little peace of mind to figure out what I should do to get the most out of what is here now, you know?
Demetrios [00:21:45]: Yeah. Well, the other side of that is something I've heard folks talk a lot about, which in a way could be likened to the term forward compatibility: you have backwards compatibility, but then you also want to be thinking about forward compatibility. And a perfect example of this is when we had Flores on the podcast, I think a month or two ago, and he was talking about how he did so many janky things in the beginning to try and extend the context window, and then three months later he basically had more than enough context window in all the models, so he no longer had to do that stuff. And he also was thinking, would my time have been better spent optimizing other things and recognizing that the context window was going to get better? And so because you don't have a crystal ball, you don't really know which part of this whole system is going to get better in the near future. So I can let time be the one that optimizes that for me, instead of spending the time myself and brute forcing it.
Fausto Albers [00:23:02]: Yeah. Well, I mean, to answer that question, one should really know what it is that you're optimizing for, right? I think a lot at the moment about the energy that we spend on learning about new things and learning to work with this new technology. And for myself, it's genuine curiosity and sort of marveling. But I have to admit there's also this sort of race to keep up with the peers in the builders community. And everyone is doing it. But yeah, really, we don't really know yet what it is that we're optimizing for to become the best at. I mean, it's only a few years ago we said prompt engineer, and prompt engineering is still important. But I think it's important now in the sense that you are able
Fausto Albers [00:24:06]: to use the tools to write the prompts for you. Again, same as with the Cursor rules. I mean, a good Cursor rule setup for a large project would, in my opinion, require hundreds of rules, like different MDC files. Now, there are ways to have the AI in Cursor write your Cursor rules and update things that are good and make note of things that didn't work out. So then it is maybe less of a skill of writing those rules, and increasingly a skill of knowing how to work with these little tools to do what's optimal.
Demetrios [00:24:53]: Yeah. And knowing what good looks like, so that if it does break the rules or if it does go off the rails, then you have that. But dude, how have you been playing with MCP? And where have you found it intriguing, or absolutely mind-blowing if we want to get really out there on it?
Fausto Albers [00:25:16]: Yeah, man. I mean, essentially it's nothing new, because we could already do it with function calling. Right. And when agents were introduced to Cursor, literally the first thing that I did was to create a tools.py file, define the tools that I wanted there, and then instruct the agent to use the tools.py file. So that was kind of doing the same thing, but not as good. So what makes MCPs special is that it is yet another abstraction around complexity. It is a unified manner to connect server and client, and that makes it work well.
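As a concrete picture of that pre-MCP pattern, a tools.py of this kind might look something like the sketch below; the function names and the registry are illustrative, not a standard.

```python
# tools.py -- a minimal sketch of the pre-MCP pattern described above: define
# plain Python functions in one file and instruct the coding agent to call them.
from datetime import datetime
from pathlib import Path

def save_note(text: str, folder: str = "notes") -> str:
    """Write a timestamped note to disk and return its path."""
    Path(folder).mkdir(exist_ok=True)
    path = Path(folder) / f"{datetime.now():%Y%m%d_%H%M%S}.md"
    path.write_text(text)
    return str(path)

def list_notes(folder: str = "notes") -> list[str]:
    """Return the paths of all saved notes."""
    return sorted(str(p) for p in Path(folder).glob("*.md"))

# Simple registry the agent is told about ("use the tools in tools.py").
TOOLS = {"save_note": save_note, "list_notes": list_notes}
```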
Fausto Albers [00:25:59]: And there's a huge open source community evolving, so if you have a need, then you can just, you know, look in one of these forums or on GitHub and you'll find an MCP quite quickly. And that has been quite impactful, because my game is more back end than front end. And when designing apps, there are now MCPs, for example the one from 21st.dev Magic. I highly recommend you check that one out.
Demetrios [00:26:33]: What is that?
Fausto Albers [00:26:34]: 21st.dev is a platform with a huge library of components, templates, basically pieces of front end code that you could use in your project. Nice. And the MCP tool, you can call it with /ui, or it recognizes your intent. But mostly with MCPs, if you really want them to be used, then call them, you know, deliberately, and it will communicate with 21st.dev. And based on your description, for example, I want a hero section with nice gradient colors and the text must stream in, it will search that platform, find the code snippets, communicate those back to Cursor, inject them, and just implement them. And it works really well. Then there's another MCP tool called Browser Tools. It works with a Chrome extension, you have to run a little server on your side, and it will give the Cursor agent access to your browser console, to your logs, to everything. It can take screenshots.
Fausto Albers [00:27:45]: So if there's an element or something that you want to change, before, you were scrambling: yeah, this element, or you make a screenshot and then make a mark on it and then feed it back to Cursor. And now you can sort of see what you're making in the front end when you're running it in localhost. And that has been pretty amazing for someone like me; I mean, I wouldn't say I'm able to make production-ready apps for the front end, but it is another way of letting people see what it is that I'm trying to envision, what I'm trying to communicate. And then, you know, you can go further than that, and I highly recommend any sort of scraper MCP; there's Hyperbrowser, which I really like. So when you're building, when you're starting your project, often you're working with new libraries. For example, the new OpenAI Agents SDK, which also already has its own MCP, by the way. But then there is documentation, and with this MCP you can literally, just in the Cursor chat, tell it to go fetch the new documentation of the OpenAI Agents SDK, save it to individual markdown files in a dedicated folder, and then you can use that to create your Cursor rule set.
Fausto Albers [00:29:09]: And so it can constantly refer to the new endpoints, to the new instructions, whatever it has. So scraping, front end tools, and yeah, a cool one just for best practices: the GitHub one, which can search GitHub and make all your pushes. I sometimes forget it and then I really regret it in the morning, you know. Yeah, so those, and of course sequential thinking. But I think everyone that has worked with MCPs knows the sequential thinking ones. I'm curious, because I'm using one, but surely there are different ones around, and whether people think it's the best and whether it works for them, when they call it, and...
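To make the contrast with plain function calling concrete, here is a minimal MCP server sketch using the official Python MCP SDK (FastMCP). It exposes a single tool that fetches a documentation page and saves it into a dedicated folder, roughly the "go fetch the docs" workflow described above. The tool name and folder are illustrative; this is not the actual Hyperbrowser or 21st.dev server.

```python
# Minimal MCP server sketch: one tool that downloads a documentation page and
# saves it into a local folder, so a client like Cursor can call it on demand.
from pathlib import Path

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-fetcher")

@mcp.tool()
def fetch_docs(url: str, filename: str, folder: str = "docs") -> str:
    """Download a documentation page and save it to <folder>/<filename>.md."""
    response = httpx.get(url, follow_redirects=True, timeout=30)
    response.raise_for_status()
    Path(folder).mkdir(exist_ok=True)
    path = Path(folder) / f"{filename}.md"
    path.write_text(response.text)
    return f"Saved {url} to {path}"

if __name__ == "__main__":
    mcp.run()  # stdio transport, so an MCP client can launch it as a subprocess
```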
Demetrios [00:29:56]: Yeah, yeah, yeah, exactly. And really it feels like it is unlocking your workflow to help you, like you said, get that idea that you have out from your head onto something tangible. So then you can go from there. It's not necessarily that you're looking for that final product, but you're really thinking about how can I get to something as quickly as possible.
Fausto Albers [00:30:26]: Yeah, yeah. And what we're seeing is, we have, you know, the developer, or whatever your position is, on one side, and then there's this huge ecosystem of options, possibilities, emerging capabilities of AI tools on the other side. And what you need is a router in between. And zoomed out like this, Cursor is an interface to such a router. It's not just an IDE. And I'm saying Cursor because I work with it, but I don't want to say that's the...
Demetrios [00:31:01]: Yeah, it could be Windsurf, could be...
Fausto Albers [00:31:03]: GitHub Copilot if you really want GitHub Copilot.
Demetrios [00:31:07]: I don't know, I think they have agent capabilities now. I mean, I would expect so. I thought I saw something on that. But yeah, maybe not that one, but for sure Windsurf.
Fausto Albers [00:31:18]: Yeah, and we're going to see much more of that, because it's just the way that recommender algorithms, quote unquote, helped us: giving us the illusion of unlimited choice while only feeding us a subset of it.
Demetrios [00:31:37]: Yeah.
Fausto Albers [00:31:37]: And this thing is also like a router between all these different tools. Now, I was actually giving a talk yesterday at a congress on responsible AI, which was very focused on environmental impact and responsible use. And I was sort of the cowboy that they got in to tell about the other side. I mean, I fully agree with the fact that we should take good care of our Earth and all, but this was about: how do we have people use AI responsibly? For example, if you have a use case that requires text classification, you could use a simple BERT model, and that uses very few resources compared to the heaviest thinking models. And I think I heard Sam Altman say something similar as well, that he didn't like the interface of ChatGPT at the current time, where there are so many different model options and you have to make these choices again and again. And yeah, you know, the requests can be routed to the right model, but every abstraction that comes on top of this takes away choices that we make, which surely programmers are also saying about coding with AI: you know, I want to make these decisions myself. And this is, I don't know, man, I don't know what the best answer is, but we are putting more and more trust into AI decision making. And this is what I referred to earlier, what the paper about choice engines really nicely touches upon.
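The routing idea mentioned here, sending light tasks to a small model and reserving heavy reasoning models for when they are needed, can be sketched as a toy function; the heuristic and model names are purely illustrative assumptions.

```python
# Toy model router: pick a small, cheap model when the task is a simple
# classification job, and only fall back to a heavier model when the request
# genuinely needs reasoning. Model names are placeholders, not recommendations.
def pick_model(task_type: str, needs_reasoning: bool) -> str:
    small_tasks = {"text_classification", "sentiment", "language_detection"}
    if task_type in small_tasks and not needs_reasoning:
        return "distilbert-base-uncased"   # a small BERT-style classifier
    if needs_reasoning:
        return "large-reasoning-model"     # placeholder for a heavy thinking model
    return "mid-size-general-model"        # placeholder default

if __name__ == "__main__":
    print(pick_model("sentiment", needs_reasoning=False))            # small classifier
    print(pick_model("open_ended_planning", needs_reasoning=True))   # heavy model
```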
Fausto Albers [00:33:27]: Well, Sunstein is a behavioral psychologist and not necessarily a technical dude, but he was on point there: we're offloading our decision making to AIs. And we're doing that all the time, more and more.
Demetrios [00:33:48]: Yeah, which almost atrophies that muscle of critical thinking, because if you're going to trust that it can make the decision for you better than you can, or at least as well as you can, then you no longer have to spend so much time in that decision-making process. And on one side that's a great thing, because it frees you up to do other stuff, but on the other side, when you actually have to figure out an important decision, or you have to walk back and look at why a decision was made...
Fausto Albers [00:34:25]: Good luck. Yeah, and, you know, I think a lot of people, non-technical people as well, use AI for brainstorming, for generating ideas. But that also is a really slippery slope, because if you don't use very specific prompting, and here prompting is important, but again, you can automate that as well, then, for example, an image generation model will create sort of the same style of images. And so your set of choices is already limited. So you're in an illusion of having choice while you're only seeing a very small subset of the possibilities. And the same with, I don't know, I have seen a lot of AI-generated content on LinkedIn lately.
Demetrios [00:35:19]: It's so bad. And LinkedIn almost encourages it with those replies. I think they got rid of that feature, but the replies where it would just be already generated, like "great job" or "wow, this is awesome."
Fausto Albers [00:35:35]: Now, I think one of the... because most people are using ChatGPT, right? And what I think I'm seeing is that people have a certain topic that they think is, you know, viral worthy, or they just find it generally interesting, or they know something about it, and maybe they let ChatGPT write the entire post of, say, a hundred words, or they write their post and have GPT sort of edit it.
Demetrios [00:35:59]: Yeah.
Fausto Albers [00:36:00]: But either way, especially GPT-4o, but also the 4.5 research preview model, tends to do this really annoying thing, which is yada, yada, yada, and then recap it: "So here..." and then "here's the catch." If I see that one more time... And it's like a rhetorical question, you know: "That's not just a new... Is this a new era? No, it's a new blah, blah, blah." It's sort of asking itself a question and then answering it. And look, I'm not a linguistics expert, but it doesn't take much AI use to recognize this pattern. So whether it be image generation or text generation, you see that if you're not giving examples, which is the stage before fine-tuning, and which I don't expect marketeers to do, right, or whoever else outside of the technical community.
Fausto Albers [00:37:07]: And there's of course Spiral and that sort of thing in between, which is essentially sort of fine-tuning. But it starts with giving it the input/output examples that you want to see. And that requires thinking, that requires critical thinking. And so it's almost like there's a group of people that don't use AI at all, and there's a group of people that use AI but use it for too much. And the golden ratio is somewhere in between, where you do have to think: this is how it should look. And with text, I think that's sort of easy to do. With images and video, I think it's harder. And it's also not really
Fausto Albers [00:37:51]: my game. But, you know, you only have this mental image then, right?
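The "input/output examples" stage described above, the step before fine-tuning, can be as simple as putting a few posts you actually like into the prompt so the model copies your style instead of its default patterns. A small illustration, with made-up example texts:

```python
# Few-shot prompt sketch: show the model input -> output pairs in the style you
# want before the real request. The system instruction and example posts are
# invented for illustration; the message format is the common chat-completions
# shape of role/content pairs.
few_shot_messages = [
    {"role": "system", "content": "Write LinkedIn posts in the author's own voice. "
                                  "No rhetorical questions, no 'here's the catch', no closing recap."},
    # Example 1: a topic and the kind of post the author would actually write
    {"role": "user", "content": "Topic: what I learned shipping an MCP server this week"},
    {"role": "assistant", "content": "Shipped my first MCP server this week. Two things surprised me: "
                                     "how little glue code it needed, and how often the agent forgot to call it."},
    # Example 2
    {"role": "user", "content": "Topic: why our A/B test results were misleading"},
    {"role": "assistant", "content": "Our A/B test said variant B won. It didn't. We had been splitting "
                                     "traffic by device type, and B just got more desktop users."},
    # The actual request comes last and inherits the demonstrated style
    {"role": "user", "content": "Topic: image generation models as part of a marketing pipeline"},
]
# few_shot_messages can then be passed to whichever chat-completion API you use.
```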
Demetrios [00:37:55]: Yeah, but also with text, it's glaringly obvious to us. And with images, you used to be able to really see, oh, this is AI generated, but now there are some of these models, like this Rev AI, I'm not sure how you pronounce it, where the realism is such that I can't tell. And I pride myself on being able to tell when something was AI generated. But coming back to the idea that you're talking about here, it reminds me of when I was just talking with Devanche on a podcast a few days ago, and he mentioned this thing called the doorman paradox.
Demetrios [00:38:42]: Have you heard of this?
Fausto Albers [00:38:43]: No, but I love paradoxes, so bring it on.
Demetrios [00:38:47]: For some reason, I knew it was going to be important for you. Basically, a hotel said, oh, we're going to cut costs, and we have sliding doors right now, so why do we need somebody to stand at the door and open the door for people? We can get rid of that job. And they got rid of that job. But there were secondary and tertiary effects that kicked in when they no longer had somebody at the door. And I'm sure you can think of them right away. You're thinking, oh, well, people don't get greeted as they walk into this hotel, and that's a worse experience. But then also, potentially, you have more people loitering outside of the entrance.
Demetrios [00:39:29]: So then walking into the entrance is not as good of an experience. And these things were kept in check, or they were done, by the doorman, but it wasn't necessarily their job.
Fausto Albers [00:39:45]: This is Moravec's paradox. Oh, no, I mean, not the specific example of the doorman, but Moravec, the AI scientist, a legend back in the 70s, proposed Moravec's paradox, where he said that in the future, jobs will not be entirely replaced by AI, but jobs consist of tasks. I mean, that is even an abstraction, but there are different tasks in one job. And some tasks can be economically feasibly replaced by AI and some cannot. And then what might happen is that if jobs get sort of split up, then we can see the truly human capabilities specialize in the remaining parts of the job.
Fausto Albers [00:40:38]: Because, you know, however realistic an AI can be, and I'm going into dangerous territory here, but one of the most important things that I have seen in the restaurant industry is, you know, when is a customer happy? And mind you, I actually developed an application for the restaurant industry and I learned it the hard way, and I was wrong, you know, because one of the most important things is social recognition. As the species that we are, we are constantly, yeah, looking for social recognition, and we can only get that from other humans. But I don't know if this stays true, right? So I might be wrong here, but obviously when it comes to, say, the job of a waiter, and part of their job is to predict what product the customer will be most happy with, experience-wise, then an AI could probably do a better job. But if I am sitting in a restaurant and this really nice sommelier is saying this and that wine is great, if I really look deep into my thought process and my experience, then I trust this entity.
Fausto Albers [00:42:08]: So let's say this woman has knowledge, or I perceive her to have knowledge, that I don't have. So there's a game of trust. And maybe I also want to sort of make her happy, you know, I want to impress her by choosing the right wine. And I don't think I have that with an AI. Right. There's a lot of complex decision making and experience there.
Demetrios [00:42:30]: Yeah, exactly. And the whole reason I brought this up was because of this thought that we're outsourcing too many of our decisions sometimes. Or maybe it's not too many, maybe it's just the right amount, but we're definitely outsourcing more and more decisions.
Fausto Albers [00:42:45]: Yeah, but we do that because we feel that... well, two things. There's an awful lot of decisions to make. You know, the downside of ultimate individual freedom is that you have a lot of decisions to make. The whole thing that you see happening with conservative right-wing populism and such is also like...
Fausto Albers [00:43:14]: Populism in itself is a simplification of complexity, and therefore a diminishing of decision making. And so there is a need for us humans to make fewer decisions. Also, it's becoming increasingly complex to determine what is increasing my social status. And it's just very hard to grasp.
Demetrios [00:43:46]: Yeah.
Fausto Albers [00:43:47]: And yeah, I think Yuval Harari writes beautifully about this, about this effect that AI might have. And coming back to image generation mimicking the real world, take that all the way to virtual reality. But essentially what he says is that we're on the brim of an age where people are able to create the worlds that they want to have. And therefore, one point that is always being brought up when we're talking about image generation or video generation, whatever, is: how realistic is it? Well, I beg to differ. It doesn't really matter, even when there are watermarks, even when it's clear it's AI generated: "If men define situations as real, they are real in their consequences." Thomas and Thomas, sociology. If people want to believe stuff, they will believe it. And, you know, all these sorts of cognitive biases saying, yeah, but it could be real.
Fausto Albers [00:44:48]: Yeah, yeah. I mean, this picture might not be real, but it could easily have been real if it fits their worldview. So point being here: the creator side is faster than the response. For example, the European Union almost has to respond with policy to new innovation, and they're notoriously lagging behind, and that will always be the way. But it also might not even be effective at all, because... yeah, I'm kind of losing my train of thought here, but it is that when something is in abundance, it sort of loses value. So if AI-generated content, in whatever form, is abundant, it loses value. And that shines some light on this sort of dystopian, negative view: maybe in the new world we will perceive value in the source, the authority that is creating the content.
Fausto Albers [00:46:07]: And I don't know if that's going to happen, but it seems like a positive thought to think that if generation is sort of free, then something else is going to inject value into that process.
Demetrios [00:46:22]: Yeah, because that becomes the commodity, and that is the piece that is the least human part about it. And so what is going to be the valuable part about that? And where is that part going? Back to the doorman's paradox that we were talking about, or as you called it, what was it, Moravec's paradox: you're looking at the uniquely human pieces of this. How are we now extracting that? And potentially it's just that when you walk into a hotel and someone says hi to you, that is the most valuable part of it, which...
Fausto Albers [00:47:08]: Maybe so like the, the human connection. So maybe the world, the world Oldest profession is going to be the world's last.
Demetrios [00:47:15]: Yeah, but it's funny to think about as you're pondering it. It's like something so, quote unquote, simple as engaging another human is actually the best part of the value chain. Like, it's the biggest thing out of all the stuff that that person does in their job. Or going to the sommelier: it's not necessarily the wines they recommend or getting to know you, it's that you're talking to another human face to face.
Fausto Albers [00:47:53]: Yeah, it's the human experience and the social recognition. Like, the worst thing that can happen to a human, to a customer in the service industry, is to be ignored relative to the personal attention they perceive other people are getting. If you ignore someone and give a lot of attention, human-quality attention, to someone else, they're going to notice and they're going to hate it. So, I mean, hopefully this opens things up, because things are going to get automated. And, what is it Ray Dalio said? Anyone who's looking into the crystal ball is bound to eat ground glass. So, yeah, don't pin me on it, but we're definitely seeing some very interesting shifts. And it's going so fast.
Fausto Albers [00:49:01]: And I think in your community, a lot of people are very open to, you know, innovation, to change. But I think you have to realize that there's a huge part of society that responds very differently to innovation, like: I don't understand it, I will ignore it. And yeah, it's interesting times, for sure.