MLOps Community

From Shiny to Strategic: The Maturation of AI Across Industries

Posted Apr 07, 2025
# AI
# LLM
# RethinkFirst

SPEAKERS

David Cox
VP of Data Science; Assistant Director of Research @ RethinkFirst

Dr. David Cox can formally lay claim to being a bioethicist (master's degree), a board-certified behavior analyst at the doctoral level, a behavioral economist (post-doc training), and a full-stack data scientist (post-doc training). He has worked in behavioral health for nearly 20 years as a clinician, academic researcher, scholar, technologist, and all-around behavior science junkie. He currently works as the Assistant Director of Research for the Institute of Applied Behavioral Science at Endicott College and the VP of Data Science at RethinkFirst. David also likes to write, having published 60+ peer-reviewed articles, book chapters, and a few books. When he's not doing research or building tools at the intersection of artificial intelligence and behavioral health, he enjoys spending time with his wife and two beagles in and around Jacksonville, FL.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.


SUMMARY

Shiny new objects are made available to artificial intelligence (AI) practitioners daily. For many who are not AI practitioners, the release of ChatGPT in 2022 was their first contact with modern AI technology. This led to a flurry of funding and excitement around how AI might improve their bottom line. Two years on, the novelty of AI has worn off for many companies, but AI remains a strategic initiative. This strategic nuance has led to two patterns that suggest a maturation of the AI conversation across industries. First, conversations seem to be pivoting from "Are we doing [the shiny new thing]?" to serious analysis of the ROI of things built. This reframe places less emphasis on adopting new technologies for their own sake and more emphasis on the optimal stack to maximize return relative to cost. Second, conversations are shifting to emphasize market differentiation. That is, anyone can build products that wrap around LLMs. In competitive markets, creating products and functionality that all your competitors can also build is a poor business strategy (unless having a particular thing is industry standard). Creating a competitive advantage requires companies to think strategically about their unique data assets and what they can build that their competitors cannot.


TRANSCRIPT

David Cox [00:00:00]: So my name is David Cox. I take my coffee black Nespresso style, though, so I don't actually make it. I just push the button.

Demetrios [00:00:08]: We're back with another MLOps Community podcast. I'm your host, Demetrios. And talking with David, we got into the ways that you can look at machine learning, specifically unsupervised machine learning, to help you change the way that you interact with this world. Behavioral economics was something that I learned about, and I hope you do, too. Let's get into it with him right now. And I will say, just as a disclaimer, this was a different kind of philosophical conversation. We did not get super technical on how you can deploy these models and what kind of specs we're looking at, what kind of QPSs and all those other fun acronyms that we like to use. But I still had a blast talking to him and going on all kinds of tangents, and I hope you do, too.

Demetrios [00:01:09]: We should probably start with, you're in Florida and you're wearing sweaters. How is that possible?

David Cox [00:01:17]: Yeah. The theory of relativity applies even to temperatures, I guess, right?

Demetrios [00:01:23]: Yeah. And I'm in Germany wearing T-shirts.

David Cox [00:01:27]: Yeah. Yeah. What is the. What is the temp there for you these days?

Demetrios [00:01:30]: I guess I do Celsius. I have been converted. So it is zero. And the last couple days, it was like minus five, which, for the people out there that do Fahrenheit, obviously 0 is 32, but minus 5 is like. Yeah, in the 20s, I think high 20s, mid 20s.

David Cox [00:01:51]: That is frigid. I don't think I'd leave my house if it were that cold here in Florida.

Demetrios [00:01:56]: Yeah. It's funny how you get used to it. It really is.

David Cox [00:01:58]: Oh, yeah, absolutely. And, I mean, I grew up in Colorado, and a lot of people, as soon as they hear that, they go like, wait a second. But, yeah, I don't know. I've been in Florida too long, I guess.

Demetrios [00:02:06]: But you. You got acclimated to it. Man, that's too good. Well, the interesting things that I want to talk to you about are not at all around LLMs. I think we started off this conversation, and we really wanted to have this conversation, because there is so much more happening in AI than just the LLM boom and the agent boom and the insert-your-next-hype-word boom. Right?

David Cox [00:02:36]: Yeah.

Demetrios [00:02:37]: What are you working on these days?

David Cox [00:02:39]: Yeah, so I work, and it may help to give a little context on my background. So I got into AI mainly from, like, the behavior science space. Technically, like, clinical work with humans. But you can think about what we're doing as more like behavioral ecology, but applied to humans. That is, you know, you look at the environment around people. How might you change that to change behavior? Like workplace settings. Right. Or clinically, things like that.

David Cox [00:03:06]: And so then, you know, how I got into AI, the stuff I'm working on now. As you can imagine, the environment is incredibly rich. All sorts of stimuli that we perceive and respond to influence our behavior. And in the mid-2010s, all of a sudden, a bunch of research started coming out: AI is better than doctors at X and Y and Z. And so I was, you know, a researcher interested in human decision-making, and I was like, what is this thing? And that kind of pulled me over into, again, kind of where I'm working now: how can we take information from the larger environment, sensory modalities, wearable technologies, things like that, and from those data understand why people do what they do, and then use that to help them make better decisions, healthier decisions, live happier, healthier lives. So.

Demetrios [00:03:51]: So it's not thinking about what the diseases are from, signals from wearables or from blood tests and all that. It's more why are people deciding to stay on the couch or eat those.

David Cox [00:04:08]: Exactly.

Demetrios [00:04:09]: Foods that. That pizza that I love, why do I choose that on Saturday instead of some broccoli and hummus?

David Cox [00:04:16]: Exactly. Yeah, that's exactly right. And what's kind of fun about that? In the area of psychology that I got my training in, behavior analysis, there's this pocket of literature called say-do correspondence. Basically, the idea is that we don't always do what we say. You know, if you were to ask me, why do you eat pizza instead of broccoli, I might say something. But that doesn't always match up with my behavior or the reasons why I might actually do something, you know, if you look at the data and really get into it.

David Cox [00:04:46]: So that's this really interesting dichotomy, kind of circling back to LLMs, right? Language, text-based data, is interesting, but it may not give us all the information we need to understand, you know, why do you do what you do, and how can I use that to help you make better choices?

Demetrios [00:05:00]: So what are some things that you look at?

David Cox [00:05:03]: So, interestingly enough, and this is kind of where I also touch on it, the mid-2010s brought me to AI via reinforcement learning, which I'm guessing a lot of listeners are familiar with. There's a whole pocket of biological literature on reinforcement learning with biological organisms. It's been around, you know, 100, 150 years. And so we look at a lot of the same stuff. You know, you're a rat in an operant chamber, you're a human, you're, you know, scrolling through Twitter, whatever. There's going to be a bunch of stimuli that are presented to you that come before some behavior you engage in.

David Cox [00:05:35]: After you engage in that behavior, something happens. Right. You know, I make a tweet and someone clicks the like button. And that unit, that antecedent-behavior-consequence chain, is what we kind of talk about. You can analyze those over time. And from that I can start predicting, you know, based on the behavior you engaged in, what's a reward or a reinforcer for you. And then I can start using that to predict what you'll do next. Twitter, Meta, they all use the same algorithms for human behavior, right? Yep.

David Cox [00:06:02]: We're just talking about doing that in the context of, like, health behavior. Yeah.
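To make that unit concrete, here is a minimal sketch (an editor's illustration with hypothetical events, not RethinkFirst's actual system) of logging antecedent-behavior-consequence records and counting which consequences reliably follow each behavior, the candidates for that person's reinforcers:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ABCEvent:
    antecedent: str   # stimulus present before the behavior
    behavior: str     # what the person did
    consequence: str  # what happened right afterward

log = [  # hypothetical observations
    ABCEvent("notification", "open_app", "likes_received"),
    ABCEvent("notification", "open_app", "likes_received"),
    ABCEvent("boredom", "open_app", "nothing"),
    ABCEvent("boredom", "go_for_walk", "felt_better"),
]

# Consequences that reliably follow a behavior are candidate reinforcers.
by_behavior: dict[str, Counter] = {}
for e in log:
    by_behavior.setdefault(e.behavior, Counter())[e.consequence] += 1

for behavior, consequences in by_behavior.items():
    print(behavior, consequences.most_common())
```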

Demetrios [00:06:06]: That's why I have so many notifications when I sign on to LinkedIn.

David Cox [00:06:10]: And you can't seem to not get distracted by them because they're so good at getting your attention.

Demetrios [00:06:15]: Yeah, it is killer. And so then how are you trying to help people make the right decisions from that? And basically, like, break it down. The data that you're gathering is more on how many steps I'm taking and.

David Cox [00:06:35]: Oh, sure, yeah.

Demetrios [00:06:35]: It could be like it's more exercise or deals with the body or it's also how much screen time I have.

David Cox [00:06:43]: Yeah. Could be all related. Right. So there's, you can imagine.

Demetrios [00:06:47]: Yeah.

David Cox [00:06:48]: I'm not sure how much you sleep, but take 24 hours in a day, minus the hours you sleep, and you get that many hours of behavior. Within that total set of behavior, there's a known principle of biological organisms called the matching law: you tend to allocate the amount of time to an activity that corresponds with the amount of reward that you get from that thing. So, you know, if I'm looking to, say, change the number of steps that you take, your physical activity levels, I would look and see, you know, how much time do you walk now? What else do you do in your day? What are those things that you find rewarding? Netflix. Right.

David Cox [00:07:28]: Or whatever, you know, maybe scrolling Twitter, all that kind of fun stuff. And then the question becomes, is there a way that you can contact more reward for physical activity than maybe you do from, say, Netflix or something? And this, you can imagine, has a lot of therapeutic implications. Um, you know, if someone's engaging in a lot of substance use, and I spent some time doing research in that area, I want to get you to shift away from, you know, cocaine to something else that might be a bit healthier. So you try to figure out, what can I create to compete with the reward that you get from that maybe unhealthy behavioral pattern, to shift it to some kind of healthier behavior pattern. But, you know, now we're also talking, like, these are data issues.

David Cox [00:08:11]: Right. Amazing in theory. How do I get the data that allows me to really understand your day? And fortunately, for the line of work I'm in, the last 10 years have been incredible. Right. Technology just allows us to collect so much more data than we could even 15, 20 years ago. But it's still no small challenge.
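The matching law David describes has a simple quantitative form: the share of time allocated to an activity roughly matches the share of total reinforcement obtained from it. A minimal sketch, with entirely hypothetical reward values:

```python
# Strict matching: time_share(activity) ~ reward(activity) / total reward.
rewards = {"netflix": 9.0, "twitter": 3.0, "walking": 0.5}  # arbitrary units

total_reward = sum(rewards.values())
waking_hours = 16.0

for activity, r in rewards.items():
    share = r / total_reward
    print(f"{activity}: {share:.0%} of time, about {share * waking_hours:.1f} h/day")

# The intervention logic from the episode: to shift time toward walking,
# raise the reward contacted through walking (or lower its competitors')
# and the predicted time allocation shifts with it.
```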

Demetrios [00:08:28]: Apple Watches and the Whoop.

David Cox [00:08:31]: Yeah. Oh, yeah.

Demetrios [00:08:33]: Yeah, me too. But the. The part that I am not clear on still is how are you proactively trying to associate more dopamine or serotonin or whatever is released in the brain when I exercise more or when I take more steps? How do you make that connection?

David Cox [00:08:54]: Yeah, yeah. So there's kind of a fundamental idea there, going back to the matching law: "behavior flows where reinforcement flows" is the fun phrase. Right. So if I can look at your behavior across the day at baseline, before I'm doing anything, I can roughly see, you know, you spend three hours on Netflix, 10 minutes walking. There's a lot of value in Netflix. What's going on there? How can I take something of that value and provide it so you can only access it, let's say, through physical activity? One popular phrase for these kinds of interventions is contingency management.

David Cox [00:09:29]: Very simple. A lot of research settings. You know, maybe you'd prefer to sit on your couch. But what if I give you $20 for hitting your step count today? What if I give you $50 a day, a hundred dollars a day? Right. You can start increasing the value by putting on these extra reward contingencies, as they're called. Right. They don't occur in the natural environment, but you can supplement, augment, add onto the reward value for the healthy behavior to get it to shift. And then clinically, usually the challenges are like, amazing.

David Cox [00:10:00]: You can get changes in physical activity for, like, less than a dollar a day. You can get people hitting their step counts.

Demetrios [00:10:05]: Wow.

David Cox [00:10:05]: The challenge then becomes kind of what you were leaning into a little bit, I think: it's amazing that I'm now walking, but do I now find walking intrinsically rewarding, so you can actually fade out those extra rewards and things I've added on there? Um, for some people that's easier. For others it's more challenging. Um, there's research ongoing there. But that's the basic idea of how you analyze it and think about it.
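As a toy illustration of that pattern (not a clinical protocol; the goal, payment, and fade rate below are all made up), a contingency-management rule might pay out on days the step goal is met, then decay the payment across weeks to fade the added reward:

```python
def payout(steps: int, week: int, goal: int = 8000, base_usd: float = 1.0) -> float:
    """Supplemental reward for meeting today's step goal, fading 20% per week."""
    if steps < goal:
        return 0.0
    return base_usd * (0.8 ** week)

# (steps, week) pairs for a handful of hypothetical days
days = [(8100, 0), (9400, 0), (7600, 1), (10250, 1), (8500, 4)]
print(f"total paid: ${sum(payout(s, w) for s, w in days):.2f}")
```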

Demetrios [00:10:31]: How are you reaching out to people? How are you setting these up? Do I just say, hey, whatever app it is, you have unlimited, untethered access to all of my data? Like the Whoop and my screen time and everything that I'm doing, to help me become a better person, based on goals that I've set.

David Cox [00:10:52]: Yeah. Wouldn't that be nice if you would give me that. Yeah.

Demetrios [00:10:57]: Honestly, like, I don't even care about any data privacy. But we could go over how you are keeping my PII clean.

David Cox [00:11:02]: Yeah, yeah, yeah. No. And I should say, at this point, data challenges are a real thing. Most of the work that I get into is in clinical settings. So, like, behavioral health settings where we have people coming in for therapy. I then know the totality of what's going on in that session. Right. I'm presenting maybe different learning trials.

David Cox [00:11:21]: They might be called things like, you're trying to work on, I don't know, improving your speech articulation. Right. Someone that's seeing, like, a speech therapist or something. Bring them in. I know the behavior I'm trying to change. I have all the data because they're in my context on, you know, when I'm presenting you different words to say, how exactly are you saying them back? If we're working on, like, different pairings, chains of sounds that you might emit. Right. What does that look like? Where do you start to make mistakes, and things like that? It's so much easier to get the data that you need in the types of clinical settings that.

David Cox [00:11:52]: That I'm kind of working in.

Demetrios [00:11:54]: Yeah.

David Cox [00:11:54]: Like pro-health behavior in daily life. I've set up a few systems for myself, just with my Whoop, time tracking, screen time, and things like that. You know, maybe there's a product play down the road. I don't know. I'm a researcher, a scientist, first and foremost. But yeah, the data challenge is a real deal. But most of the stuff. Yeah.

David Cox [00:12:12]: Is a clinical setting. It's kind of.

Demetrios [00:12:14]: Well, because it does feel like what I want needs to have such a 360 view of who I am.

David Cox [00:12:23]: Yeah.

Demetrios [00:12:24]: And it needs to understand all of my habits. The good, the bad, the ugly. And I think about how could I have a product in my life that has almost like God mode?

David Cox [00:12:38]: Yeah, yeah, yeah, that's exactly right. Yeah.

Demetrios [00:12:41]: And you're doing it from your side, which is almost like the only way that it can happen. Right. I know. I've had friends who would plug into their Oura Ring API and then create different scripts that could run off of that, which feels like the only way. It's very hacker-esque right now.

David Cox [00:13:00]: Oh, it's incredibly hacker-esque. Yeah. And that's kind of what I've done in my own life too, going back to this idea of the matching law. Right. For 15 years I've been tracking, from when I wake up to when I fall asleep, how do I spend my time?

Demetrios [00:13:11]: Wow.

David Cox [00:13:12]: Whoop data. Right? Same thing. I have custom scripts pulling in Strava and all that fun stuff. Peloton. But it's hard. I know there's still gaps in the day that are missing. And this is where, again, I get really excited about what's possible five years from now, ten years from now.

David Cox [00:13:28]: And how so much of this stuff. Right. You think physiological data, that's not really an LLM call, that's not, like, text-based. It's just raw, you know, physio data. Tensors make this beautiful to play with. And so it's. Yeah, but you're exactly right. It's God mode. Right.

David Cox [00:13:44]: How do you get. Get data on enough stuff? Maybe you don't need data on everything, but on enough things from my life that I can impact those things that are most meaningful to me. When, you know, when I think about what are my values, like what does it mean to live a good life, you know?

Demetrios [00:13:59]: Well, especially if you're like me and you try and set goals. I'm always trying to get better. And I imagine if I knew a lot more about what you know about, I would be better at giving myself the rewards to create those habits. Like, reading Atomic Habits was not good enough for me to become the perfect man that I'm trying to be. Yeah.

David Cox [00:14:23]: Yeah. Oh.

Demetrios [00:14:24]: Sadly, that did not work. But if there were the possibility of an app to be able to understand me and have access to everything, and then help me with the goals that I'm trying to hit. Like, just a simple thing that I'm trying to do these days, and I find it very difficult because it's breaking habits and putting new ones in, is read before bed. I read when I wake up really easily. But reading before bed for some reason is really hard. I would prefer to scroll TikTok.

David Cox [00:15:01]: Yeah, that's fair. I mean, who wouldn't prefer to scroll TikTok?

Demetrios [00:15:06]: Yeah, you can't fault me for it. But still, like, I want to. Yeah, I feel like I could do it.

David Cox [00:15:12]: Oh yeah. And I mean, there is a whole body of scientific literature called self-management that comes from the same behavior science literature. How can we, rather than having others impose these rewards on us, do it for ourselves? And there are a bunch of strategies. You know, off the cuff, someone might throw out: when you go to bed, rather than have your phone next to you with your alarm, it's across the room. So I set my alarm, I set it down, and then even that, increasing the effort to go get it to then scroll TikTok, is probably going to prevent me, especially if my book's right here. Right. So setting up that differential effort is one strategy, because most people choose the less effortful option.

Demetrios [00:15:53]: Oh, that's great.

David Cox [00:15:54]: There's a whole rich literature. What's kind of fun about it for me, going back again to the say-do correspondence, all that kind of fun stuff: we've known about all this stuff for decades. Implementing it is hard. Getting data on your own behavior can sometimes be hard. And we're not always aware of the reasons why we might pick that habit we're trying to break. You know, I want to do something different, but I find myself doing the same thing.

David Cox [00:16:19]: I also have a sweet tooth every night after dinner, regardless. You know, I can't break that one. But, you know, you start collecting data and analyzing your behavior over time, and you can start to understand: ah, you know, it's X, Y, Z. These are the reasons why TikTok is so appealing to me before I go to bed. Yeah.

Demetrios [00:16:36]: And so how does this play into AI? How are you using AI for these types of insights?

David Cox [00:16:44]: Yeah. So again, going back to the clinical space, because that's where I spent most of my time. I've done some of this with my own kind of quantified-self data, but more in the clinical space. So imagine you have a lot of this data. Again, going back to this idea of clinical decision-making, there's a lot of stuff that's come out. I'm not sure if you read the book Nudge. Long story. Cass Sunstein and Richard Thaler, I believe.

Demetrios [00:17:11]: What was it about? Because I feel like.

David Cox [00:17:13]: So the basic idea. Yes. There's this pocket of research in behavior science, a slightly different area but with similar ideas, called behavioral economics. Basically, in traditional economic theory, all humans are rational agents. We optimize, we always choose the best course of action. As we make decisions, we're always optimizing, maximizing. But, you know, you look around at humans: we smoke, we drink way too much. We make a lot of suboptimal decisions, as the fancy phrase goes.

David Cox [00:17:46]: And so the question this book Nudge asks is, you know, what are all the ways that we might nudge ourselves to quit making these suboptimal decisions and make better decisions for ourselves? And you can imagine that's also been applied to the clinical decision-making literature. Right. There's one study from 2016: the third leading cause of death in the United States was physician errors. Right. They made the wrong choice, somebody died. Yeah. So, going back to this idea, we have a lot of data. People are in these clinical contexts, healthcare professionals, they want to do the right thing, but they may not always make the best choice, and they may not always be aware of all the data and the things around them that could inform their decision.

David Cox [00:18:27]: So our AI comes in, and the things that I get excited about, and we have one thing that we built that's patent pending. You can imagine I have this data. The first thing I can do is unsupervised machine learning, right? All these different patients: let's just start finding different patient groups, patient cohorts, patient profiles, whatever you want to call them. Personas in marketing, same basic idea, right? Different patterns of clinical presentation that might then inform how I intervene, what kind of clinical intervention I provide. So unsupervised machine learning comes in there and allows us to identify really interesting groupings. Next phase: usually in some kind of healthcare setting, I'm trying to maximize patient outcomes. Classic supervised learning task.

David Cox [00:19:09]: All right, I have this kind of patient profile. Given everything I know about their clinical presentation and where I want them to be, what can I do clinically as a therapist to help move behavior in that direction and not in some other direction? Again, clinical decision-making, helping make optimal decisions versus suboptimal decisions. And even those in and of themselves, you can imagine you're starting to wrap around all sorts of different ML models, chaining ensembles together. Most people, when they enter some kind of therapeutic context, it's rarely one goal that they're working on. They may have 10, 20, right? So now you have, like, hierarchical things, you have decision trees. I mean, it gets incredibly exciting. But this is, you know, going back to the main theme that we wanted to start talking about: there's so much complexity there.

David Cox [00:19:55]: And I also need this thing to be controlled, compliant, transparent, all those fun things that you don't necessarily need an LLM for. Probabilistic output is also dangerous in healthcare settings. Right. I need it to be discriminative AI. It needs to be very good at what I'm asking it to do. Yeah. And that's. Yeah.

David Cox [00:20:12]: A lot of the work that I spent my time doing the last ten years or so.
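A minimal sketch of the two-stage pattern David outlines here: unsupervised clustering to surface cohorts, then a supervised model for the outcome of interest. The features, labels, and model choices below are illustrative assumptions, not the patent-pending system:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # e.g., standardized assessment/session features
outcome = (X[:, 0] + rng.normal(size=200) > 0).astype(int)  # e.g., goal met

# Stage 1: discover clinical-presentation cohorts without labels.
cohort = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Stage 2: predict outcomes, letting cohort membership inform the model.
features = np.column_stack([X, cohort])
clf = GradientBoostingClassifier().fit(features, outcome)
print("training accuracy:", clf.score(features, outcome))
```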

Demetrios [00:20:15]: I imagine there's a lot of room for folks to come in and either say that they've changed, when they haven't, or they have changed. It's just they change for a week and then they fall off the wagon again.

David Cox [00:20:33]: Yeah.

Demetrios [00:20:35]: Like weed those out.

David Cox [00:20:37]: Yeah, yeah, absolutely. So for the first one, for those that may say they've changed, and they may genuinely believe it. Right. I feel I'm a new man. Right. I'm a new person. Getting data on that behavior.

Demetrios [00:20:50]: That has been me. I will say, like, just so we're clear.

David Cox [00:20:54]: Yeah, yeah. I'm. I'm off my sweet tooth kick. Right? Yeah. Yeah.

Demetrios [00:20:57]: I started ice baths and now I'm a new man. I read before bed. It's all good.

David Cox [00:21:03]: Yeah. And. Yeah. And so in that context, you know, therapeutically, if I was your therapist: love that you're telling me that. Every day, though, I'd be asking you to collect data on your behavior in some capacity, or we'd figure out a way so you could see that. But then we'd look at your data and say, well, but your behavior, like, that was a spike. Your.

David Cox [00:21:21]: Your behavior hasn't quite changed to the level that you were after or whatever. So that's kind of one way. The other thing that you're talking about is this idea. A popular phrase that people might be familiar with is like relapse. Right. Yeah. I got better. Then I went back to my old patterns of behavior.

David Cox [00:21:39]: This is a classic behavioral pattern studied in the behavior science literature. You can think of it almost like a time series forecasting challenge. Right. What are the variables in the environment that, when they combine, allow me to predict the instances you're going to relapse versus the instances you won't? And then you can start again. Same idea. You need the data. But I can start saying: again, amazing that you said you took ice baths. You have this pattern of behavior in your history.

David Cox [00:22:04]: Demetrios, I can tell you're going to relapse next week. You know, let's keep you on the straight and narrow, or, you know, add in these additional therapeutic resources or whatever to get you over the hump this time so you won't relapse or whatnot.
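Framed that way, relapse prediction can be sketched as a classifier over windowed behavioral features. Everything below (features, labels, threshold) is a hypothetical illustration of the framing, not a validated clinical model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Each row: one person-week of standardized features (e.g., sleep, stress,
# craving self-report, time since last lapse); label: lapse in the next week.
X = rng.normal(size=(300, 4))
y = (0.8 * X[:, 1] - 0.5 * X[:, 0] + rng.normal(size=300) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

this_week = np.array([[-0.6, 1.2, 0.4, -0.3]])  # hypothetical person-week
risk = model.predict_proba(this_week)[0, 1]
if risk > 0.6:  # threshold is illustrative, not clinical guidance
    print(f"elevated relapse risk ({risk:.0%}): add therapeutic support now")
```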

Demetrios [00:22:17]: It reminds me a little bit. Do you ever feel like it's a little Minority Report esque where.

David Cox [00:22:23]: Oh, yeah, absolutely. So, yeah. Full confession at this moment in life: I'm a kid of the Matrix era. Yeah. You know, kind of came up in the Matrix era. Minority Report, Ex Machina, all that stuff was going on.

David Cox [00:22:36]: Right. When I was in my PhD for behavior science starting to work in AI. And I. I mean, I. The idea is incredibly intoxicating. Right. And I think what's also interesting is if you look at a lot of tech companies, arguably they're not to that same full extent, but they're using these same ideas. Right.

David Cox [00:22:56]: How can I make my product more addicting to keep your eyeballs on it? Yeah. What we're talking about is, like, we can do that exact same thing, but to help people live healthier, happier, better lives. You can use it in both ways. But yeah, very. I mean, I think that future, that Minority Report future, is coming at some.

Demetrios [00:23:14]: Point, you know, and I, I'm a hundred percent behind you on. I would much rather have what you're doing than me being subject to social media or my own vices and have it just happen to me passively, with or without me knowing it, you know?

David Cox [00:23:36]: Yeah. Oh, absolutely. I really like that idea that you mentioned, of maybe turning some of this system into a product and giving it back to the user to say, hey, connect in what you want. You know, I'm not gonna. And, I mean, technology's gotten to where we could run stuff on the edge these days. Like, I don't need to collect your data or save it anywhere. But, you know, give that back to the user and say: connect in whatever you want to get the data in there that you need, state your values, and then, you know, just build a model on Demetrios's behavior.

David Cox [00:24:06]: Right?

Demetrios [00:24:06]: Yep.

David Cox [00:24:07]: Build a custom model for you and nudge you, recommend things to you. There's a whole body of literature called just-in-time adaptive interventions. The basic idea is, you know, at that moment of choice, right before you make some kind of unhealthy decision, you get a nudge or a prompt or something to get you to make the healthier alternative. So.
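At its simplest, a just-in-time adaptive intervention reduces to a trigger rule over context signals. A minimal sketch; the signals, threshold hour, and message are invented for illustration:

```python
from datetime import datetime

def should_nudge(now: datetime, phone_picked_up: bool, in_bed: bool) -> bool:
    """Fire a nudge when the risky context occurs: in bed, late, phone in hand."""
    late = now.hour >= 22
    return late and in_bed and phone_picked_up

if should_nudge(datetime(2025, 4, 7, 22, 30), phone_picked_up=True, in_bed=True):
    print("Nudge: your book is on the nightstand. Ten pages before TikTok?")
```

In a real system the trigger would come from a model of the person's own moment-of-choice patterns rather than a hand-written rule, but the delivery logic is the same.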

Demetrios [00:24:23]: So basically when I am right about to grab my phone before bed, it's like, oh, remember that?

David Cox [00:24:33]: Yeah.

Demetrios [00:24:34]: You know what I always think about, as something that I want to start doing, is just before I read a book to my kids for bed, I shut off the phone. And just the act of having the phone off and then having to restart it is enough of a barrier for me to say, ah, fuck it, I'll read a book.

David Cox [00:24:57]: Yeah. Yeah. Oh, I love it. Yes. You're already thinking about this idea.

David Cox [00:25:06]: If I increase the effort or the delay to the reward, those are classic examples that we know reduce its value. So, you know, what's more immediate, lower effort? We tend to choose those things. So, yeah. Put the carrots at the front of the fridge with the cake behind it, you know, and I'm still gonna go for the cake. Cause it's just so good.
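That effort-and-delay idea has a standard quantitative form in behavioral economics: hyperbolic discounting, where a reward's subjective value is V = A / (1 + kD) for amount A, delay (or effort cost) D, and an individual sensitivity parameter k. The numbers below are illustrative assumptions:

```python
def discounted_value(amount: float, delay: float, k: float = 1.0) -> float:
    """Hyperbolic discounting: value falls off as delay/effort grows."""
    return amount / (1.0 + k * delay)

# Hypothetical scenario: phone across the room adds ~30 "units" of effort/delay
# to scrolling TikTok; the book on the nightstand is immediately available.
tiktok = discounted_value(amount=10.0, delay=30.0)
book = discounted_value(amount=6.0, delay=0.0)
choice = "read the book" if book > tiktok else "scroll TikTok"
print(f"TikTok: {tiktok:.2f} vs book: {book:.2f} -> {choice}")
```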

Demetrios [00:25:20]: And so I'm constantly trying to figure out those efficiencies. And I imagine you start to see areas where you can do more. And just from me talking to you, I really appreciate this, because I recognize that one thing I want to do, and it's that say-do dissonance in my own life, is I constantly tell myself I'm gonna go to bed early, and before I go to sleep, I'm gonna read a few pages and then go to sleep and nail all my sleep score goals. Right?

David Cox [00:25:56]: Oh, yeah. Yeah.

Demetrios [00:25:57]: Be the perfect man. And that only happens when I'm traveling on my own. When it's at home, it's a disaster, usually. And thinking about it in the way of how can I set up more friction is something that I didn't necessarily actively do. It was more that I would do it in a. In a way that I just recognized. Oh, yeah. Like, I don't sleep with my phone in the room.

Demetrios [00:26:29]: I sleep with it outside in my living room. That's one thing that I make sure to do because I recognize that I don't look at it in the morning and I read in the morning. Right. And so doing it the reverse way and just shutting it off before I read my kids a book.

David Cox [00:26:46]: Yeah.

Demetrios [00:26:47]: Is setting that friction consciously and then, like, regaining a bit more of my willpower.

David Cox [00:26:56]: Yeah. Oh, I love it. The other thing I think is interesting that you said is that it's easier for you to do when you're traveling. So the other thing that I'd be curious about is, like, what happens as you're leading up to the end of the night that makes TikTok that much more valuable? Like, are you, I don't know, already jacked up, and you're like, yeah, now I also need to get some TikTok? Whereas on the road, maybe you're already calmer, and so you're like, I'm gonna get into a book.

David Cox [00:27:21]: But yeah, those are. Most behavior is what we call multiply controlled. Right. Dozens or hundreds of things come in to influence it. So it's trying to figure out, you know, what are the main things or the things that I might be able to tweak or tug on to get the behavior I want versus the one I don't.

Demetrios [00:27:37]: But, yeah, I imagine it's not only the behaviors that you can tweak or tug on. It's the ones that if you tweak and tug on them, they're going to have a domino effect in what you end up doing.

David Cox [00:27:52]: Yeah. Yep. Exactly. Right. It's funny you say that. So I'm also faculty at Endicott College and have some doctoral students.

David Cox [00:28:01]: We're working on a paper right now on this idea. We call it keystone contingencies, if you're familiar with keystone species in, like, ecology. So the basic idea there is, look at Yellowstone National Park, right? You have some kind of ecological system, and there's a species within it, like the wolf, that if you were to remove it, the whole ecosystem would reorganize. And they learned this the hard way, and then they reintroduced them. Whatever. A bunch of other areas have played around with the same idea.

David Cox [00:28:29]: Keystone people in social networks, keystone actors in corporate America, whatever. We're playing around with this idea of keystone contingencies in your own life. Same idea. Right. Is there one behavior or thing that I might change that has this ripple effect throughout the rest of my day? And for me, it's running in the morning. Um, if I run in the morning, I drink less alcohol the night before, my diet tends to be better, I'm more focused at work. And it's that one thing.

David Cox [00:28:55]: If I can tweak that one thing, I have a better day. Not optimal yet.

Demetrios [00:29:00]: It's a keystone moment. It's like this pillar. I also think there's a lot to be said for how you end up seeing yourself. Yeah. And like, what you identify as, you identify as a runner now. And. Oh, yeah, it is something like, well, if I'm a runner, I gotta run.

David Cox [00:29:19]: Yeah, yeah. Oh, fair. Yeah, yeah. And if you're a nighttime reader.

Demetrios [00:29:23]: Yeah.

David Cox [00:29:24]: You're gonna have to read before the nighttime.

Demetrios [00:29:25]: Yeah, that. That's exactly it. Like, you end up doing These things because you identify as that.

David Cox [00:29:32]: Yeah, yeah, agree with that. Yeah.

Demetrios [00:29:34]: I can't remember where I heard that, but it was in something. It might have been Atomic Habits or it might have been some other habits book. On how it's much easier to create a habit if you identify as that type of person.

David Cox [00:29:49]: Oh yeah, sure, whatever.

Demetrios [00:29:50]: Like if you're, if you're a smoker and you identify as a smoker, it's a lot harder to kick the habit of smoking because it's like, yeah, I'm a smoker.

David Cox [00:29:58]: Oh yeah, definitely. And there's some really decent literature suggesting the same kind of thing: the language that we use can add value to activities or whatever. Um, which fits the same idea. Right. If I call myself a smoker, then it adds more value to the cigarette in addition to the nicotine. Whereas, you know, if I'm a non-smoker but I happen to hit it, same nicotine, same stuff going in. But just that bit of language can change the reward value, which is kind of crazy to think about. I mean, humans are nuts.

Demetrios [00:30:27]: Yeah. And this is also something that is fascinating to look at through a machine learning model.

David Cox [00:30:35]: Oh, absolutely. Incredibly fascinating.

Demetrios [00:30:38]: Are there certain ways or certain sentence structures that for you tend to encapsulate you in certain behaviors? Thinking about the sentence structures and analyzing the way that if somebody comes in and they are speaking in certain ways, is that indicative of certain actions?

David Cox [00:31:00]: Yeah. Oh yeah, definitely. And there's a whole body of research called framing effects that studies this kind of stuff. But yeah. And going back to how AI and ML fit in: I'm personally just enamored with unsupervised machine learning. This idea that I don't know what I don't know, and it's going to tell me stuff. That's just incredibly intoxicating to me. So where I think AI would be interesting, going back to this too, is, you know, you come in and just, like, talk about things that you feel you do well, things you don't do well. Can I analyze that behavior, understand your frame of reference, your perspective?

David Cox [00:31:35]: And then, you know, there are some cultures that have different colors for white that I've never perceived. Right. Just this idea that AI can wrap around language as a human phenomenon. And then can I bring that into that maybe therapeutic setting and say: hey, here's how you talk about and perceive the world. Here's an alternative perspective that you may not have even known existed, that may help you. Maybe we can get you to frame things, you know, in this different way. And then we can again go back to the data. Does this actually change your behavior? Yes. No. Probability of relapse, all that kind of fun stuff.

Demetrios [00:32:05]: It almost feels like. And this is where I'm drifting into AI hype territory, because you gotta be really careful as you're doing this. But I know there's a lot of people that will talk to OpenAI's Realtime API, or, like, the voice component. It's very shaky, and anybody that's used it knows that it's not amazing. But I see a potential world where you're sending a voice memo and you're just blabbering on.

David Cox [00:32:39]: Yeah.

Demetrios [00:32:40]: With a few different prompts, and then that gets analyzed. And potentially you can do it with an LLM at first, but then you get more specific and more. Oh yeah, fine-tuned models. Fine-tuned for you and what you're going through, I guess, if it's needed.

David Cox [00:33:00]: Yeah, absolutely. Have you seen the movie Her?

Demetrios [00:33:03]: Yeah. Oh yeah, yeah.

David Cox [00:33:05]: I feel like that's kind of, you know, it's the thing that's with you that hears your language. It sees what you see, and it can, you know, tap into the collective conscious, but then build models specific for you and what you're after. And then again, you know, from my angle, there's no reason we can't do that and help everybody live happier, healthier lives. It doesn't always have to be. I mean, there's a downside to any technology. You know, if this exists.

Demetrios [00:33:30]: Yeah.

David Cox [00:33:31]: Bad actors will do what they do.

Demetrios [00:33:32]: But what are other ways that you've seen, where you like what you're seeing, with unsupervised machine learning?

David Cox [00:33:40]: Oh, sure. So another kind of similar flavor, but now in the education context, because I work in higher ed, teach classes, things like that. Same idea with students. I did some work with a company called Glimpse K12 when I was doing my data science postdoc. Same similar idea, where I can take patterns of behavior in educational settings, assessment scores, whatever, and from that identify kids that may need more resources to be successful in a classroom setting. If you're a teacher or whatever, you have 20 kids, 30 kids. It's hard to give everybody what they need to succeed. And so you can use any kind of unsupervised machine learning, or any kind of method, to identify learner types, student types, whatever, to then match kids up with resources so that they have a higher probability of being successful at learning the things they need to learn.

David Cox [00:34:30]: Personalized systems of instruction are another big thing. Right. Rather than every kid getting the same, you know, set of tasks on the chalkboard, Johnny's going to get one set of math problems and Elizabeth's going to get another set, because they're just different students with different skill.

Demetrios [00:34:45]: Sets, different resources, but yeah, different ways of learning.

David Cox [00:34:49]: Yeah.

Demetrios [00:34:49]: Actually, back when I used to live in Spain, I was teaching and you learn right away that each student is very different in how they learn and what they need to learn. And some are visual, some are auditory, some need to speak and all that. And I, I come back to the idea of we see so many, or we hear about how small groups tend to be the most effective when it comes to teaching, or just one on one. And then you can be very personalized. And so how can we, as you're saying, create modules that are personalized for each individual that are hitting their needs instantly? I think about how do you know, how do you get the data? How do you have the context to know what each individual needs? Right. Because if I'm just coming into school on day one, you don't know that. I like to listen more and I learn through auditory methods.

David Cox [00:36:02]: Yeah. Assessment scores are, I mean, the only thing that a lot of places have right now. Where I've seen it, and even some of my own work, where it's been most successful is online learning platforms. Right. Canvas, some of those places where I have a bunch of student behavior captured in a browser. When are you logging on? What are you engaging in? How long are you reading something? When you submit an assignment, what's the quality of that? What grade do you get back? What's the feedback the teacher's giving you? Those kinds of things. But yeah, no, again, this is where, kind of going back to it: I know some of the things I've thrown out, maybe in the MLOps community. I love that people are building some of these crazy systems.

David Cox [00:36:41]: But a lot of the stuff we've talked about today are data challenges. Like, how can I start getting the data I need for some of these very important real-world problems? Then, I mean, we can just start with classic ML. We don't even have to get crazy. Let's just do something to help out some of these kids, because, you know, a lot of times they fall through the cracks.
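As a sketch of the first step he describes, here is how Canvas-style browser events (the schema and values are assumed for illustration) might be rolled up into per-student features that could then feed the same clustering step as the patient-cohort example above:

```python
from collections import defaultdict

events = [  # hypothetical (student, signal, value) rows from an LMS event log
    ("johnny", "login_hour", 21), ("johnny", "reading_minutes", 5),
    ("johnny", "assignment_score", 0.62),
    ("elizabeth", "login_hour", 16), ("elizabeth", "reading_minutes", 40),
    ("elizabeth", "assignment_score", 0.91),
]

# Roll the event stream up into one feature vector per student.
features: dict[str, dict[str, float]] = defaultdict(dict)
for student, signal, value in events:
    features[student][signal] = value

# These vectors are what you'd hand to a clustering step to identify learner
# types and match students with resources.
for student, f in features.items():
    print(student, f)
```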

Demetrios [00:37:01]: Yeah.

David Cox [00:37:01]: And if you have a tough start in kindergarten, it can cascade through elementary school, and maybe it's hard to catch up, but.

Demetrios [00:37:08]: And it is so true that it's a data challenge in that if you're in a class, you're a teacher in a class with 30 kids. How are you getting the data to know what these folks need in these moments? And so it makes sense that if they're interacting with a website, you can get much richer data. Or if you have them do these assessment tests, that's a great start too. But I feel like, yeah, you gotta constantly be capturing that data.

David Cox [00:37:42]: Oh yeah. And I mean, don't get me wrong, there's some really cool research going on. You know, I put cameras in rooms.

Demetrios [00:37:51]: Oh wow.

David Cox [00:37:52]: Right now I'm tracking what every kid is doing throughout the day, what they're getting exposed to, what's their behavior, all that kind of fun stuff. Like, most kids are on iPads these days. So now imagine I can start integrating data from multiple sources. Again, I've only seen this from a research perspective, primarily in maybe a hospital or mental health setting, or large public spaces. Right. I'm trying to understand where people walk or whatever. So we do have some computer vision technology that I think allows us to at least start playing with these data sets. I've only seen this stuff research-wise, haven't seen a product, right?

David Cox [00:38:28]: Nobody's offering: hey, you know, K-12 school, buy my cameras and we'll, you know, do this stuff for you. I don't think it's in their budget, even if they. Exactly, something like that. Um, but yeah, I think the technology's there.

Demetrios [00:38:41]: That's gotta be the hardest part is like we've got this great technology to help your students, but it's super advanced and it probably isn't cheap.

David Cox [00:38:52]: Yes, oh, exactly, yeah. And that's where, you know, I think some of the people in the MLOps community, they have these skills, they're incredibly smart, talented, working on stuff. And I know half the challenge, like we mentioned, is: can I get the data? Can I make the ROI worth it for a product to be built? And I have to imagine people out there in the community have the answers. It's just, you know, yeah, get that.

Demetrios [00:39:19]: Data or something. Get that context. I really like that idea of: how can we get more data to get more context for the kind of stuff that is going to make an impact? And it doesn't necessarily need to be context for your LLM context window.

David Cox [00:39:38]: That's right, that's right. Yeah, exactly. Yeah, yeah. And I actually love that you said that, because I think if you look at the behavior science literature, whether it's a human, whether it's, you know, a frog, an eagle, whatever, we understand behavior by understanding the larger context within which it behaves. And there's a great analogy with LLMs, right? The more the LLM understands, hey, this is what I need you to do, and here's the information you need, the better the output. The same is true with humans, right? It's just, you know, we talk a lot, we love language, and so often we default to talking with people: why do you feel that way? Why are you doing what you're doing? But there are other ways to understand why we do what we do that are often more accurate but more data intensive. Yeah. It requires people like the people in the MLOps community to run those data through algorithms, extract the insights, and put them in front of a teacher who's not going to be coding in Python. Right. They just need a dashboard.

Demetrios [00:40:32]: Or something. Exactly. They need that product, that end-user product. Exactly. Yeah.

David Cox [00:40:41]: SA.

