Causal Bandits Podcast
Causal Bandits Podcast with Alex Molak is here to help you learn about causality, causal AI and causal machine learning through the genius of others.
The podcast focuses on causality from a number of different perspectives, finding common ground between academia and industry, between philosophy, theory and practice, and between different schools of thought and traditions.
Your host, Alex Molak, is an entrepreneur, independent researcher and best-selling author who decided to travel the world to record conversations with the most interesting minds in causality.
Enjoy and stay causal!
Keywords: Causal AI, Causal Machine Learning, Causality, Causal Inference, Causal Discovery, Machine Learning, AI, Artificial Intelligence
Causal AI & Individual Treatment Effects | Scott Mueller Ep. 20 | CausalBanditsPodcast.com
Can we say something about YOUR personal treatment effect?
The estimation of individual treatment effects is the Holy Grail of personalized medicine.
It's also extremely difficult.
Yet, Scott is not discouraged from studying this topic.
In fact, he quit a pretty successful business to study it.
In a series of papers, Scott describes how combining experimental and observational data can help us understand individual causal effects.
Although this sounds enigmatic to many, the intuition behind this mechanism is simpler than you might think.
In the episode we discuss:
🔹 What made Scott quit a successful business he founded and study causal inference?
🔹 How a false conviction about his own skills helped him learn?
🔹 What are individual treatment effects?
🔹 Can we really say something about individual treatment effects?
Ready to dive in?
About The Guest
Scott Mueller is a researcher and a PhD candidate in causal modeling at UCLA, supervised by Prof. Judea Pearl. He's a serial entrepreneur and the founder of UCode, a coding school for kids. His current research focuses on the estimation of individual treatment effects and their bounds.
Connect with Scott:
- Scott on Twitter/X
- Scott's webpage
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality.
Connect with Alex:
- Alex on the Internet
Should we build the Causal Experts Network?
Share your thoughts in the survey
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
Scott Mueller: But when I read The Book of Why, that sort of changed things for me. Science is the amalgamation of theories that have survived rigorous falsification. You know, sometimes you do want to stop. You do want to be derailed, because you really are headed in the wrong direction. One common theme in my life is that at certain times I thought I was amazing at something, whatever it was, and it turns out I'm really bad at it. And that kind of woke me up. I won't stop until...
Callum: Hey, Causal Bandits! Welcome to the Causal Bandits Podcast, the best podcast on causality and machine learning on the Internet.
Jessie: Today we're traveling back to Los Angeles to meet our guest. During his high school years, he was a pretty damn good tennis player, but his love for computers was stronger than his love of tennis. After college, he rejected an offer from a big tech company to join a startup. Later on, he founded a successful business, which he quit to devote his full attention to studying causality. Ladies and gentlemen, please welcome Mr. Scott Mueller. Let me pass it to your host, Alex Molak.
Alex: Welcome to the podcast, Scott. Thank you. How are you today? I'm good. How are you? Very good. The weather is sweet today. Konrad Lorenz, who is considered the founding father of ethology, the study of animal behavior, once said that thinking is acting in an imagined space.
The majority of your current work concerns counterfactual thinking, the so-called rung three of the Pearlian ladder of causation. When you think about this quote and your work, what are your thoughts?
Scott Mueller: Oh, it's a great quote. I heard it for the first time recently in Bernhard Schölkopf's keynote presentation at CLeaR.
I love it because, on one hand, the quote speaks to how we estimate counterfactual probabilities, because we are imagining hypothetical worlds and building causal models for those worlds. Of course, we can't really experience those worlds, run experiments in those worlds, or act in those worlds, but we can estimate things that are happening in those worlds based on what's happening in our world. So from that perspective, it's a great quote. But there are also the more philosophical or metaphysical ideas behind it: when we think about something, we are running it through in our minds, and in this imagined space we are performing those actions, no matter how strange or outside of reality those scenarios are. In a way, it bridges the causal models that I think about frequently for counterfactual reasoning and our everyday thoughts, how we think about things.
Alex: What are the main challenges that you face in your work today?
Scott Mueller: My work today, the direction of my research, is on being able to make better decisions at an individual level, but also at the level of having policies that are effective for a large population; making better decisions based on counterfactual reasoning.
You know, counterfactual reasoning hits upon the fundamental problem of causal inference: we can't observe the outcomes for two different treatments. We can't have somebody take a medicine, then go back in time and have them not take the medicine, and see what happens at an individual level. Of course, we can't really do that at a population level either, but at a population level we can estimate averages in a pretty straightforward way. The trouble with this fundamental problem of causal inference is that it makes it difficult to get point estimates, or to identify these probabilities.
A while ago, Judea Pearl came up with bounds on probabilities of causation, particular counterfactual probabilities that are interesting: the probability of necessity, the probability of sufficiency, and the probability of necessity and sufficiency. There are some others that are pretty interesting as well. Even though those bounds were proven tight, meaning that mathematically we can never do any better without further assumptions, they are often too loose to base decisions on. Too loose to be useful. And so the challenge is: how do we narrow those bounds?
Ideally, we narrow them so much that they become point-identified, but at least, sometimes, to a degree that allows us to understand the underlying data-generating processes better, or make better decisions. So that's the challenge. Now, I can't do any better than what they've done; they've proven that. But we can think about what additional assumptions would allow us to identify these counterfactual probabilities or narrow those bounds significantly. There are a lot of assumptions you can make to do that, but which assumptions are most reasonable? And which assumptions make sense for a particular scenario?
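[Editor's note: the bounds Scott refers to here are the Tian-Pearl bounds on the probability of necessity and sufficiency (PNS), which combine experimental and observational data. A minimal Python sketch; the function name and all example probabilities below are illustrative, not taken from the conversation.]

```python
def pns_bounds(p_y_do_x, p_y_do_xp, p_xy, p_xpy, p_xyp, p_xpyp):
    """Tian-Pearl bounds on PNS, the probability that treatment is both
    necessary and sufficient for the outcome at the individual level.
    Experimental inputs: P(y|do(x)) and P(y|do(x')).
    Observational inputs: the joint distribution P(X, Y)."""
    p_y = p_xy + p_xpy  # observational P(y)
    lower = max(0.0,
                p_y_do_x - p_y_do_xp,   # experimental lower bound (the ATE)
                p_y - p_y_do_xp,        # observational refinement
                p_y_do_x - p_y)         # observational refinement
    upper = min(p_y_do_x,               # P(y|do(x))
                1.0 - p_y_do_xp,        # P(y'|do(x'))
                p_xy + p_xpyp,          # P(x,y) + P(x',y')
                p_y_do_x - p_y_do_xp + p_xyp + p_xpy)
    return lower, upper

# Experimental data alone, with P(y|do(x)) = 0.7 and P(y|do(x')) = 0.3,
# gives PNS in [0.4, 0.7]. Adding a hypothetical observational joint
# distribution tightens the interval:
lo, hi = pns_bounds(0.7, 0.3, p_xy=0.35, p_xpy=0.10, p_xyp=0.15, p_xpyp=0.40)
```

With these invented numbers, the observational terms shrink the upper bound from 0.7 to 0.65: a small illustration of how combining the two data sources narrows the interval, which is the mechanism discussed throughout this episode.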
Alex: In one of your papers, you have shown that if we can afford to make the so-called monotonicity assumption, this can be very helpful. Can you share with our audience what this assumption is?
Scott Mueller: Basically, monotonicity says that there's no possibility of harm, harm in the counterfactual sense.
As a semi-concrete example: you can take some medicine, and hopefully you get better from your illness. Harm would be that if you take the medicine, some toxic element of the medicine prevents you from getting better, but if you had not taken the medicine, you would have gotten better on your own; your own immune system would have taken care of the illness.
Alex: There's a negative counterfactual effect. Yes. At an individual level. At an individual level.
Scott Mueller: Right. And if the probability of that happening is zero, if that's not possible, then we have monotonicity. For certain scenarios that is definitionally impossible; for other scenarios, we may know the underlying biological mechanisms of the medicine, and we know that this can't happen. And once you have monotonicity, you can do a lot of other things. In particular, you can now point-identify the probability of benefit (the probability of harm is zero), but also other counterfactual probabilities of that nature: the probability of necessity, the probability of sufficiency.
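[Editor's note: under monotonicity these quantities collapse from intervals to points. A small sketch of the standard identification formulas, with hypothetical inputs; the function name and numbers are illustrative, not from the episode.]

```python
def causation_under_monotonicity(p_y_do_x, p_y_do_xp, p_y, p_xy, p_xpyp):
    """With monotonicity (P(harm) = 0) and both experimental and
    observational data, the probabilities of causation are
    point-identified:
      PNS = P(y|do(x)) - P(y|do(x'))         (equals the ATE)
      PN  = (P(y) - P(y|do(x'))) / P(x, y)   (probability of necessity)
      PS  = (P(y|do(x)) - P(y)) / P(x', y')  (probability of sufficiency)
    """
    pns = p_y_do_x - p_y_do_xp
    pn = (p_y - p_y_do_xp) / p_xy
    ps = (p_y_do_x - p_y) / p_xpyp
    return pns, pn, ps

# Hypothetical numbers: experimental P(y|do(x)) = 0.7, P(y|do(x')) = 0.3,
# observational P(y) = 0.45, P(x, y) = 0.35, P(x', y') = 0.40.
pns, pn, ps = causation_under_monotonicity(0.7, 0.3, 0.45, 0.35, 0.40)
```

The payoff is that PNS, PN and PS are now single numbers rather than intervals, which is what makes the monotonicity assumption so valuable when it can be defended.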
Alex: What is the main challenge, do you think, that causes those methods not to be applied in practice more broadly?
Scott Mueller: The methods of assuming monotonicity and then identifying these? Yeah, identifying counterfactual quantities. So for one, how do you know you have monotonicity? Maybe you don't have monotonicity, and assuming it would be a problem, because you're going to get skewed results. Number two, maybe you do have monotonicity, but how do you know? And this is the subject of a paper I'm working on now. I presented part of it at a conference held by the RAND Institute, and I've been improving the paper, almost done with it, along with Judea Pearl.
We show how you can sometimes show from the data that you do have monotonicity. It's pretty rare to be able to show that from the data alone. What is much more feasible is that sometimes you can show from the data that you definitely do not have monotonicity.
Alex: Mm-hmm. So there is a relatively good test, yes, that we can use in order to say for sure that we don't have monotonicity. And there's a weaker test that can tell us that we do have monotonicity, but only in a subset of conditions.
Scott Mueller: Yes. It's only in very particular situations, which will be pretty rare in practice, that you can confirm that you have monotonicity. But disproving it is certainly a lot easier; as we all know, in general, disproving things in science is far easier. And then, further, we say: okay, maybe you don't have monotonicity, but how much is it violated? And so we write about what the limits on the violation of monotonicity are in your particular situation. Then, even though we might not be able to identify the counterfactual probabilities, we can narrow those bounds further, because we know there's a limit to the violation of monotonicity.
Alex: Mm
hmm. So it's like putting a bound on the bound. Yeah. Is that
correct?
Scott Mueller: Yes. Uh, putting a bound on monotonicity, which puts a bound On the probability.
Yes.
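[Editor's note: the "bound on the bound" idea has a simple arithmetic core. From the identity PNS - P(harm) = P(y|do(x)) - P(y|do(x')), capping the monotonicity violation at P(harm) <= eps immediately caps PNS. A hedged sketch; the cap and the numbers are illustrative, not taken from Scott's paper.]

```python
def pns_bounds_capped_harm(p_y_do_x, p_y_do_xp, eps):
    """If monotonicity may be violated, but by at most eps
    (P(harm) <= eps), then since PNS = ATE + P(harm):
        PNS lies in [max(0, ATE), ATE + eps].
    Setting eps = 0 recovers point identification, PNS = ATE."""
    ate = p_y_do_x - p_y_do_xp
    return max(0.0, ate), min(1.0, ate + eps)

# Plain experimental bounds with P(y|do(x)) = 0.7, P(y|do(x')) = 0.3
# give PNS in [0.4, 0.7]; capping P(harm) at 0.05 narrows this a lot:
lo, hi = pns_bounds_capped_harm(0.7, 0.3, eps=0.05)
```

In practice one would intersect this interval with the plain Tian-Pearl bounds, since the cap only ever tightens the upper end; the sketch just shows why a limit on the violation translates directly into a narrower interval.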
Alex: Yeah. You know, I always love all those creative ideas that we produce in the causal community, because here we are basically at the verge of human knowledge, in a sense. And everything that we can apply to, I want to say verify, but actually I should say falsify, those models is the same set of tools that we have in science in general. So in this sense, causality, or causal modeling, could be nicknamed scientific modeling. And, I agree, it's basically the same thing.
Scott Mueller: Yeah. I mean, so often in science we're asking that question: what caused this? What are the consequences of this? What are the effects of the causes, and the causes of the effects?
Alex: There's another quote that you shared with me before we met for the recording today, one that comes from Marcos López de Prado, who works in finance. Would you like to share this quote as well?
Scott Mueller: Yeah. Well, I read it in his book, Causal Factor Investing. I think he quoted it from somebody else; I didn't pay attention to who the original author of the quote was. But it's that science is the amalgamation of theories that have survived rigorous falsification. I love that. We had a discussion about it before, but I love the simplicity of that statement of what science boils down to. And, you know, when you think how amazing the world we live in now is, because of all the scientific advancements that so many great people before us came up with throughout the centuries and in modern times, and it just keeps accelerating, it's amazing. So where is all this knowledge coming from? How do we know it's really true? We think, oh, well, there have been all these experiments, and we've mathematically shown various things to be the case, and we've proven this and not proven that. But really, it's very, very hard in science to prove things. It's much easier to disprove things. And so our knowledge is mostly based on all of these theories that we've come up with and tried to disprove, but they have survived. And if they have survived all that attention trying to disprove them, then our confidence rises that we understand what's really going on in the universe.
Alex: This aspect of survival is really great. It also brings to my mind the evolutionary dynamics that we have in science. When I was listening to you now, I recalled one of my conversations with a person who applies causality in practice. When I was talking with them about model evaluation, they told me: hey, we start with something, then we check the consequences of this model, the predictions of this model, against reality. Maybe we do some intervention, and we see if the model is generating data similar to what we get from the intervention. This iterative process is basically an evolutionary process. I heard very similar things from other practitioners, and I also use the same iterative approach in work with my clients.
What do you think we could do in order to make the findings from your work more accessible to practitioners today?
Scott Mueller: Yeah, great question. Because that's ultimately what I hope: that I'm able to contribute value to people. If I just write papers that nobody reads, then I haven't really done anything, even if I'm lucky enough to have some valuable content there, or some valuable discoveries.
Alex: I know at least three people who are reading your papers. Okay. Yeah. That's great. But I think that's just a very small subset of people who are reading your papers.
Scott Mueller: I'm eager to provide value to this amazing community, to this world that we are living in in modern times, any way I can. I'm excited about that. So first I have to actually do something valuable, and then I have to have people actually use it and benefit from it. Aside from publishing, going to conferences, speaking, and that kind of thing, my background is actually in software development and computer science, so I hope to develop some applications and software frameworks, and contribute to projects like DoWhy. Thanks to you, I've recently talked with some of the folks over there, and I'll be adding to their open-source library some of the theorems and formulas that I've come up with.
Alex: That's really great to hear. I'm super excited about this. I think there's a lot of great theoretical work in causality, and there's a great opportunity today to start translating this work into convenient software packages that people can easily use. Something that has an API that looks familiar. I think DoWhy is doing an excellent job with this, using a very similar approach to what scikit-learn does for general machine learning: building a consistent API for different models, different approaches, and so on. That makes it easier for people who don't necessarily have time to implement something from scratch, or to go through ten different libraries. It makes it easier for them to apply this, and it increases the probability that they can actually get this stuff running in production.
Scott Mueller: Yeah, that's part of it. That would be amazing. I know it's a popular project, I think it's growing in popularity, and I think you're right that DoWhy may become the scikit-learn of causal inference.
That benefits software developers; it's a Python framework. But I also hope that some software developers will use it as a base for even more general approaches. Maybe building a nice GUI on top of it, where you can be an expert in a particular field, in econometrics or biology or something else, and you don't need to be an expert in software development or in some aspects of causal inference, yet you can still make use of this at an even higher level.
Alex: What do you mean? Building interfaces that are accessible to people who maybe don't necessarily code, but have domain expertise they could use?
Scott Mueller: The interfaces could be drag-and-drop, click here, that kind of thing, or it could be an integration into an existing package. I don't know too much about econometrics or some of the other fields. Maybe epidemiologists have some nice software package that allows them to model things; if they do, maybe it's an integration into that popular software package, so they sort of get it for free.
Alex: Yeah, I think, you know, there are many different fields. I mean, we have people in marketing. Marketing is vividly interested in causal inference, and not all of them have the expertise to code themselves.
Scott Mueller: Right. So, marketing: I did a lot of internet marketing when I was doing internet businesses, and I was not a marketing expert. I worked with some marketing experts, and I tried to learn. One popular thing that pretty much every internet marketing software package has, even built into something like Google AdWords, is A/B testing. Some of the work that I and colleagues of mine have done, and especially the great framework for unit selection that Ang Li started, is really valuable to marketers. There's a huge industry around A/B testing, where you have multiple different kinds of ads that can go out to the same target audience, and you split randomly who gets to see which ad. From there you get to understand, as the campaign is running, which is better performing on average.
That can be severely suboptimal, and this is a core thing that's built into so many marketing platforms. But with something like DoWhy, and with some of the work that I and other colleagues have done, if that can be integrated into these marketing platforms, they can get a lot of this for free. The unit selection framework is such that you may specify certain extra attributes that you don't need to specify with A/B testing: what is the value to you of advertising to somebody who will respond favorably to your ad? What is the negative value to you of advertising to somebody who would be harmed by the ad, in the sense that they will not buy your product if they see your ad, but they would buy your product if they don't see it? So you might have to update the interface a little bit, but essentially you get a lot of this counterfactual reasoning at a low cost, cognitively.
Alex: Scott, for those people in our audience who are not familiar with the unit selection framework, could you give us a short definition?
Scott Mueller: The unit selection framework was created by Ang Li and Judea Pearl. Ang Li was a PhD student of Judea's and is now a professor at Florida State University. He wrote this great paper on unit selection that allows you to specify the weight, or value, you would place on four different types of responders.
We can talk about this in terms of a marketing example. The treatment is that you show somebody an ad, and the response is that they buy your product. The four different responders are, first, a complier: if you show them an ad, they will buy the product, but that same person, if you don't show them the ad, will not buy your product. Then there's the always-taker, who will buy your product whether or not you show them the ad; the never-taker, who will not buy your product whether or not you show them the ad; and then there's the one you really don't want to advertise to, which is the defier. If you don't show the defier the ad, they're going to buy your product, because they need it, or because they want it and they know about it. But if you show them your ad, they are not going to buy your product. Maybe the ad is offensive to them, or maybe it reminds them of something, or maybe it lets them think about it more and wonder if there's a better option, for whatever reason. The defier is the one that the ad will turn away.
So you have these four response types, and you place a value on each one. Clearly you want to advertise to the compliers; those are the ones for whom your advertisement will be effective. And although it's okay to advertise to the never-takers and the always-takers, because in general it doesn't really matter whether or not you advertise to them, to me, and this is part of the extensions to this framework that I, Ang Li and others have come up with, it might matter a little bit, because the advertisement costs money. Maybe there's a fee for each email sent out, or maybe there's a physical package you send as part of the promotion, or maybe it's a discount. So they would typically have a negative value, or at best zero value. And then the defier might have a pretty strong negative value; the defier might have a more negative value than a complier has a positive one. And there's where the nuances of your particular scenario or your particular business come into play.
You know, how much do you value these, and how does that play into your strategy? What he showed, and what we would later write papers on, together and individually, is this. The typical way that we know how to advertise better, continuing with this example, is through A/B testing: what messages work and don't work. We send out two messages, or two advertisements, or two packages, to the same group, but we randomly pick who gets which treatment. And it might not be two; it might be five or ten. So we send out multiple advertisements, and from there it's sort of like a randomized controlled trial. We can see which is doing better, and there are optimization techniques for deciding that, in this particular batch of advertisements, for this target demographic, or target web pages or emails or physical addresses, you want to send out mostly the advertisement that has been shown to be, on average, most conducive to products sold. But that can be severely suboptimal for your particular scenario or business or strategy.
And that's shown in these papers using this approach to unit selection. It's pretty easy to apply unit selection, even though it's based on counterfactual reasoning and counterfactual probabilities. These response types are all individual-level treatment effects, and we know you can't identify those. But if you plug in observational data, and hopefully combined experimental data (maybe you only have experimental data; you can plug in only experimental data too), we can give you bounds on the overall value. So maybe advertising to this particular set of units, or subpopulation, gives you a value of between negative 20 and positive 20. And then it's like, okay, well, I don't know. But advertising to this other subpopulation gives you a value of between positive 30 and 35. So it's good; choose the second one.
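[Editor's note: the advertising example can be sketched end-to-end. With experimental data only, each response-type probability is a linear function of the unidentifiable defier share, so payoff bounds fall out of interval arithmetic. A simplified sketch; the payoffs and probabilities are invented for illustration, and the full Li-Pearl framework derives tighter bounds, especially when observational data is added.]

```python
def unit_selection_bounds(p1, p0, beta, gamma, theta, delta):
    """Bounds on the expected per-unit payoff
        V = beta*P(complier) + gamma*P(always-taker)
          + theta*P(never-taker) + delta*P(defier)
    from experimental data only. With p1 = P(buy|do(ad)) and
    p0 = P(buy|do(no ad)), every response-type probability is a linear
    function of the unidentifiable defier share d:
        P(defier) = d,              P(always-taker) = p0 - d,
        P(complier) = p1 - p0 + d,  P(never-taker)  = 1 - p1 - d,
    and d is only known to lie in [max(0, p0 - p1), min(p0, 1 - p1)].
    V is linear in d, so its extremes occur at the endpoints."""
    d_lo = max(0.0, p0 - p1)
    d_hi = min(p0, 1.0 - p1)

    def value(d):
        return (beta * (p1 - p0 + d) + gamma * (p0 - d)
                + theta * (1.0 - p1 - d) + delta * d)

    v1, v2 = value(d_lo), value(d_hi)
    return min(v1, v2), max(v1, v2)

# Hypothetical campaign: the ad lifts purchases from 30% to 60%,
# a complier is worth 10, a never-taker costs 1 (wasted ad spend),
# and a defier costs 20 (a lost sale plus the ad cost).
lo, hi = unit_selection_bounds(0.6, 0.3, beta=10, gamma=0, theta=-1, delta=-20)
```

With these invented numbers the A/B test alone says the ad "works", yet the payoff interval still straddles zero: if defiers turn out to be common, the campaign loses money. That gap between the average effect and the unit-level payoff is exactly what Scott is pointing at.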
Alex: That's sufficient to make a better decision. Yes. Scott, you started with computer science, then you moved to business. Yes. And then you decided to go and study causal inference. What's the unifying thread, or unifying perspective, over all of those years and experiences?
Scott Mueller: Great question. I certainly didn't have a unifying strategy in my mind of how my life was going to unfold from college on. In college, I studied computer science as an undergraduate, and later also for my master's and PhD, but that came afterwards. At that time, I finished with a bachelor's degree in computer science, and I loved it. I loved computer science; I loved the mathematical nature of it. Of course, as you're learning computer science in college, there is some programming that you learn as well, and I loved the programming, I loved creating things. When I graduated, there was a thought in my mind: should I stay and get a master's degree or a PhD, something in academia, or go into the workforce? For me, the tech world was really exciting. I had friends who were doing very well in the tech world, and they were convincing me to join them in various companies or start new things. So that thought of staying in school was a very low priority, and I went into the tech world as a software developer.
Those were really exciting times for me: the prospect of a lot of people using my software, of making a lot of money, of doing important things, and the whole excitement of the industry was intoxicating. Eventually, I wanted to start my own companies, and one in particular led to a series of other startups that I founded, co-founded, or joined very early on. That brought me from software development into more of a business-type mindset and less actual hands-on software development, though I never quite lost that. I love software development even to this day. I do very little now, really a small amount, but to me, that's fun. I was a TA for a really phenomenal professor at UCLA, and he still does software development; he talks about it as therapeutic for him, and I can completely see that. For me, if there's an option on a Saturday night to go out to the movies, or hang out with friends, or go to the beach during the day, or sit in my room and develop software, sitting in my room developing software is a lot more fun.
Alex: Even in California.
Scott Mueller: Yeah, wherever. California has great beaches and all that, but that pales in comparison, in my mind, to the fun that can be had creating something that never existed before through code. So then I started a company. That first company ended up not working out, but I have just great memories of it. We raised some money for it, and we built it up to about 50 employees at its peak. Then, at the end, I had to let them go, parts at a time, until we had none left. And then I went through a series of companies. The successes, of course, were really fun. Now I was making actual money, and the money is exciting because I wanted a life where I don't have to worry about money, number one, but also to have the luxuries of life that money entails, and here I was starting to do that. But even from the failures, I just have really great memories of trying to build something great with people you become very close with. That whole experience was wonderful. Maybe my brain is not remembering the bad times so much, or is placing a low emphasis on them, and so I mostly remember the good times.
And so I, I only remember, um, mostly the, uh, the good times. All along, you know, I do, I always had this sort of deep hidden, uh, desire to go back to school. You know, because I think. You know, even as exciting as tech was, the tech business, the tech world, uh, and even as much as I enjoyed kind of the practical nature of software and software development, in my mind, I felt like my brain is less well suited for that.
And more well suited for the kind of more mathematical academic, uh, type of, uh, thinking that that's part of computer science. I think I always had this, uh, desire and they're sort of nagging at me, uh, to go back to school. And while I was at my previous company, Prior to going back to school, U Code. I loved it.
I was teaching, we were teaching, I mean, the company that I created was, was formed to teach kids, uh, computer science. You know, we had the opportunity for some kids to go really, really deep in the computer science. And I loved it, uh, many levels. While I was there, still around today, but while I was there, we taught over 10, 000 students.
Kids, and we made some pretty big impacts on their lives. So that, that was a very fulfilling company in many ways. But at the same time, while I was at the company, AI had really started to have an impact in the world. And impact on companies, people started to become much more interested in it because it was, it was showing, it was, it was becoming obvious how, how useful it was.
And for me, that was super exciting because, you know, the dream to get to artificial general intelligence, the super intelligence, human level AI, that to me was, that's going to change the world. Even if, if we can get close to that, clone a trillion copies. I mean, there are practical limitations. Um, and so, you know, That's not necessarily going to be the case, but you can imagine cloning a trillion copies of this almost human, uh, AI in some ways far surpassing human capabilities in some ways, not quite there, but, uh, but close.
And then, hey, you guys, you trillion AIs, uh, AI people, agents, go work on cancer or go work on this particular problem we have in the world and solve it. Those dreams, those ideas were too much for me to not, not want to do something about, not want to be part of, not want to hopefully contribute to. And, and so there it became, okay, you know, I always thought about going back to school.
I always felt like one day I needed to get my PhD, or at least a master's degree, and that my brain was better suited to that. Even though I had some success in tech, and even though practical software development was fun for me, I still felt like I belonged more in an academic or research environment.
So I went back to school for AI. But during the process, when I was applying to graduate programs, I came across The Book of Why. AI was super exciting, and I was constantly dreaming about what it could do, but I didn't know how to contribute to it. I had gone through the TensorFlow tutorial online, and not a whole lot beyond that, so I didn't know the details at all. So I went on Amazon and bought a ton of books on everything related to AI, some at a very high level and some at a deeper level, and my plan was just to go deeper and deeper and figure out where I wanted my expertise to be headed. One of the books I bought was The Book of Why by Judea Pearl, and that really caught my attention. I started reading it. At that point I had been accepted to USC and had committed, paid the deposit, but the first classes had not started yet.
This was just post-COVID, when they were still figuring out how to run classes online and that kind of thing. But when I read The Book of Why, that sort of changed things for me. This goes back to a common theme across software development, business, and going back to school.
One common theme in my life is that at certain times I thought I was amazing at something, whatever it was, and it turned out I was really bad at it. That woke me up at various points and made me think, wow, this is a surprise: here I was with an ego about this particular thing, and that confidence was totally unwarranted, and my life would change around that. And here was an instance where I thought I was good at business statistics, high-level statistics, looking at data, interpreting it, and coming to good conclusions about it. Then I read The Book of Why, and I realized I was really a novice at that. There's no question I had made poor decisions in the past based on data when I thought I had made great decisions. That was not only surprising to me, and made me think this was something I could not go further in my life without knowing better, but I also felt that this was fundamental to human reasoning: thinking causally, thinking counterfactually.
And we know that even babies do this. There have been experiments on this; I think Daniel Kahneman wrote in Thinking, Fast and Slow about experiments with toddlers: you remove a block at the bottom of a tower and they expect it to fall, but if the other blocks have been glued together and they don't fall, the babies recognize it and are surprised. Yeah. And if we want to get to human-level AI, in my mind this was a core thing we had to bake into our machine learning models and architectures.
Alex: What do you think we are missing the most today in our approach to modeling, in order to move forward towards this vision of, as you said, generally intelligent agents?
Scott Mueller: Yeah. AGI, superintelligence, the much smarter AIs than we have today that could solve really significant problems. I don't want to say too much, because my focus has been on causality and causal inference, not on AI. I've certainly played with ChatGPT and LLMs and various other machine learning architectures and AI models, symbolic too, but just played with them. I know there are some people who think we just need to increase the size of these models, the number of parameters, plus some architectural tweaks, and we will get there: these properties will emerge, causality will emerge. I haven't seen that, but what do I know? Maybe that will happen. I know the idea behind predicting the next word, although it sounds very primitive and basic, is that we're taking the whole universe into account, and that eventually these models will take the whole universe into account in such a way that there's very powerful reasoning going on. I haven't seen any of that yet. My feeling is still that causality, the mathematics and the science behind causality that humans have developed through this century and even previous centuries, needs to be baked into these architectures. That was my thought very early on getting into causality, and that thought remains until I see otherwise. I'm pretty excited about going in that direction. Right now I've been developing, or trying to develop, my expertise in causal inference, and the hope is that I can then apply it to these architectures. I know some work is already going on in that vein.
Alex: What are you working on currently?

Scott Mueller: The general direction of my research is better decision making: personalized decision making, policy-based decision making, any kind of decision making. Because fundamentally, that has to do with counterfactual estimates, counterfactual probabilities, counterfactual reasoning.
And the problem with counterfactual reasoning is the fundamental problem of causal inference: we cannot observe different outcomes for different treatments at the individual level. We can't give you a medicine, see if you get better from your illness, then go back in time, not give you the medicine, and see what happens. We can do that at a population level, on average; we can estimate that with randomized controlled trials. But because of this fundamental problem of causal inference, often we cannot identify these counterfactual probabilities. And because of that, we're limited in some of the decision making we could otherwise do more optimally. So a lot of my research is figuring out how we can identify these counterfactual probabilities, and if we can't identify them, how we can get bounds that are narrow enough to make good decisions from.
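The idea Scott describes, combining experimental and observational data to bound a counterfactual probability, can be illustrated with the Tian–Pearl bounds on the probability of necessity and sufficiency (PNS) for a binary treatment and outcome, which is the quantity studied in his papers with Pearl. The sketch below is illustrative, not code from his work; the function name and the example numbers are hypothetical.

```python
def pns_bounds(p_y_do_x, p_y_do_xp, p_xy, p_xyp, p_xpy, p_xpyp):
    """Tian-Pearl bounds on PNS = P(y_x, y'_x') for binary X and Y.

    Experimental inputs: P(y|do(x)) and P(y|do(x')).
    Observational inputs: the joint distribution P(X, Y),
    given as P(x,y), P(x,y'), P(x',y), P(x',y').
    """
    p_y = p_xy + p_xpy  # observational P(y)
    lower = max(0.0,
                p_y_do_x - p_y_do_xp,   # purely experimental term
                p_y - p_y_do_xp,        # these two terms add
                p_y_do_x - p_y)         # observational information
    upper = min(p_y_do_x,
                1.0 - p_y_do_xp,        # = P(y'|do(x'))
                p_xy + p_xpyp,          # observational refinements
                p_y_do_x - p_y_do_xp + p_xyp + p_xpy)
    return lower, upper

# Hypothetical drug example: the experiment alone bounds PNS in
# [0.2, 0.5]; adding observational data narrows it to [0.2, 0.3].
lo, hi = pns_bounds(p_y_do_x=0.5, p_y_do_xp=0.3,
                    p_xy=0.25, p_xyp=0.05, p_xpy=0.05, p_xpyp=0.65)
print(round(lo, 3), round(hi, 3))  # 0.2 0.3
```

Note how the observational joint distribution enters both the lower and upper bounds: when observed behavior differs from the experimental distribution (here P(y|x) is much higher than P(y|do(x)), indicating confounding), the interval can tighten considerably, which is exactly the kind of narrowing Scott's research aims for.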
Alex: Scott, what are one or two things that you learned while running your business, or your businesses, that help you today when you work with causality and think about causality?

Scott Mueller: I can't say that there was anything I learned in business that applies directly to coming up with theorems or mathematical formulas for computing counterfactual probabilities, anything like that. But what I can say is that there were many decisions in business, some of them important, where I thought I had made a very good decision based on the information available to me at the time. The exciting thing for me, exciting and disheartening in some ways, when I read The Book of Why was realizing that I was not always interpreting data properly, and that there's no way I always made good decisions based on the data or the information I had about the scenario.
So now, when I think about counterfactual probabilities in particular, where I'm trying to come up with reasonable assumptions somebody could make about a scenario that would let me narrow the bounds on probabilities of causation, or identify counterfactual probabilities, I have some sense of what a reasonable assumption is. I can look at a scenario and think: yeah, that assumption could have been made in certain situations in business, and wow, it would have really helped. I could have made a great assessment of the situation and taken actions, made decisions, that would have really benefited the business. So there's a bit of motivation and excitement in that, but also, hopefully, some value it gives me in coming up with assumptions that are more useful to others.
Alex: Who would you like to thank?
Scott Mueller: Well, I've been very fortunate that there have been many people who have had a huge positive influence in my life and given me so much value. I'm going to limit this to not include any of my family, because otherwise I'll be here for a while and bore you and anybody listening, and stick to maybe two people who come to mind right now who have been huge mentors, role models, and positive influences on my education and the development of my growing expertise.
Number one, of course, is Judea Pearl, my PhD advisor. I cannot believe, even right now, how lucky I am that I know him, that I get to work with him, that he is a mentor to me. I feel like Judea is going to go down in history as one of the great minds, at the level of Isaac Newton or Albert Einstein. Already, in some circles, people have that level of respect and reverence towards him, and I think that will grow as the foundational things he has done scientifically become a bigger and bigger part of the future of science, and especially AI. But aside from that, he's just such an amazing human being: everything he has gone through in his life, and every phone call, every visit, every meeting I have with him. It's not just learning from the incredible thinking and knowledge that he has. And you've got to know him at least a tiny bit: he's an extremely entertaining individual. There are a lot of things I want to say right now, but I feel like I'm going to bore you and others by extending this to be quite lengthy. He's just an incredible human being, and I absolutely cannot thank him enough for the time he has given me.
There's one other person, Klaus Schauser. He was a computer science professor of mine in my undergraduate days, which were now quite a while ago, and he invited me out to lunch. Not just me; he invited anybody who wanted to come. I remember, during his class, he was starting a company. He was entrepreneurial, he was into the growing tech world, and he just wanted to talk to us. I don't know if many other people showed up, but I got to have what was essentially a one-on-one meeting with him, and we developed a friendship. I have kept in touch with him over the years. He would end up investing in two of my businesses, and he became extraordinarily successful: he founded GoToMyPC, which was bought by Citrix, and he founded AppFolio, which has done extraordinarily well. He's another just incredible human being. I can't thank him enough for the time and resources he's given me, the introductions he's made, the mentoring he's done for me. He's done so much for me, and it has been quite a rewarding relationship.
I don't want to go off on too many tangents, but I do want to say this one thing about Klaus. He was an investor in one of my early startups, the first investor, and really a co-founder with me. I would get pretty excited; I was young, and I thought certain ideas I had were just gold and were going to be momentous. And he would say, well, that sounds good, that's a great hypothesis. And I would think: this is going to be great. We read an article together about irrational exuberance, about how a lot of very successful tech entrepreneurs were irrationally exuberant about their ideas. And he would remind me that I was being irrationally exuberant, but that that's okay: you need to be irrationally exuberant to succeed sometimes, and it can be a positive quality. There are going to be tough times, and you need to be motivated and passionate enough that they won't deter or derail you. You need that passion as you grow the company, and other people need to share the same vision that you have. So, a lot of great lessons. I owe a lot to both Klaus and Judea.
Alex: What would be your advice for people who are just starting something challenging or complex? Maybe people who want to enter causality or machine learning in general, or people who want to start their own company, in tech or outside it. What would be your advice for them to keep going when they meet all those challenges on their way, be they intellectual challenges or business challenges?
Scott Mueller: I've been given a lot of great advice, and I feel like I have knowledge, but I don't know that I have a lot of value to give to somebody else in this respect, because sometimes you do want to stop, you do want to be derailed, because you really are headed in a wrong direction. You have to know when that's the case, versus: no, you need to stick through this, this is just a speed bump, and great things await you if you persevere. I can't make that judgment for anybody else. What I can say is that in business, this is exactly the kind of decision you need to make: whether to push forward, whether to pivot, whether to make a tiny pivot, whether to reimagine your whole business, or whether to exit entirely. These are big decisions, and decision making is an area that's really interesting to me, especially decisions that benefit from counterfactual analysis. So if somebody asks how to push through, or what to do in the face of obstacles in business: that's a decision you need to make, and my advice for decision making is to learn counterfactual analysis. There are other authors and researchers writing interesting books and papers on the subject, so read them. Hopefully I can contribute to this growing body of knowledge, and you can read my work too.
In academia, I think things are maybe a little different. Clearly it's the same in the sense that sometimes you are struggling with an idea that isn't a good idea, or isn't a good path to solving a problem, and you need to recognize that, stop struggling, and do something else. But I guess it's a little different from business in that those hard problems can be really great opportunities. Those are the kinds of problems that are very interesting to solve, especially the high-impact hard problems. For me personally, that's pretty exciting. Not that I have superpowers to solve problems, but that's fun, you know? Those are the puzzles.
Alex: What keeps you going when you encounter something like this on your way?
Scott Mueller: I don't know that I have a good answer, because it's not that I tell myself: Scott, this looks like too hard of a problem, but you've got to have conviction, you've got to persevere. I'll say a couple of things about this. I don't know that I've ever had that situation where the problem was too hard. Maybe my ego won't tell me, hey, you're not capable of solving this, or maybe it's because I haven't taken on too hard of a problem. I certainly have problems I'm working on right now where the solution is not evident yet, but I believe the solution is there. And if I believe a solution is there, my mindset is: I've got to find it. The way my brain works, which maybe wasn't as good for business, despite some successes, and maybe is better for academia, is that I won't stop until I find it, or until I recognize: okay, it seemed like it was evident, but no, I've spent a lot of time on this, and either it's not possible or I'm not the one to find it. I think I try to find problems that I feel confident I can solve. I don't really have a discipline or recipe for sticking with problems that are too hard, perhaps because I just haven't allowed myself to pursue those kinds of problems. I look for problems that I can solve. Or Judea will call me up and say, hey, what do you think about this? And I'm like, oh yeah, I think I can do that.

Alex: Where can people find out more about you and your work?

Scott Mueller: I should have a website. I don't, though, which is funny given my former life in tech; I of all people should have a website. Well, Judea has a website where he lists his papers, and most of my papers have him as a co-author. So you'll find my work on Judea's website at UCLA. But now you've made me think I really should have a website. I'm going to make one.
Alex: You're also on Twitter, right?
Scott Mueller: Yes.
Scott Mueller: Oh yeah, that's true. I don't post a whole lot, but I do enjoy the academic side of Twitter.

Alex: Great. Scott, it was a pleasure. Thank you so much.

Scott Mueller: My pleasure.