Causal Bandits Podcast

[Extra]: Mosquitos, Pascal & Hedge Funds || A Walk with Darko Matovski, PhD (causaLens) in London (2024)

Alex Molak Season 1 Episode 0

Send us a text

Support the show

Video version available on YouTube
Recorded on Sep 4, 2023 in London, UK


A causal bet
Darko's story begins in Eastern Europe, where his early attempts at building a business and the influence of early role models shaped his attitudes and helped him through challenging and lonely moments in his career.

See how mosquitoes, the Pascal programming language, and problems with generalization in vision models inspired Darko to build a company that today helps some of the world's top companies streamline and deploy causal inference workflows.

Learn how his hedge fund experience shaped his thinking about business.

Causal Bandits Extra is a series of conversations with people involved in or interested in causality from business, social, and other non-technical perspectives.

------------------------------------------------------------------------------------------------------

About The Guest
Darko Matovski, PhD is the co-founder and CEO of causaLens, a $50M venture-backed scaleup. He holds a PhD in Computer Science and an MBA from the University of Southampton.

Connect with Darko:
- Darko Matovski on LinkedIn: https://www.linkedin.com/in/matovski/
- causaLens web page

About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causal machine learning.

Connect with Alex:
- Alex on the Internet

Causal Bandits Team
Project Coordinator: Taiba Malik (https://www.instagram.com/taibasplay/)
Video and Audio Editing: Navneet Sharma, Aleksander Molak

Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
Causal Bandits: https://causalbanditspodcast.com
The Causal Book: https://amzn.to/3QhsRz4

Sponsorship Disclaimer: This episode has been made possible with the support of causaLens. We appreciate their contribution.


Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com


Alex: [00:00:00] Welcome to Causal Bandits Extra.
Alex: Darko, you started causaLens around 2017. Causality back then was a very niche topic, as I remember myself.
Darko Matovski: Extremely niche, yeah.
Alex: How was it? How was the experience?
Darko Matovski: It felt very lonely. We're now so excited to have this community that is growing at an exponential rate. But when we started, it was very, very lonely.
But from the first day, we felt that this was going to be big one day, and we just wanted to make sure that we laid those foundations. So when the time comes, like 2023, and the community is growing exponentially, we can support the community with cool success stories from the real world.
But, to answer your question, it was very lonely at the beginning; it was a handful of people.
Alex: What made you take this route? 
Darko Matovski: Uh, it was a [00:01:00] contrarian bet.
Alex: A contrarian bet. Is it related? A part of your story is related to hedge funds.
Darko Matovski: That's right, yeah. So, I was in hedge funds before starting causaLens.
Same for my co-founder Max; he was also in the hedge fund world.
Alex: He's also a physicist, right?
Darko Matovski: He's a physicist, yes, and obviously I'm a computer scientist. At the time, I remember reading a Financial Times article about AlphaGo beating the world champion at the game of Go. And everybody was getting extremely excited about deep learning at the time.
So everybody was pivoting all of their AI research towards deep learning. They were like, okay, deep learning is going to solve everything. And Max and I had a very contrarian view. We felt deep learning is great in certain situations; if it's a board [00:02:00] game and there are fixed rules, then that's great.
But in the real world, we knew that just learning historical patterns can get you into trouble. We both observed this working in the hedge fund world, where, just using correlations, the past almost never repeats in financial markets. The other thing we learned from hedge funds is that the way to win big is to take a contrarian view.
If you do what everybody else is doing, you get paid a little bit, but not a lot. The time when you get paid a lot is when you do something completely different from everybody else. And of course, you can't just be contrarian; you need to be contrarian and right.
Luckily for us, causal AI was a contrarian bet that was right, and it is becoming more right by the day. [00:03:00]
Alex: So it sounds like you took two things from the experience that you gained in the hedge fund environment. One was regarding the nature of the patterns, and the lesson that in the real world the patterns might change rapidly, and the past is not necessarily predictive of the future. And the second one was regarding taking a bet on something. So this is a lesson from the hedge fund world?
Darko Matovski: That's right. Those are the two key lessons. Yeah, that's right.
Alex: That's very interesting. What made you go into causality, in the context of your career? You started in computer science.
Darko Matovski: That's right. Yeah. So when I was doing my PhD, it was in computer vision. It was in biometrics, recognizing things at a [00:04:00] distance with cameras and things like that. And we tried many, many approaches there. We tried deep learning; we tried model-based approaches. What we learned was that in the lab, deep learning worked really well, because the background was the same.
Everything was the same. So the more data we threw at it, the better the deep learning became. Now, as soon as we took the deep learning out into the real world, it all broke down, and we were trying to understand how this could be. We had incredible performance in the lab.
As soon as we took it out into the real world, it just broke down, like, catastrophically. It's not like a small deterioration of performance; it just broke down. So then we decided to try something else. We decided to take a model-based approach to learning. So, say, you can learn the [00:05:00] features of the human body: how the torso moves, how the arms move, how the legs move.
Or if you're doing facial recognition, you can learn kind of a 3D shape of the head, and you can say, well, the eyes have to belong in this part of the model; they can't be where the ears are, and so on. And we realized that the model-based approach had slightly worse performance in the lab.
But when we took it out into the real world, it retained its performance. It was generalizing so much better that it was just not comparable. And that's really the other motivation for me for causality: if you don't have a model, and if you don't really understand the cause-and-effect mechanisms, you can do really well in the lab, but you're not going to do really well in the real world.
And I think we've seen that in the enterprise [00:06:00] today: about 85 to 90%, depending on which statistic you read, of the projects never make it outside the lab. And this is fundamentally the reason for it: you get incredible results in the lab, but as soon as you take them out into the field, they either fail to deliver, or the humans that are going to use those algorithms and those applications do not trust them.
Because they don't understand what the causal model behind it is, or what the model behind it is. And so, I think, to be able to take AI from the lab to the real world, we need to solve two things. One: will this thing work in the real world? And second: will the humans using it trust it, and will they understand how it works? So those are the two fundamental things that causality [00:07:00] helps us achieve.
Alex: When you look at your clients, your customers, what are the main use cases where people apply causal models, and when they give you feedback, what aspects of those causal models do they focus on? What is important?
Darko Matovski: Absolutely. I'll give you a concrete example with one of our most recent clients. It's a really cool use case. It's around modeling a physical system. So think of it as, it could be a manufacturing process, could be a device that fits into a machine, something that has a true reality to it, like a cause-and-effect mechanism in the real world.
So if you think of a manufacturing line, there is cause-and-effect transmission, because there are real parts connected to real parts. Maybe you have some sort of cog here, another cog here, and there may be a [00:08:00] kind of a chain that connects the two. So there is a real-world causal link between one part of the process and another part.
There's a chain that connects them; it's truly causal, right? This thing causes this other thing; there's no question about it. So, what they did is they tried to solve this problem themselves with deep learning first. They threw a lot of data at the problem. They kept doing it, and it worked really well in the lab.
But as soon as they took it out, it just failed catastrophically, because the real world was different from the data they had collected in the lab. Then they tried to work with six companies that do deep learning. They thought, okay, maybe it's us; maybe we don't know how to fit a model.
Let's try to get external help. They went through six companies. They got to a point where they improved the performance. But [00:09:00] their senior leadership did not understand how this thing worked, and they were not able to explain the real-world connectivity in this machine, right? Like, this cog moving here causes this other cog to move there.
They were not able to explain it. So they decided to abandon AI altogether. They were like, this is never going to work. And then one of their data scientists came across causality, came across causal AI and us, and they decided to give it one last chance. We started doing causal discovery, so automated causal discovery, on the data.
We discovered essentially a kind of digital twin of the machine. And then we were able to say, let's now give this proposed causal diagram to your domain experts, people that really understand the ins and outs of this machine. The humans [00:10:00] looked at this causal diagram and said, it's mostly right; there are a few things that are not quite right.
This cog here is not really connected to this cog here, therefore there's not really a causal relationship; but because we never had a fault in this part of the machine, your algorithm hasn't detected a causal link. So they were able to actually insert that causal link themselves.
Alex: So your causal discovery process is an iterative process where humans are also involved?
Darko Matovski: That's right. Humans are actually there before the models even get built. And that's fundamentally different from deep learning and generative AI, where you just throw a lot of data, you get a model, and then, you know, good luck, kind of thing. Whereas here we have the human involved from the very beginning.
Alex: How many steps do you usually plan in a process like this? So you start with expert knowledge, then you push this expert knowledge to [00:11:00] the algorithm, then the algorithm, or algorithms, I suppose, return some results, and then you again consult the experts. Is that the process?
Darko Matovski: Yeah, it's usually two or three rounds. Doesn't take too long. And then we build the causal model.
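The loop described above, where an algorithm proposes a graph and domain experts correct it over two or three rounds, can be sketched in a few lines. This is a toy illustration, not causaLens's product or API: the "discovery" step is a deliberately naive pairwise-correlation stand-in for a real causal discovery algorithm, and every name here (`discover_edges`, `expert_review`, the cog variables) is hypothetical.

```python
# Toy sketch of human-in-the-loop causal discovery, as described in the
# conversation. The discovery step below is a crude correlation heuristic,
# a stand-in for a real structure-learning algorithm; all names are
# illustrative, not any vendor's API.

from itertools import combinations

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def discover_edges(data, threshold=0.8):
    """Propose undirected edges between variables whose absolute
    correlation exceeds a threshold (a crude discovery stand-in)."""
    return {
        frozenset((a, b))
        for a, b in combinations(data, 2)
        if abs(corr(data[a], data[b])) >= threshold
    }

def expert_review(edges, add=(), remove=()):
    """Experts insert links the data never exhibited (e.g. a part that
    never faulted) and delete links they know to be spurious."""
    edges = set(edges) | {frozenset(e) for e in add}
    return edges - {frozenset(e) for e in remove}

# One round of the loop: cog_a mechanically drives cog_b; the noisy
# sensor is unrelated, so no edge should be proposed for it.
data = {
    "cog_a":  [1, 2, 3, 4, 5, 6],
    "cog_b":  [2, 4, 6, 8, 10, 12],
    "sensor": [5, 1, 4, 2, 6, 3],
}
proposed = discover_edges(data)
# An expert adds a link the algorithm missed because that part never failed.
final = expert_review(proposed, add=[("cog_b", "gearbox_chain")])
```

In practice the `proposed`/`expert_review` cycle would repeat, as Darko says, for two or three rounds before the causal model is fixed and used for estimation.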
Alex: When you were a child, was there anything, any thoughts or any ideas, that you could connect to what you're doing today?
Darko Matovski: Yeah, good question. I definitely wasn't thinking too much about causal AI when I was a child, but I was very passionate about doing things, about building stuff in general that has a positive impact.
So my first entrepreneurial venture was when I was maybe six or seven, first or second year at school. My cousin was doing electronics at university, and he was building this device with an ultrasound that supposedly chased mosquitoes away. And that particular summer [00:12:00] was pretty bad with mosquitoes.
So I decided to learn how to do soldering and just build a bunch of these devices.
Alex: And so then he gave you, like, a blueprint?
Darko Matovski: Exactly, he gave me the blueprint. So I didn't invent the device, but he had all the parts. I was like, if I can get 20x the parts, I can build 20 of these.
And he was like, I don't know, I mean, I can bring you some transistors and resistors and things from the lab, and you can build them if you like. But why do you want to do that? I was like, well, everybody at school was complaining about insect bites, so maybe I could build something to help them.
[00:13:00] And maybe I could increase my pocket money at the same time. So I built 20 of these things and took them to school, well, actually, I built one first, an MVP, to test the market. I went to school, and I was able to sell it within the first 10 minutes of showing it. Everybody wanted to see it.
So I took some pre-orders and decided to build 20 for the whole class. And my entrepreneurial lesson from this experiment was that you also need to worry about payment terms, and discuss those things before giving the product away. So I ended up with a substantial lack of payment.
But it doesn't matter; the whole class now had fewer [00:14:00] insect bites, so it was still worth it. But I did learn a lesson there: building a product is important, but making sure that there's a way to get paid for it is equally important.
Alex: What was your next step after that?
Darko Matovski: Yeah, so after that I got hooked. I was like, this is so cool: you can sit at home, build something that people want, and then go and help people have a better life. So that one example just inspired me, and I was wondering, how can I now do this at scale?
So since I was six or seven, I was just looking for a way to make an impact. And obviously, I was growing up at a time when the internet became a thing, and programming became a thing. So I remember, I still didn't have a computer; it was maybe 1995 or something like that. [00:15:00] One of my cousin's friends was working for a U.S. company, coding, and I was like, okay, what is this thing, coding? And he said, look, I can just give you my book, and all you need to do is learn this book. And obviously he was massively simplifying it for a young child. But it was like, if you learn this book, it's going to open up a whole new world of software that is going to change the world.
So effectively, I think, that was the inspiration to go into software. I spent a whole summer learning Pascal and Delphi, and I was like, wow, this is actually so much faster than building and soldering stuff. Once you build one of these things, you can multiply it an infinite number of times. So I thought, this is the vehicle to really make an impact on the world: software.
Alex: [00:16:00] Darko, you mentioned your cousin. It seems like he was an important person in your life.
Darko Matovski: Very important, yeah. It just highlights the role of role models when you're in your formative years of childhood. Just being surrounded by someone who was building electronics and software truly changed the direction of my life. It was really formative.
Alex: What would be your advice to people who are starting with something, maybe people who are just starting with causality? They might feel a little bit overwhelmed with all the tooling and all the terminology, and they might feel a little bit unsure if they can really make it. What would be your advice to them?
Darko Matovski: First of all, you should be confident that you will definitely make it. Causal AI is now actually accessible. We have books; we have open-[00:17:00]source packages. It's in a so much better state than when Max and I started the company in 2017. So first, I would say, read The Book of Why. It's incredibly important from a foundational perspective.
So, the theory side: learn the causal theory, and that will help you understand all of the buzzwords in causality. That one book, read it twice if necessary, but it will be really formative for you. Then the second thing you can do is play around with open-source tooling.
There's loads of cool stuff coming out; new open-source causal packages are popping up, and there are really great examples. We hope to help the community in the future by open-sourcing some of our cool algorithms. We also hope to make our enterprise platform free, either a community version or free to try, which will make that journey even easier.
And [00:18:00] we're working very hard to make that happen, kind of unifying the interfaces, so using similar calls for all the algorithms from all the different creators. But yeah, just The Book of Why and the open-source stuff that is currently available. And you have great blogs as well.
So read Alex's blogs; that's really, really cool as well. But I think we will also prepare a lot more for the data science community. And come to our conference: we're organizing the Causal AI Conference. It's a community event; we do it for the community, and we don't do any selling, no marketing.
And next year we will be somewhere in the U.S. or the U.K., we'll see. But please come to that. That will also be a great opportunity to learn how to get started.
Alex: Darko, some people say [00:19:00] that generative AI is the future. We repeatedly said today in this conversation that you believe, and I share this belief with you, that causal AI, or causal thinking, is also something that will play an important role in the development of intelligent systems and decision-making systems in the future. When you look at both of those ideas, generative ideas and causal ideas, how do you see them developing further in the future?
Darko Matovski: Gen AI is amazing for certain things, and causal AI is really amazing at other things. So the point is that when you combine these technologies together, one plus one equals three. And we therefore see a huge potential, and quite an unexploited potential, actually, in combining these two technologies. And we will need other [00:20:00] building blocks as well. I think Gen AI and causal AI are great building blocks, but we'll need a few more. And we might not know what exactly those are today.
Alex: What would be your guesses regarding those building blocks? What might we be missing today?
Darko Matovski: Yeah, that's a question I think about a lot, but, if I'm entirely honest, I don't think I have the answer. Because we don't fully understand it; it's the unknown unknowns, I guess.
And we will need to write a new theory, kind of like how we wrote the causal AI theory. And I think we'll be at a moment in the future again where we realize that Gen AI and causal AI put together is three, but we need a few more points to get to AGI. And I think we first need to take this first step of [00:21:00] combining these two technologies and realizing where there is a gap.
I think we just have to do that hard work and research; that's what we need to do to figure it out. And my gut feeling is there will still be a gap, and we will then need to come up with new theory, new research, to fill that gap. But just to be clear, I think just using causal AI and Gen AI, we can get far.
There's a lack of harmonization of the packages. So we think that an interface that will unify, say, all of the causal discovery packages to work the same way, all the causal modeling packages to work the same way, all the decision intelligence engines to work in a similar way, I think that's a big gap. It's a natural part of the evolution, but we need to get there.
We need to be like scikit-learn; we need to have the scikit-learn equivalent for causal packages to do that.
Alex: So what you're saying is that we are missing the unification.
Darko Matovski: That's right. Exactly. [00:22:00] That's a big gap for us. And we actually will help the community by open-sourcing our interfaces to help achieve this. And we hope that the community will adopt them, to help us unify. For us to have a serious chance at adoption, we need to do this.
Alex: Darko, it was a pleasure. Let's have some natas. I'm ready to celebrate our conversation. 
Darko Matovski: Look at this beauty.