Causal Bandits Podcast

Why Hinton Was Wrong, Causal AI & Science | Thanos Vlontzos Ep 15 | CausalBanditsPodcast.com

May 06, 2024 Alex Molak Season 1 Episode 15


Recorded on Jan 17, 2024 in London, UK.

Video version available here

What makes so many predictions about the future of AI wrong?

And what's possible with the current paradigm?

From medical imaging to song recommendations, the association-based paradigm of learning can be helpful, but is not sufficient to answer our most interesting questions.

Meet Athanasios (Thanos) Vlontzos who looks for inspirations everywhere around him to build causal machine learning and causal inference systems at Spotify's Advanced Causal Inference Lab.

In the episode we discuss:
- Why is causal discovery a better riddle than causal inference?
- Will radiologists be replaced by AI in 2024 or 2025?
- What are causal AI skeptics missing?
- Can causality emerge in Euclidean latent space?

Ready to dive in?

About The Guest
Athanasios (Thanos) Vlontzos, PhD is a Research Scientist at the Advanced Causal Inference Lab at Spotify. Previously, he worked at Apple and at the SETI Institute with NASA stakeholders, and has published in some of the best scientific journals, including Nature Machine Intelligence. He specializes in causal modeling, causal inference, causal discovery, and medical imaging.

Connect with Athanasios:
- Athanasios on Twitter/X
- Athanasios on LinkedIn
- Athanasios's web page

About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality.

Connect with Alex:
- Alex on the Internet

Links
The full list of links can be found here.


Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com

Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4


 015 - CB015 - Athanasios Vlontzos - Audio Transcript

Athanasios Vlontzos: Science is, I think I mentioned that before, not a linear thing. It is very much all over the place, and it is the essence of exploration that kind of drives us. This notion of causality that we, for example, are doing here at Spotify, and that I've been doing in the later parts of my PhD back at Imperial, is not only about the modeling.

And I think it's wrong to think that any causality comes only in the modeling part.

Alex: If you could distribute, say, 1 billion dollars for causal research, what would you invest in?

Athanasios Vlontzos: Causality does not emerge in a flat Euclidean space. So if you want a latent space that has causality emerging in it,

Marcus: Hey, Causal Bandits!

Welcome to the Causal Bandits Podcast, the best podcast on causality and machine learning on the internet. 

Jessie: This week we're going back to London to meet our guest. He loves puzzles and believes that causal discovery is a better one than causal inference. He worked for Apple and with NASA stakeholders at the SETI Institute, but found his home at Spotify.

He's a fan of Star Trek, a former radio host, and a computer scientist specialized in medical imaging. A Research Scientist at the Advanced Causal Inference Lab at Spotify. Ladies and gentlemen, please welcome Dr. Athanasios Vlontzos. Let me pass it to your host, Alex Molak.

Athanasios Vlontzos: Thank you for having me. 

Alex: Welcome to the show. 

Athanasios Vlontzos: Thank you.

It's a very nice and cold day here in London, and I'm very happy that you're here. We get to have this nice discussion. 

Alex: I'm also very happy. Thank you for joining us. I want to start our conversation with a quote: "I think if you're a radiologist, you are like the coyote in the cartoon. You're already over the edge of the cliff, but you haven't yet looked down.

There's no ground underneath. People should stop training radiologists now. It's completely obvious that in five years, deep learning is going to do better than radiologists." Geoffrey Hinton, November 24th, 2016. What's your comment on this?

Athanasios Vlontzos: That's a very nice one. And to be honest, I had completely forgotten that Hinton had even said that back then.

I personally, a lot of other people in the medical imaging community, and definitely a lot of radiologists would respectfully disagree. And it's kind of tough going against one of the godfathers of deep learning, right? Well, we are eight years past that point, almost a decade, and we still haven't had radiologists being replaced by AI, for multiple reasons. One of the points is that a radiologist's job is not only identifying lesions or taking measurements, for example of the circumference of a baby's head in an ultrasound. It comes with a lot of decision making, evaluating the things they see in the image. And something that systems like deep learning systems cannot yet learn that well is associations between different things, things that on a first pass look completely irrelevant, but at the end of the day, they are.

And this is where the human brain, with the experience that radiologists have amassed by going through ten or more years of medical school and training, excels. And this is something very, very difficult to replace with an ML system.

So this is one very key point. And the other one is really the decision making: you are making life-and-death decisions for patients as a radiologist, or any sort of doctor for that matter, and ML systems are not there yet; they cannot make these kinds of decisions in a robust way that we can trust completely.

Maybe with more causal learning, more causal inference and discovery, we can get there, but not now. And then, moving away a bit from the purely technical and theoretical points of building the ML system, there is the legal and ethical point of view, because again, we are making life-and-death decisions.

We are affecting a person's life by making a medical diagnosis for them. And if we make a wrong diagnosis, who's to blame? Is it the model, which is a few lines of code? Is it the company that built it? Or is it the doctor who applied it? These kinds of legal considerations need to be built in, and we still, as a society, haven't progressed to the point where we can replace radiologists with AI systems.

What the entire medical imaging community is trying to do, and I think this is the key point, is give radiologists better tools, build better tools that enable them to do their job much better and much more efficiently, but not replace them, because this human expertise, this deep knowledge that they have, is irreplaceable.

Even to build more and better models, we still need them. Maybe in the future we won't need as many radiologists, but we're definitely going to need some real radiologists.

Alex: What do you see as the main technical challenges, in the context of this quote and of what you just said? What would we need to overcome in order to make the vision Hinton is presenting here come true?

Athanasios Vlontzos: This is a difficult one, mostly because the deeper you probe, the more challenges come up, a lot of them interconnected and a lot of them very, very crucial to the success of such a vision. It really starts off with your data. I think Hinton would completely agree that having good quality data is key to building any sort of ML algorithm. But what actually constitutes good data?

In the medical field, for example, you need a very representative dataset. Let me do a little detour here with some anecdotal evidence. At the very start of the pandemic, there was a series of papers and datasets that came out, trying to build ML and AI tools that could detect COVID.

However, if you actually probed a bit deeper into them, you would see that all the positives were coming from X-rays from China, while all the negatives were X-rays from Stanford in the U.S. Essentially, you had built a very biased and not very good dataset, because you were basically building a classifier for whether the patient had their X-ray taken in China or in the U.S., rather than whether they had COVID.

So this kind of thing is one issue. And X-rays are a fairly easy medical modality; if you go deeper into ultrasound, or MRI, PET, PET-CT, PET-MRI, et cetera, you will see that you have variations even within the same machine across different hospitals.

So if you have a Siemens machine here in London and a Siemens machine up in Edinburgh, based on how the doctors use them, you might get a different quality of image. Hence you can have these unobserved confounding elements that affect the output image and thus your diagnosis.
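The site-bias story above can be sketched in a few lines of hypothetical code. Everything here is invented for illustration: a "scan" is a single number, the disease adds a weak signal, and the acquisition site adds a strong intensity offset, so a naive classifier trained on the biased cohort learns the site, not the disease.

```python
import numpy as np

rng = np.random.default_rng(0)

def scans(n, diseased, site_offset):
    """Hypothetical 1-number 'scan': a weak disease signal (+0.3)
    plus a strong site-dependent intensity offset."""
    return rng.normal(0.0, 1.0, n) + 0.3 * diseased + site_offset

# Biased training set, mirroring the COVID datasets described above:
# ALL positives come from site A (offset +2.0), ALL negatives from site B.
x_pos = scans(500, diseased=1, site_offset=2.0)
x_neg = scans(500, diseased=0, site_offset=0.0)

# "Classifier": a threshold halfway between the class means.
thr = (x_pos.mean() + x_neg.mean()) / 2
train_acc = ((x_pos > thr).mean() + (x_neg <= thr).mean()) / 2

# Deployment: both classes now come from the SAME site,
# so only the weak disease signal remains.
t_pos = scans(1000, diseased=1, site_offset=0.0)
t_neg = scans(1000, diseased=0, site_offset=0.0)
test_acc = ((t_pos > thr).mean() + (t_neg <= thr).mean()) / 2

print(f"train accuracy: {train_acc:.2f}")  # high: it learned the site offset
print(f"test accuracy:  {test_acc:.2f}")   # close to chance
```

A causal view of the data-collection step, treating the acquisition site as a confounder and sampling both classes from both sites, would flag this problem before any modeling starts.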

So in reality, thinking about this in a causal way might help you build a better dataset. This notion of causality that we, for example, are doing here at Spotify, and that I've been doing in the later parts of my PhD back at Imperial, is not only about the modeling part. And I think it's wrong to think that causality comes in only at the modeling part.

It comes into the entire system-building process: from the data collection, thinking about which parameters come into play, through the actual data modeling, and then to making it robust and serving it to the end customer. So this is, for example, a very crucial point: gathering correct data, especially in the medical field, is extremely hard and extremely crucial to the success of this kind of vision.

Then comes the actual modeling. Again, this, pretty much like any other modeling operation, needs to abide by certain rules in order to produce good and robust models. And finally, there is how you actually implement them in production. Are they going to totally replace radiologists, like Hinton suggested, or are they going to be tools?

In my opinion, they should be tools. You should never replace these people, but you can give them the correct tools to make their lives easier, and hence see more patients and diagnose patients more correctly and faster. But yeah, on a very high level, these are some very crucial challenges.

And it all comes down to the way of thinking and approaching a problem. There's a very nice quote a very good friend of mine, who right now is in Berlin, said back when I was in high school: you only solve a problem when you realize that it is a problem. And this is essentially it. You need to realize what the problem is, identify its key elements, and then you can solve it.

It's not only the modeling. It's not only the data. It's a lot of other components, both stated and unstated, that we introduce.

Alex: I really love this quote and I want to go back to it in a second. But before that, I want to ask you another question. In one of your papers, you look at models, or automated systems, for medical imaging analysis and perhaps decision making, through the lens of technology readiness levels.

And one of the things that you and your co-authors write is that many systems today, in the context of medicine and health, essentially jump from one level to another, skipping two or three crucial levels on the way. Can you share with our audience a little bit more about this topic, and maybe give us a hint as to why these things are important in practice?

Athanasios Vlontzos: Of course, of course. Just for the benefit of the audience: the technology readiness levels came about from the US military back in the fifties, if I'm not mistaken.

They are still used; they actually have ISO standards, and they are used by the military, by NASA, and by a lot of people, specifically in the aerospace industry. They are a way of formalizing, of making a framework for, how to develop any sort of technology.

This allows them to evaluate the readiness of a technology before actually deploying it, before it is "flight ready", as they would say in aerospace. A few years ago, some people I've actually collaborated with, who are amazing at their job, including Ciarán, who is here at Spotify, Yarin Gal, and other stakeholders from NASA, the SETI Institute, Intel,

and a bunch of startups, produced a paper that discusses the technology readiness levels as they apply to ML and machine learning algorithms. They go through all of these things and map the technology readiness levels onto the roadmap of developing an AI product.

What we argued in our paper with my PhD supervisors, including Bernhard, back in, if I recall correctly, 2022 (the paper was written in 2021, but it took some time until it was made public), is that even in the ML for medical imaging field, you still need these things. These are kind of universal; they are task- and application-agnostic.

When a lot of people just jump through them, they might not give enough consideration to each one; they jump around, and then it becomes quite a mess, really. And this introduces problems when you try to develop such a thing.

So going steadily through these technology readiness levels, having checks and balances, essentially makes sure that everything you're doing is okay, and that there is accountability for what's going on. Because, exactly, in the medical field, as we said before, it is life and death; in the aeronautics field it is life and death as well, as we have recently learned. When we move away from simple applications (ChatGPT might help you write a better essay, for sure, but that's not a mission-critical application) and go into fields like aeronautics, medicine, et cetera,

we need to be sure and robust, and essentially this framework helps you do that. What we argued in that paper as well was that you can introduce the essence of causality. You can do that, as I said before, throughout the technology readiness levels, but mostly at the point where, after you have built some kind of MVP, you make it robust.

It's this kind of robustification of your system. In other fields, it might be making it robust to radiation when you shoot things up into space. In this case, it's making sure that the model is not looking at spurious correlations that will throw it off.

Alex: I want to go back now to the quote that you shared before.

So the quote says, and correct me if I misremember it, that we can only solve a problem if we know that it is a problem. When we were talking about medical imaging and about technology readiness levels, I had this thought that in the broad machine learning community, we often forget that those models, deep learning models for instance, are also based on assumptions.

And so during, let's call it, the last stage of the deep learning revolution, we moved from statistical parametric learning to non-parametric large models. And it seems that somewhere along the way we forgot that, although we are getting rid of some assumptions (a neural network is a more flexible model, in terms of assumptions, than linear regression, for instance), there are still some assumptions underlying those models.

And if we want those models to be robust and to generalize, we need to remember about those assumptions. 

Athanasios Vlontzos: Yeah, absolutely. I don't think we can get away with no assumptions; it's practically impossible. We're always going to make some assumptions, but not all assumptions are created equal.

There are assumptions that are very valid: the sun is going to rise tomorrow morning in the east; that's fine, we can live with that assumption. But there are other assumptions that are not very valid: the assumption that the model is going to work only if there are pink elephants on the moon drinking tea is obviously not going to live that well.

The point about assumptions is: yes, we are relaxing some assumptions. The problem that I see is that sometimes we are forgetting some assumptions. Anything will have assumptions. A common criticism of causal inference and causal discovery is that they rely on a lot of assumptions, but at least we are making the assumptions very explicit.

The assumptions are there. Anybody can test them. We try again and again to relax them, to challenge them.

Alex: And we can disagree about them as well, and discuss them, because we need to explicitly say: well, here is what I'm putting on the table. I need to tell you in advance.

Athanasios Vlontzos: Exactly. You know, this is very, very spot on, and this is a great way of doing science and technology.

There are other assumptions that are sometimes forgotten, and these are the ones that sometimes create problems. For example, a very standard thing that I always point out in these kinds of discussions is the assumption that models have Euclidean latent spaces. This is not true. There was a paper, I think from 2017 or 2018, by Arvanitidis et al., called Latent Space Oddity.

Very nice title; I'm very, very jealous of that title. They were probing a very standard variational autoencoder, and what they were saying was: all the optimization we're doing assumes a Euclidean latent space, a standard Gaussian distribution that lives in a Euclidean space. But if you actually probe the model, it is not building such a thing.

It's building a high-dimensional Riemannian manifold that is a bit hard to characterize, but it is a Riemannian manifold. And they showed that by doing some interpolations: if they followed the manifold, they had perfect interpolations from point A to point B, while if they did linear, Euclidean interpolations, in between the two points they got noise.

Some people have, to their big credit, challenged this assumption, say by building hyperbolic machine learning models, and they have some amazing papers out. I very much respect this line of investigation. And this is very nice, because the assumption was challenged.

And I think this is the key point here: there are always going to be assumptions, and not all assumptions are created equal. So we need to pick which assumptions we can live with, and after that, we need to make very explicit what these assumptions are and not forget about them. Because if we forget about them, the room is going to look spotless, but underneath there is going to be a massive stain, and that can cause problems afterwards.
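A toy illustration of the latent-geometry point, using the unit sphere as a stand-in for a curved latent manifold (a deliberately simplified sketch, not the construction in the Latent Space Oddity paper): straight-line, Euclidean interpolation between two latent codes leaves the manifold, while interpolation that follows the manifold stays on it.

```python
import numpy as np

def lerp(a, b, t):
    # Straight-line (Euclidean) interpolation.
    return (1 - t) * a + t * b

def slerp(a, b, t):
    # Spherical interpolation: follows the great circle, i.e. the manifold.
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Two unit vectors: "codes" that live on a curved manifold (the unit sphere).
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

mid_lerp = lerp(a, b, 0.5)
mid_slerp = slerp(a, b, 0.5)

print(np.linalg.norm(mid_lerp))   # ~0.71: the straight line leaves the sphere
print(np.linalg.norm(mid_slerp))  # 1.0: the geodesic stays on it
```

The midpoint of the straight line falls off the sphere entirely, which is the geometric analogue of the noisy in-between samples that linear latent interpolation produces.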

Alex: In one of the projects that you worked on here at Spotify with Jacob Zeitler, who was also a guest on our podcast, you worked on an adaptation of a synthetic control estimator to one of your use cases at the company, and you also looked into sensitivity analysis. So you combined a couple of interesting things in this paper. One interesting thing is that you combined an estimator coming more from the potential outcomes, or econometrics, literature with identification strategies coming from the Pearlian tradition, and then you also put a sensitivity analysis model on top of this.

Can you tell us a little bit more about your thinking in this project, in the context of what you just mentioned: assumptions, and how to deal with assumptions when we are faced with a particular real-world problem?

Athanasios Vlontzos: Yeah, absolutely. First of all, Jacob's podcast episode with you actually aired last week, as of the time of recording.

So there's a nice continuity. And we also need to give credit to both Jacob and Ciarán, who did a lot of work, especially Jacob, who was interning with us here last year. So that paper was very interesting, because, exactly, we took synthetic control, something very common in econometrics.

We took sensitivity analysis, again fairly common in some fields, married them together with a bit of the Pearlian reasoning framework, and challenged some assumptions that were made there. Funnily enough, when we published that at CLeaR last year, Pearl tweeted about it, which was very nice. It was my second interaction with Pearl, and it's very nice to get recognized by a person of that caliber.

But yeah, essentially, this is it. When we are faced with real-life problems, like the ones we face here at the company, like the ones we face when we deal with medical imaging, like the ones we faced at the SETI Institute that you mentioned, we need to be pragmatic about it.

It actually poses an interesting challenge, in my mind at least, a more interesting challenge than blue-sky research, because you have real-life constraints. We cannot do just anything; we cannot make any arbitrary assumption. In real life, we need to live with the real world around us, and this creates a very interesting testbed for building new models.

Essentially, this is what we did. We tried to challenge the way of thinking a bit, combining synthetic controls with, and formalizing them under, the Pearlian framework. The results, to be honest, are quite nice, and the sensitivity analysis really comes in and ties it all together. Because, exactly, we wanted to know when these assumptions and these kinds of models would essentially fail, what kind of circumstances would actually lead them to fail, and hence create something that is very robust and very good.
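For readers unfamiliar with the estimator being discussed, here is a minimal, hypothetical sketch of synthetic control. All numbers are invented, and the convex weight is found by grid search purely for clarity; real implementations solve a constrained optimization over many donor units.

```python
import numpy as np

# Hypothetical data: pre-treatment outcomes for one treated unit
# and two untreated "donor" units (e.g. a metric tracked over 4 periods).
pre_treated = np.array([10.0, 12.0, 11.0, 13.0])
pre_donors = np.array([[8.0, 9.0, 9.0, 10.0],      # donor 1
                       [14.0, 18.0, 15.0, 19.0]])  # donor 2

# Synthetic control: find convex weights (w, 1 - w) over the two donors
# that minimize the pre-treatment mismatch with the treated unit.
grid = np.linspace(0.0, 1.0, 1001)
errs = [np.sum((pre_treated - (w * pre_donors[0] + (1 - w) * pre_donors[1])) ** 2)
        for w in grid]
w = grid[int(np.argmin(errs))]

# Post-treatment: the weighted donors give the counterfactual outcome,
# i.e. what the treated unit would have done WITHOUT the intervention.
post_treated = np.array([16.0, 17.0])
post_donors = np.array([[10.0, 11.0],
                        [16.0, 17.0]])
synthetic = w * post_donors[0] + (1 - w) * post_donors[1]

effect = post_treated - synthetic
print(effect)  # roughly [4.0, 4.0]: the intervention added about 4 per period
```

The assumptions discussed in the episode live exactly here: the counterfactual is credible only if the donor pool is unaffected by the treatment and the pre-treatment fit is not spurious, which is what a sensitivity analysis then stress-tests.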

Alex: In this context, what would you say to people who come from an angle where they criticize causal inference? Just to give you some approximate quotes from content that I sometimes read: causal inference is only in its very early years; you cannot apply this in practice; the assumptions make it completely useless. There are quotes like this. And I actually started looking at the people who are producing these kinds of comments, and I noticed that there's a common thread.

Athanasios Vlontzos: How so? What is the common thread?

Alex: Well, I don't want to stigmatize any group of people, so I'll pass on that part; I'll tell you off the record. But as a person who has worked with causal models and applied them in practice, in the context of medical imaging and in the context of a company that works with artists, with music, and so on, what would be your answer to those people, from the point of view of someone who actually does this in practice?

Athanasios Vlontzos: Well, first of all, I would point them to decades and decades of literature and practice in both econometrics and epidemiology.

They are using a lot of causal models; a lot of the things that we take and apply in the computer science field do come from epidemiology and econometrics. For example, the paper that we had together with Ciarán and Bernhard in Nature Machine Intelligence relies on an identifiability result that is actually derived from epidemiology: the constraint of "no prevention", essentially a monotonicity constraint that is imposed in order to create a setting where things, where counterfactuals, are identifiable. Those come from epidemiology. So, first of all, I would point them towards that.
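For context, a sketch of how monotonicity ("no prevention") buys counterfactual identifiability, following the standard Tian and Pearl results for a binary treatment and outcome (the general textbook result, not necessarily the exact formulation used in the paper discussed):

```latex
% X: binary treatment, Y: binary outcome; y_x denotes "Y = 1 had X been x".
% The probability of necessity and sufficiency,
%     \mathrm{PNS} = P(y_x,\, y'_{x'}),
% is in general only bounded by observational and experimental data.
% Monotonicity ("no prevention") assumes treatment never prevents the outcome:
%     Y_{x'}(u) \le Y_{x}(u) \quad \text{for every unit } u.
% Under exogeneity and monotonicity, PNS becomes point-identified:
\mathrm{PNS} \;=\; P(y \mid x) \;-\; P(y \mid x')
```

In words: once the treatment can only help, never hurt, a counterfactual quantity that was previously only bounded collapses to a simple contrast of two observable conditional probabilities.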

There are thousands upon thousands of papers and practical results that showcase that the causal way of thinking does work. Causal inference and causal discovery both work, because in order to solve any problem you need to think in a causal manner. You might not need to build a 100 percent causal model; you might get away with an approximate one, for sure. But simply employing these kinds of reasoning techniques, reasoning about what could actually affect your outcome, and cutting off and controlling for the variables that might be affecting your results, leads you to a better model.

And this is the true real-life application and value of these things. I wouldn't call causal inference only a modeling tool; I would call it a way of reasoning, a way of thinking. But if we want to be very specific about modeling and building causal models, I've seen a lot of causal models work very, very well, in both academic and industrial applications.

Alex: Going back to your first point, you said you would point those people to years of research in econometrics and epidemiology. Probably a valid critical reply to this could be that many of those models actually failed, right? Or, to say this more generally, that some people historically applied causal methods while ignoring assumptions, or being too optimistic about those assumptions, which resulted in not very good performance of those models.

Athanasios Vlontzos: Yeah, for sure. Show me a field where you build a model and it always works; there's no such thing. ChatGPT, for example, makes no causal assumptions and still kind of fails: it hallucinates badly and can provide you with very bad answers.

Alex: My favorite ChatGPT behavior is when I ask it something like: hey, is this paragraph correct in, I don't know, British or American English? Because when I write something longer, it's very convenient. And then it tells me: oh, it's mostly correct, but you should add a comma after this word. Then I look at the word, and it's the last word in the sentence. So it proposes that I add a comma before the period. These are amusing things, right?

But when we think about it, when we project it onto potential real-world applications where serious stuff depends on this, these kinds of mistakes, and it's important to remember that "these kinds of mistakes" doesn't mean literally suggesting an extra comma; it might be any other mistake that translates from the latent space to whatever kind of output, right?

They might be very, very risky if we think about applying those things in the real world.

Athanasios Vlontzos: Yeah, absolutely, I totally agree with this. And to be honest, let's go back to the previous point you made and that valid criticism: you kind of provided your own answer within it.

That is: these are assumptions. All these models, causal or not, are built on assumptions. As long as we make these assumptions explicit, are able to test them, and realize when they work and when they don't, then we are okay. In the same paper that we produced with Ciarán and Bernhard in Nature Machine Intelligence, when we sent it off to review, one of the valid criticisms was that not everything is monotonic.

Sure, of course, not everything is monotonic, but this is down to the ability of the analyst to realize when they should be using a given tool. If you want to put up a frame with a nice picture in your bedroom, you're not going to use a screwdriver; you're probably going to use a hammer to drive the nail and then hang the frame.

There is no panacea. Causality and causal modeling are not a panacea, but they can help you in a lot of circumstances. I don't think there is a panacea for anything, really.

Alex: Yeah, I think that's a very valid point. And perhaps it's a natural cycle with any technology: if something new forces us to question our own assumptions or our own modus operandi, the way we used to do things, then we will be susceptible to finding flaws in it.

Even if those flaws are also present in the stuff that we are doing today, we are just not noticing them there; we have habituated to it. But when something new comes in: ah, there's a hole!

Athanasios Vlontzos: Yeah, absolutely. This is maybe our inertia, not wanting change, wanting to remain at the status quo.

It's very human, it's very natural; it's not something we should criticize people about. But we should have an open mind. At the end of the day, we should be a bit brave and try to explore: okay, this looks promising.

Let's explore a bit more into it and see what happens. Sometimes these kinds of bets don't work out, for sure. For example, GANs, generative adversarial networks, right? A huge thing, and then they kind of died off in popularity, because people probed into them, made them better, kept the elements of them that are crucial and very good, and left other elements behind. And I think at the end of the day, this is it: while we probe and try to figure out new things in technology and science, we will find learnings and lessons that are very important, and we'll keep them with us, and some of them we'll leave aside.

Again, that's perfectly fine; it is the natural course of things.

Alex: The person who invented the ship also invented the shipwreck.

Athanasios Vlontzos: Yeah, sure, exactly. But ships are great, right? Still, this is something we need to keep in mind: science and technology rarely, if ever, are linear. They are paths with a lot of twists and turns, going up, down, backwards, forwards, everything.

And this is what is fun about it. This mode of exploring, and having things to guide you, like an idea, or guiding principles, or expertise and knowledge in a specific domain, can help you get from point A to point B faster. Sometimes it can even veer you off course, but most of the time the collective expertise of humanity is significant enough to know, okay, that is a fairly good way to go.

Alex: What is the craziest but still meaningful outcome of your explorations? Maybe you started with causality, but went somewhere else. I know you like physics.

Athanasios Vlontzos: Oh, that's a tough one. I am a person who likes to explore a lot of academic things, and I think that is very much evident in both my undergrad and my PhD.

In the PhD, I started off with reinforcement learning and computer vision. The computer vision kind of stuck around, but I ended up with causality. I think the most interesting thing I discovered was how we can create this interdisciplinary world of research where we borrow things from different fields and combine them together.

An example was the paper we did with a friend of mine, a physicist from Edinburgh, together with my two supervisors. That was the Minkowski spacetime paper, and it was very, very fun. That paper actually never got published anywhere; we only put it on arXiv, and then it got picked up by MIT Tech Review.

That was actually quite fun. But that paper forced me to think and challenge a lot of assumptions. One of the assumptions it forced me to challenge was that of the Euclidean space. I realized then, by reading papers from 1964 by Zeeman, that in order to have causality, for example, you need a Lorentzian space.

Causality does not emerge in a flat Euclidean space. So if you want a latent space that has emergent causality in it, you need to have such a space; you cannot model these things in a Euclidean one. And that's why, for example, I make certain choices when building models in these weird Riemannian spaces. But yeah, I think that was one of the craziest explorations that I did.
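To make the light cone idea concrete, here is a small sketch. This is my own illustration, not code from the paper: it shows how a Minkowski-style metric, unlike a Euclidean one, lets the sign of the squared interval plus the direction of time define which events can causally influence which.

```python
import numpy as np

# Events are (t, x, y, z); metric signature (-, +, +, +).
# Unlike a Euclidean distance, the squared interval can be negative
# (timelike), zero (lightlike) or positive (spacelike); that sign,
# together with the direction of time, encodes causal structure.
def minkowski_interval_sq(e1, e2):
    d = np.asarray(e2, dtype=float) - np.asarray(e1, dtype=float)
    return float(-d[0] ** 2 + np.dot(d[1:], d[1:]))

def can_cause(e1, e2):
    """e1 can causally influence e2 iff e2 is in e1's future light cone."""
    dt = e2[0] - e1[0]
    return dt > 0 and minkowski_interval_sq(e1, e2) <= 0

origin = (0.0, 0.0, 0.0, 0.0)
nearby_later = (2.0, 1.0, 0.0, 0.0)      # timelike-separated, later in time
far_simultaneous = (0.0, 5.0, 0.0, 0.0)  # spacelike-separated

print(can_cause(origin, nearby_later))      # True
print(can_cause(origin, far_simultaneous))  # False
```

In a flat Euclidean latent space every distance is symmetric and non-negative, so no such asymmetric "can influence" relation falls out of the geometry, which is the intuition behind the point above.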

In retrospect, for some people in the audience it might not sound that crazy, but back then it was quite challenging for me. Quite out there. And I still believe that paper is very much out there.

Alex: We'll link to the paper in the show notes and in the video description, so everybody can judge the level of craziness for themselves. You can comment under the video: how crazy is this paper, between 1 and 10?

Athanasios Vlontzos: Yeah. To be honest, crackpot science is the best kind of science, the fringe science, the science that questions everything and tries to look at things in a different way. This is where the magic happens, in my opinion.

Alex: Hmm.

What's the best thing about working at the Advanced Causal Inference Lab at Spotify?

Athanasios Vlontzos: Well, there are two things, really. Well, multiple things, now that I think of it. First of all, it is actually working at Spotify. Spotify is a great place. It is a very collaborative one, a very open one, and a very artistic one, in the sense that people want to express themselves, and they do that through their work, and it creates a very, very nice environment.

It's one of the few working environments I have been in where the culture of the place is excellent. Another great thing about working in this lab is the expertise. We have Ciarán leading it, we have Oriol and Michael, shout out to all the people there in the ACI lab, who are great at what they do.

Each of us brings a different element, and this again goes back to the interdisciplinary way of thinking. I think Ciarán, leading the group, acknowledges this, and I think the entire company acknowledges that interdisciplinary thinking is key. I come from a background that is more engineering, electrical engineering first, computer science afterwards, thinking about and working with medical things. We have Oriol, who used to be a software engineer, then went into research, did a PhD, and is an expert in reinforcement learning. We have Michael, who used to deal with black holes; he was an astrophysicist, then took a turn into data science research and is now a research scientist here.

And then we have Ciarán, who used to be a quantum physicist dealing with quantum computing and now deals with causality, in a product sense. It is this amalgamation of different expertises, this melting pot of different people and personalities, that creates a very nice and unique result and fosters creativity.

I've seen this before, to be honest; it's not the first time I see it, but I'm very, very happy that I found it here. I saw it in the SETI times, when we were two machine learning people, two solar physicists, and one ionospheric physicist, and we tried to predict problems in the ionosphere caused by solar weather.

Weirdly enough, that actually worked. It was a very fun project. But yeah, this kind of interdisciplinary way of thinking created some excellent results, and it's not the first time that has happened. If you go back in the history of the industry, you can go back to the 1940s and Lockheed Martin.

Lockheed Martin has Skunk Works, which is kind of the experimental division of Lockheed Martin. Even back in the forties and fifties, they had created a very unique culture: small, very diverse teams that can move fast and challenge the status quo. And this is essentially what we try to create here.

We need to be a diverse team, because each one of us brings a key element that can help solve any problem.

Alex: I remember you mentioned working on causality in the product context. What are the main lessons you learned working on causal modeling, or causal reasoning more broadly, in the context of developing or improving a product?

Athanasios Vlontzos: The most surprising one, to be honest (surprising at first, but obvious in retrospect), is that people, regardless of whether they have been in contact with, for example, the Pearlian point of view or Rubin's potential outcomes, have an innate understanding of causality.

They might not call it causality; they might call it something different. It doesn't really matter: they can understand it. This is the most crucial part, because the stage is already set for us to go and build our things. People understand it, any sort of person understands it, because this is how we learn from when we were babies. I push this mug, it falls down.

Crack. It was because of my movement that it happened; it didn't happen spontaneously. Maybe we are built that way evolutionarily. I'm not an evolutionary biologist, I have no idea, and I'm not a neuroscientist either, so I'm just speculating here. But we are kind of built to understand causality, and the universe has causality.

So for us, creating a product that bases itself on understanding and utilizing these kinds of concepts is very easy, because it is what we do by nature. I think this is the most crucial thing. It is much easier than it might have sounded at the very beginning, because people do understand and do appreciate the power of this.

So, controlling for spurious correlations is obvious to them. Maybe not at the very beginning, they may need a bit of convincing, but in retrospect they say, why didn't we do that already? It's such a powerful thing.
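As a toy illustration of that point (a hypothetical simulation of my own, not Spotify data), here is the classic spurious-correlation setup: a confounder Z drives both X and Y, X has no effect on Y at all, and adjusting for Z makes the spurious association vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: Z confounds X and Y,
# and X has NO causal effect on Y at all.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 3.0 * z + rng.normal(size=n)

# Naive association: regressing Y on X alone gives a spurious
# slope of about 6/5 = 1.2 (cov(x, y) / var(x)).
naive_slope = np.polyfit(x, y, 1)[0]

# Adjusting for the confounder: regress Y on [X, Z]; X's
# coefficient shrinks to about 0, the true causal effect.
design = np.column_stack([x, z, np.ones(n)])
adjusted_slope = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(round(naive_slope, 2))     # ~1.2 (spurious)
print(round(adjusted_slope, 2))  # ~0.0 (confounding removed)
```

The "why didn't we do that already?" reaction usually comes exactly here: the fix is one extra regressor, once you know which variable to control for.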

Alex: That's great. What are one or two books that changed your life?

Athanasios Vlontzos: To be honest, I have been reading a lot of books lately. During the pandemic I realized I hadn't been reading that much, and I restarted reading a lot. What are the two books that changed my life? One of them was by, what was the name, Clifford A. Pickover. I read a book of his back when I was in high school.

And it really stuck with me, because it was this book about time travel. It was wonderful: each chapter had this girl with her friend, who was an alien, trying to go back in time to meet Chopin, and in the following chapter the author explained the various principles and experiments from the previous one.

So, for sure, that book struck me. It's a sort of flashback, to be honest, going back to that book. I don't know how it affected me beyond piquing my curiosity about things like physics. And I guess causality comes into play when you deal with time travel, really, because, can you be your own grandpa, or something like that?

But yeah, that is definitely a book that stuck with me. Then, more recently, I'm not going to go with the obvious choice like The Book of Why; that's too obvious for this podcast. I am a massive fan of Umberto Eco, the Italian author. Pretty much anything he has written,

I've read and loved, mostly because he sets his books in a very historic environment and borrows heavily from actual history, but then weaves in a very nice narrative, and all of these narratives have a bit of historical mystery. For example, in The Name of the Rose you essentially have a murder mystery set in medieval times, and it really gives you a very nice atmosphere.

I do like books and movies and songs that create a nice atmosphere. I think The Name of the Rose and Foucault's Pendulum have both been great, but these are fiction; the other one was more nonfiction. It was funny, because in preparation for this podcast I was listening to your older episodes, and in the one with Jacob he mentioned Gödel, Escher, Bach, the very classic book.

To be honest, I'm with you. I have tried to read that book multiple times, but I've failed consistently, maybe because it's like 700 pages and very thick. It is such a great book, but it's also difficult to read, for me at least. Other people might flow through it.

For me it's difficult. But it's still at the top of my list; I still have it on Goodreads as currently reading. And it's something that I really want to do.

Alex: Now you made me even more curious about this book. I've heard about it multiple times in my life, always from people whose sensitivity or knowledge I really appreciated, just as people.

So I think it's a recommendation that I cannot say no to.

Athanasios Vlontzos: Yeah, I think you should give it a chance at least.

Alex: I should, yeah. Who knows where this will lead me? Why do you think causal discovery is a better riddle than causal inference?

Athanasios Vlontzos: Well, because it's, plain and simple, more difficult. It is more exciting, because you're trying to figure out how things are causally connected.

Causal inference makes some assumptions: the assumption that we know the DAG, that X causes Y, things like this, under the influence of some other factors. But these assumptions and this knowledge that we have, first of all, don't span the entirety of the cosmos. There are a lot of things where we don't know why they happen.

I can give you an apt example from my experience in a moment. And then, we might be mistaken. All our data might be pointing one way, but at the end of the day there's some key thing that we have actually missed, or that hasn't happened yet because it's rare, that would change the way we're thinking about things.

These are the assumptions that would challenge the DAGs we have built. But causal discovery tries to identify those DAGs. It's trying to figure out that actually X causes Y, maybe with some expert knowledge informing it, but at the end of the day it is a very, very different task.
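One flavor of what discovery algorithms do can be sketched minimally. This is a toy example of the constraint-based idea, not any particular tool mentioned in the episode: conditional independence tests rule candidate DAGs in or out. In a chain X → Z → Y, X and Y look correlated, but become independent once you condition on Z.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical chain: X -> Z -> Y (cause -> mediator -> effect).
x = rng.normal(size=n)
z = x + 0.5 * rng.normal(size=n)
y = z + 0.5 * rng.normal(size=n)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def partial_corr(a, b, given):
    # Correlate what is left of a and b after removing the part
    # linearly explained by the conditioning variable.
    ra = a - np.polyval(np.polyfit(given, a, 1), given)
    rb = b - np.polyval(np.polyfit(given, b, 1), given)
    return corr(ra, rb)

# X and Y look strongly dependent on their own...
print(round(corr(x, y), 2))             # ~0.82
# ...but are independent given Z, the kind of constraint
# discovery algorithms use to prune the space of DAGs.
print(round(partial_corr(x, y, z), 2))  # ~0.0
```

Inference starts after this structure is fixed; discovery is the harder job of recovering it from such constraints in the first place.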

That's such an interesting one. Let me give you two examples here. The first is of a thing where we have good hints of what is going on, but we don't actually know. That is the work we did at the SETI Institute, which revolved around the impurities in the ionosphere that cause problems in the bands in which GNSS operates.

When I say GNSS, think of GPS: the name of the GNSS system in the US is GPS, in Europe it's Galileo, in Russia it's GLONASS, but GNSS is the overall scientific name for it. These impurities happen around the poles, and the science and physics of why they happen there is very well understood: charged particles come from the solar wind and get trapped by the magnetic field of the Earth.

But they also happen around the equator, and that is more troubling, because we only have hints about why. You can talk to an ionospheric physicist or a solar physicist and they will tell you more than I can, but we are not a hundred percent certain why this happens. And this is an interesting one, because there you need to discover a new piece of physics, a new causal relationship that you don't know yet.

This is very exciting. For a more mundane example, let's say, go again to the medical field. You're trying to understand and model, say, a radiologist, going back to the very beginning of this podcast and the comment on Hinton. In order to model this radiologist, you need to model the way they're thinking, and you need to discover the causes and the links between the elements they have in their minds.

And you cannot actually probe their minds. You need to look at the observational data you have, which is, for example, the tests they have ordered for the patient, and you might have things that are missing. That data is not missing at random: there is a cause for why it is missing.

You need to identify what that cause is, and this is, in my opinion, truly interesting and very, very challenging. Causal discovery, at the end of the day, is science. Whether you want it or not, it is the process by which we do science: we're trying to see if there is a causal link from X to Y, or from Y to X.

We can try to make models out of it and try to formalize things. That's great. You can essentially have science derived from data. Wonderful. Sometimes it works, sometimes it doesn't. But causal discovery, at the end of the day, I find more interesting because it's more difficult, and also because it is true science.

The other is applied science; it's technology. Causal discovery is exploration: you're trying to find out something you don't know, and this is wonderful. That essence of exploration is really captivating to me.
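The missing-not-at-random point about the radiologist's ordered tests can be simulated in a few lines. These are entirely made-up numbers, just to show the mechanism: if clinicians mostly order a test for sicker patients, the observed values form a biased sample, and the very fact that a value is missing carries information.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical lab value; higher = sicker. True population mean is 0.
severity = rng.normal(size=n)

# Missing NOT at random: the test is mostly ordered for sicker
# patients, so whether a value is observed depends on the value itself.
p_tested = 1.0 / (1.0 + np.exp(-3.0 * severity))
tested = rng.random(n) < p_tested

true_mean = severity.mean()           # ~0
naive_mean = severity[tested].mean()  # biased well above 0

print(round(true_mean, 2))
print(round(naive_mean, 2))  # the missingness itself carries signal
```

Averaging only the observed tests badly overestimates how sick the population is; recovering the true picture requires modeling the cause of the missingness, which is exactly the discovery problem described above.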

Alex: It touches maybe a vulnerable place in us humans, because it leads us to the edge of what we know and what we don't know.

Athanasios Vlontzos: Absolutely, it leads us to the frontier, to the final frontier, going back to Star Trek, which you mentioned at the beginning. Star Trek is a brilliant example of how, in popular culture, we ask humans to think about discovery and exploration, because it is a set of people, both humans and aliens, in a spaceship, flying away in the middle of nowhere, exploring new worlds and new civilizations, boldly going where no one has gone before.

It's wonderful; it is the human element. This essence of exploration is innate in our nature. We had exploration when people migrated out of Africa into the rest of the world, thousands upon thousands of years ago. You had exploration when they would try to reach the highest peak or the lowest depth.

You had a sense of exploration in going up to the Arctic or the Antarctic to see what is there. And this is also shown in the discovery of new science, like when Galileo, and to be honest, I was taught this in school and never actually checked whether it is true or just an anecdotal story, was in Pisa or somewhere in Italy, rolling balls and trying to figure out how fast they fall.

I think it is true, because I've seen it in multiple places, but it's also a story that's like 600 years old, so you can never be sure. But this is the essence of exploration: trying to understand why things happen and how, in that case, how fast a ball would fall in a vacuum. It is our innate drive to understand the world, and this is perfectly encapsulated in causal discovery, because it is causal discovery at the end of the day.

They might not have used machine learning models back then; they were using just pure experimentation, but it is causal discovery in a sense. And this is why it is for sure more interesting, but also way more difficult. Maybe the reason I do causal inference is because it is easier.

With inference, I don't need to wait and thoroughly test hundreds upon thousands of possible causes. I can just do the inference, make assumptions, and produce a very nice result and a very nice product. But if you want blue sky research, then yes, for sure, causal discovery is the way to go.

Alex: I really loved your description of what science is, and the vision you shared with us here.

Thank you. One thought I have here is that today, in this world where technology is everywhere, it's easy to forget that we actually live in a world where we don't know many things. You have Wikipedia at your fingertips. And, depending on where in the world you live and how lucky you were, if you live in a reasonably rich country, everything around you kind of works well every day. So this is an environment that does not inspire us to ask questions and go beyond; it hides from us the fact that we don't know so many things. And I think that's what makes the example you gave so great, with, how did you call it?

Athanasios Vlontzos: GNSS. Yeah. 

Alex: Yeah, with GNSS, or GPS, and the scenario around the equator that we are not able to explain with as much certainty as the phenomena at the poles. I think it's a great example, because we live in a world where most of us believe that physics is solved, and that anything regarding physics is just trivial today.

Athanasios Vlontzos: Yeah, that is really far from the truth. Although to a certain extent I do agree: we have reached a point in civilization where things work fairly okay, with lots of asterisks next to "fairly okay", but let's leave it at the technological level.

A lot of our technology works very nicely. We have achieved a level of technology that creates comfort in our lives. But don't forget that this is, for example, what the Victorians might have thought as well, and Victorian-era technology is starkly different from today's. So to a certain extent I agree.

However, there are still thousands upon millions of people who have this drive, and to be honest, when challenged, people are very inventive, and they are very much explorers at heart. It depends on the person, of course, but there are almost 8 billion of us; for sure there are some. And these are the people who are pushing the limits and trying to find new things.

You see them in the arts, you see them in science and technology, you see them in politics, everywhere. We are trying to change the world into a collectively better thing, and this is wonderful. We might rest on our laurels sometimes, for sure, but if we average it out, I don't think we do. I think we are still pushing, because we want to push; it is built into our nature.

It is interesting; it is what drives us. Everybody wants to create a better life for themselves, and that materializes in different ways. Sometimes it materializes as significant advances in science and technology, and these kinds of things come about in weird ways, just as I said before: by thinking about a problem in a causal manner, for example, you get a lot of interesting offshoots and side effects. Same with science and technology: we might be pushing towards one thing, but loads of progress happened through what we call big science. For example, during the Second World War there was a drive for scientific research in order to gain an advantage in a wartime scenario, but that had a lot of other side effects, for example the radar we now use to control aircraft. Or in the space race: we were racing over who reaches the Moon, but in the meantime you had a lot of other advances.

If I'm not mistaken, aluminum foil came about like this, and I use it constantly in my kitchen to wrap things I put in the fridge. Or if you go to even more blue-sky, more fundamental physics races, you have the LHC at CERN, in Geneva. Offshoots and side effects of the work we have done at CERN as humanity include, first of all, the World Wide Web, part of the Internet.

And all these super magnets that we need to build better MRIs and help people get diagnosed. Science, as I think I mentioned before, is not a linear thing; it is very much all over the place, and it is the essence of exploration that drives us. We might rest on our laurels sometimes, but even if you try to push forward the limit of something that sounds mundane, at the end of the day you might actually figure out something nobody else has: a unique solution that makes your product, for example, work better.

Or makes it more attractive, et cetera. These are still contributions; this is still a sense of exploration. We don't all need to be pushing for the stars, but as many people as we can, yes, that would be amazing. Even the small contributions matter. There's no contribution too small to matter.

Everybody makes a valuable contribution to society. I strongly believe that. 

Alex: Who would you like to thank? 

Athanasios Vlontzos: Everybody. No, the classic thing, you know how in the Oscars they say a lot of things; it is Oscars season right now, award season in the arts. But obviously my parents: both of them have influenced me in ways seen and unseen.

My mother is a doctor, so she kind of exposed me to this medical world; even through just discussing things, I got to know a lot of medicine through her. My dad definitely influenced me in the sense of exploration, mostly because of all the Star Trek that we watched and talked about together.

That definitely built a small explorer in me, at least in science and technology. And then all my friends and collaborators: Ciarán here, the rest of the ACI lab, Bernhard and Daniel, my PhD supervisors, all the people I have collaborated with over the years. Science and technology is never one man's job; it is a collective effort of a lot of people, some of whom might go unnamed, but their contributions remain

very valuable and very crucial. And also all the people I just go to the pub with after work, because the best ideas do come at the pub, in a relaxed environment where ideas flow freely. This is where the magic happens. So yeah, I thank everybody I have ever interacted with, and thank you for having me here, because they have all contributed to everything that we have done together.

Alex: If there was one thing you could solve in causality today, what would that be?

Athanasios Vlontzos: A way to get, even from observational data, accurate and meaningful bounds on various counterfactuals. I'm saying something that is kind of proven not to be possible; I know it is impossible unless we put in a lot of assumptions and a lot of constraints. But if I could change the world, change the rules of nature, so that it could happen from observational data alone, that would be amazing. That would simplify my life considerably.
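For intuition on why this is impossible without extra assumptions, here is a sketch of the classic Manski-style no-assumption bounds. This is my own illustrative example, not something from the episode: for a binary outcome, observational data alone bounds the average treatment effect in an interval that always has width exactly one, and therefore always contains zero.

```python
# No-assumption (Manski-style) bounds on the average treatment effect
# for a binary outcome Y in {0, 1} with treatment T, using only
# observational quantities:
#   E[Y(1)] lies in [P(Y=1, T=1), P(Y=1, T=1) + P(T=0)]
#   E[Y(0)] lies in [P(Y=1, T=0), P(Y=1, T=0) + P(T=1)]
def manski_ate_bounds(p_t1, p_y1_given_t1, p_y1_given_t0):
    p_t0 = 1.0 - p_t1
    py1_t1 = p_y1_given_t1 * p_t1  # P(Y=1, T=1)
    py1_t0 = p_y1_given_t0 * p_t0  # P(Y=1, T=0)
    lower = py1_t1 - (py1_t0 + p_t1)  # min E[Y(1)] - max E[Y(0)]
    upper = (py1_t1 + p_t0) - py1_t0  # max E[Y(1)] - min E[Y(0)]
    return lower, upper

# Made-up observational numbers, purely for illustration.
lo, hi = manski_ate_bounds(p_t1=0.5, p_y1_given_t1=0.7, p_y1_given_t0=0.4)
print(round(lo, 2), round(hi, 2))  # -0.35 0.65
print(round(hi - lo, 2))           # 1.0: width is always exactly one
```

Tightening that interval to something useful is precisely where the extra assumptions and constraints he mentions come in.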

Alex: And assuming that we cannot change the fundamentals: if you could distribute, say, 1 billion dollars for causal research, what would you invest in?

Athanasios Vlontzos: A lot of causal discovery, because it would help the world greatly. And the other one is education, in terms of realizing what causality is, because causality, as I said, is embedded in our beings.

It is there, even if people might not call it causality. Being able to recognize spurious correlations, being skeptical about everything we see around us, will not only help science and technology, it will help the world in general, society, because the burden of proof would rise.

We would need to be more accurate and more logical in what we do. I don't want to be like Spock and set emotion aside; we still need emotion and all that. But this will help us think about the world in a way that will decrease, not minimize, but decrease, the number of mistakes we make, because we do make mistakes.

So these are the two things. With a billion dollars, I would put 75 percent into causal discovery efforts, applying causal discovery in every single imaginable field of science and technology, and the other 25 percent goes to education.

Alex: What question would you like to ask me?

Athanasios Vlontzos: Well, I was kind of thinking about that, to be honest. I know that you used to be a music producer, and we are in a music company here; "listening is everything" is literally our motto. So the very easy question I have for you: what is your favorite genre, and what was your top song in your Spotify 2023 Wrapped?

Alex: I don't know if I can disclose this now, but I don't have Spotify on my phone. Sad reacts. But my favorite genre, well, that's a very difficult one. Very broadly speaking, I like two broad streams in music. One is EDM, very energetic electronic music. I love the energy, I love the clarity there,

I love the emotion; it depends on the subgenre. And the second one is some kind of acoustic, expressive music, which involves stuff like jazz, flamenco, various kinds of crossovers, and so on. I really love emotional music; I think this is so fundamental.

Athanasios Vlontzos: So you like fusion stuff?

Alex: Fusion style I like as well, yeah. There's an album by Herbie Hancock, whose title I forgot, where he basically collaborates with pop artists and jazz artists, and they rearrange popular songs. Herbie Hancock does a lot of reharmonization: he takes the melody and then changes the chords in ways that are maybe not completely obvious.

And there's a remake of the song "The Song Goes On" with Chaka Khan. That's insane; I love it, I love this one, it's super dynamic. I think Anoushka Shankar is also on there, on sitar; Chaka Khan is singing, Herbie Hancock is on piano, and there are some more people.

Athanasios Vlontzos: So, again, a follow-up: I get the sense that you agree on the interdisciplinary point, that fusion of ideas and fusion of expertises creates nice things. You have Herbie Hancock, who comes from more of a jazz background, you have Chaka Khan, and all of this goes into one melting pot and creates a beautiful song.

So I think this is a lesson we can take not only into music but into life in general: bringing different people and different ideas together always leads to something nice.

Alex: Well, I very much agree with this. I think I never fully understood the challenge that we as humanity are facing with clashing tribes, which we see a lot today around the world.

Because I was maybe always curious about what is on the other side. And sometimes, maybe because of different historical experiences, it might be more difficult to ask, and building those bridges might be very challenging, especially when trust is low.

Athanasios Vlontzos: Yeah, yeah, for sure. But I think we need to try.

Alex: Yeah, I agree with this. And I think this also goes back to what you said about your work, when you talked about Lockheed Martin and those small teams. Those small teams build a great environment, a high-trust configuration, where people, even if they are different, coming from different tribes, be it scientific tribes or different ethnicities, different life stories and so on,

can build those bridges more easily. And we have research in psychology showing that doing things together is one of the best predictors of building bridges between two different tribes that might even be in conflict. So I wish us all, in science and outside of science, more bridges and fewer walls.

Absolutely. 

Athanasios Vlontzos: Yeah. Funny enough, some anecdotal evidence: almost a decade ago, there was an effort, I want to say from MIT, or was it Caltech, in physics. People were debating conflicting views on a subject, I think it was string theory, something like that. And some of them thought it would be a good idea,

and actually it was a brilliant idea, to have the major proponents of one camp and the other camp debate each other, but by supporting the other person's point of view. And that created such a good understanding between them of what each one is actually saying, which can then lead to more of an easing of the conflict, let's put it like this: a bigger understanding, more collaboration. And it was beautiful.

And to be honest, I strongly agree with you: we have to build more bridges and not walls.

Alex: Thanos, before we close: for those in our audience who are interested in causality and would like to learn more, or maybe are just starting with this topic, what resources would you recommend to them?

Athanasios Vlontzos: Well, it kind of depends on where they're coming from, what their background is, because some people learn by doing things, and other people learn by reading things. Your book, for example, is a great resource. Pearl's book is another great resource. Schölkopf's book is, uh, another great resource.

All these textbooks are out there, and there are also things like Medium articles and blog posts, plenty of them, because in this day and age a lot of people just prefer that. But the key thing, in my opinion, and the way I learned, and this is in general how I learn, is by doing.

Like, pick up a small idea that you might have, a small little project, and try to see what would happen if you took a purely associational point of view, or what would happen if you took a causal point of view. And that will essentially give the person

insight into why we do things like this and how we should be doing things. So my true recommendation is: read up, sure, learn the basics, but then go out and try, because this is how we learned, or at least how I learned.

Alex: Where can people connect with you and learn more about you and your research? 

Athanasios Vlontzos: So Spotify in general has a research blog where we post the things that we make public.

On a more personal level, and for things outside of Spotify, well, I am active on most, if not all, social platforms. So on X, formerly known as Twitter, you can find me under my last name, Vlontzos. You can find me on LinkedIn, or just drop me an email and let's grab a coffee. I'm based in London, which is a wonderful city,

and I'm always up for a pint or a coffee.

Alex: Thank you so much. That was a wonderful conversation. 

Athanasios Vlontzos: Thank you for having me. Thank you.