Causal Bandits Podcast

Causal AI in Personalization | Dima Goldenberg Ep 19 | CausalBanditsPodcast.com

July 01, 2024 Alex Molak Season 1 Episode 19

Video version of this episode is available here

Causal personalization?

Dima did not love computers enough to forget about his passion for understanding people.

His work at Booking.com focuses on recommender systems and personalization, and their intersection with AB testing, constrained optimization and causal inference.

Dima's passion for building things started early in his childhood and continues up to this day, but recent events in his life also bring new opportunities to learn.

In the episode, we discuss:

  • What can we learn about human psychology from building causal recommender systems?
  • What it's like to work in a culture of radical experimentation?
  • Why you should not skip your operations research classes?


Ready to dive in?

About The Guest
Dima Goldenberg is a Senior Machine Learning Manager at Booking.com, Tel Aviv, where he leads machine learning efforts in recommendations and personalization utilizing uplift modeling. Dima obtained his MSc at Tel Aviv University and is currently pursuing a PhD on causal personalization at Ben-Gurion University of the Negev. He has led multiple conference workshops and tutorials on causality.

Should we build the Causal Experts Network?

Share your thoughts in the survey



Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com

Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4


Dima Goldenberg: If I recommend you to stay in Paris for three nights, you'll get the price of three nights, which is quite expensive. If I recommend you to stay for one night, you'll see a price that's much cheaper, and maybe that will encourage you to continue looking for properties. It's quite costly to test the models online.

Alex: You said that your team experiments with everything, runs A/B tests even for bug fixes. How was your experience when you joined this team?

Dima Goldenberg: So I must say it was quite shocking for me. 

Alex: What was the most surprising finding about human psychology that you encountered in your work?

Marcus: Hey, causal bandits, welcome to the causal bandits podcast, the best podcast on causality and machine learning on the internet.

Jessie: Today, we're going back to Tel Aviv to meet our guest. As a child, he teamed up with his grandfather to design his own version of monopoly. He always loved to build, but didn't want to become a purely technical person. That's why he enjoys working on personalization and recommendations for real humans.

He's a Senior Machine Learning Manager at Booking.com. Ladies and gentlemen, please welcome Mr. Dima Goldenberg. Let me pass it to your host, Alex Molak.

Alex: Welcome to the podcast, Dima. 

Dima Goldenberg: Thanks, Alex. Hi. 

Alex: How are you today? 

Dima Goldenberg: Good. I think it was a very interesting day today. I had a lot of meetings as usual. Also some conversation with you.

And, uh, we're right, uh, before the holidays here, so, uh, try to get a lot of content at the same day and, uh, meet as many people as possible. 

Alex: When we traditionally learn about recommender systems in machine learning courses, those recommender systems are presented as predictive devices or associative devices.

In your work, your and your team's work, you merge the idea of a recommender system with causal inference. What can causal inference bring to recommendations? 

Dima Goldenberg: I think, first of all, if you take the story in the right sequence, the fact of trying to do personalization and introduce personalized recommendations to the customers and to the users is a very key concept in any e commerce platform.

And, uh, you can think of this like, hey, I can build a system that can predict what's the most suitable item, what is the most suitable recommendation for you. And the idea behind it is that if I'm using classical correlation-based methodologies, I can come up with a better recommender, get a better accuracy, get better recommender systems metrics like accuracy at top-K or something like this.

But then you face the reality. You need to figure out the application itself. So many times what we do is test all of these applications, basically any change on the website, but in particular any machine learning solution and recommenders, in an A/B test, and try to see how it changes key metrics that we care about.

So we're trying this at a high scale. And, uh, many times you learn that although you built a really good recommender that can improve the accuracy over the previous benchmark and, uh, find the most suitable item for each of the customers and the users, it might not move the needle in a causal way, because what you could do is just move traffic from one part of the website to another.

You change the behavior of the customers, or maybe you just show something that is evident for them. So technically I just recommended you the same destination that you planned to go to in any case. So why would it have any change and incremental impact on the goal? So that makes us think about how we shift the behavior in an incremental way.

How do we introduce a change, a treatment, to the website or to the experience of the customer in such a way that they would do something different from what they would do if we didn't do that? And to be honest, that's not something that's super intuitive, to think about what would happen if I didn't do that.

And especially if you're working on recommenders, you have so many alternative options of what you could do differently. What you need to think about is how do I change the behavior, what's going to be incremental here, and what's going to make customers change their behavior and maybe react differently.

And in this case, maybe what I want to do is not necessarily to find the most popular, the most frequent solution in the database that's gonna fit the customer needs, but actually what's going to be innovative and change. So it starts with the fact that, uh, maybe I want to set up the recommendation problem in a different way.

Maybe I want to find, instead of a popular destination, something that's trending, something that's gonna incentivize the customer to change their behavior, or maybe I even want to frame it in a different way. So I would say the role of the recommender, why you're even doing this presentation on the website, plays a big role.

How do you frame it in terms of user experience, the copywriting, in terms of timing, and how do you place it, and many other things that actually you think about them as something that's going to be incremental, change the behavior of the customers, and not just predict the things that you already know.

And many times you do see this causal effect of this expected correlation: by the fact that I'm surfacing the most relevant items, whether it's a destination, a hotel or anything else, if I'm finding something that's more relevant to the users, it's going to be easier for them to find what they were looking for and make these, uh, incremental, uh, reservations on our platform.

But at the same time, many times, just by the fact that you're relevant might be not enough to change something in the customer behavior. 

Alex: So you're talking here about thinking in terms of counterfactual outcomes rather than just predicting something that is likely for a given user, their behavior, or the destination that they would like.

Dima Goldenberg: Exactly. Where you need to focus on something that is actually going to change the outcome between A and B. Because I think, like, we're doing a lot of A/B tests. We try to run hundreds or even thousands of A/B tests in parallel sometimes, and we try any change that we do on the website, starting from copywriting and UX to bug fixes and obviously any machine learning models or something like that.

So you need to build this, uh, change in a way that you're expecting to see a significant difference in the metrics. And it's also quite hard when you keep improving and have things, uh, quite optimized already. So I would say you need to find the things that are actually moving the needle a lot. I would say that, uh, as we expand into new markets, new types of products, et cetera, we also found out that in travel, people are very much driven by price differences.

So anything related to discounting, promotions, uh, coupons and things like that, that's actually something that changes the behavior of the customer a lot. And it also became one of the biggest strategies that we have, in terms of: let's try to offer more value to the customers by maybe even funding some of the travel options and giving discounts, which is going to change the behavior.

And we've seen that in many cases, changing something in the price, giving discounts, could be much more beneficial, much better at incentivizing customers to book rather than just being relevant. So this is a really good opportunity to extend your customer base and increase the volumes. Um, but obviously it comes with a price, right?

You can't just give discounts to everyone; you're going to lose money. So you need to play this optimization game and understand when it's beneficial and you need to give the discount, or maybe what discount to give, and when not. And here comes the, the counterfactual problem, exactly like the point that you raised: many times you might say, okay, let's just give discounts to the customers that are less likely to book, right?

We want to convince people, so let's focus on the weakest segment that we have. That's the main gap of understanding that people don't see: instead of just finding which segment you need to give the discount to because they're less likely to book or something like this, you need to understand what would happen with and without the treatment.

And I think it's, it's actually a very common problem in causal inference or causal learning: to understand what would happen if I give a discount, what would happen if I didn't give the discount, and to figure out if I'm changing the behavior of the customer or not. Because optimization-wise, from the perspective of the marketer, if the customer would book anyway, you probably don't need to give a discount.

You want to keep the original outcome, and you want to give the discount only in cases where you're changing the behavior of the customer in a positive way. Obviously, it's also not such a dichotomous problem; many times it's just probabilistic, changing probabilities, but you still want to measure this effect.
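
To make the targeting idea concrete, here is a minimal sketch in Python, assuming a randomized log with hypothetical columns (segment, treated, booked): it estimates per-segment uplift and targets only where the discount actually changes behavior. This is an illustration of the concept, not Booking.com's production logic.

```python
import pandas as pd

# Hypothetical randomized log: one row per session, with a segment feature,
# a 'treated' flag (discount shown or not) and a 'booked' outcome.
log = pd.DataFrame({
    "segment": ["new", "new", "returning", "returning", "new", "returning"],
    "treated": [1, 0, 1, 0, 0, 1],
    "booked":  [1, 0, 1, 1, 0, 1],
})

# Estimated uplift per segment: booking rate with the discount minus without it.
rates = log.groupby(["segment", "treated"])["booked"].mean().unstack("treated")
uplift = rates[1] - rates[0]

# Target only segments where the discount changes behavior,
# not the segments that are simply least likely to book.
segments_to_treat = uplift[uplift > 0].index.tolist()
print(uplift, segments_to_treat)  # here: only the 'new' segment
```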

I think maybe what's different in the kind of discounts we're talking about, compared to other promotions that you might have, is also the structure of the cost. Because in many cases, when you go for a marketing campaign or something like this, the fact that you're targeting a specific population or the fact that you're merchandising a discount, you might have a cost associated just with the fact that you're doing the treatment, for example, sending a physical letter to your customer or paying per click in some campaigns.

In this case, we have more of a setup of a trigger cost. So if, uh, the customer actually likes your offer and books, then the cost is triggered. And then you have to pay the cost, but you also have the, the booking. And if the customer won't book, whether it's because of the fact that you showed the incentive, which is quite real, because it sometimes happens that, uh, promotions and discounts can confuse customers and basically scare them away from the platform.

Or whether it's because they never had an intention to book in the first place, which is actually quite common, meaning the conversion rate is quite low. I'm not even talking about, you know, bot traffic and some automatic sessions that are not expected to generate any bookings at all. In these cases, you don't have the cost.

So you're mainly focusing on the cost factor of the customers that booked and then needed to be given the discount, and you associate it together, making sure that it was beneficial to offer.

Alex: That's also related to another topic that is present in your work, in your team's work: that you are operating under some fixed budget or, talking more generally, under some budget constraints.

Dima Goldenberg: Yes. So in a sense, because the promotions are associated with cost, we can't just give the discounts to everyone. We need to control it in some way to make sure that we're not just constantly losing money. But as a growth strategy of the company, we still want to utilize as much budget as possible.

So the budget was either given from the beginning, or maybe it's even something like a self-funding campaign, something that can be incremental, to give the discounts to as many customers as possible. And in a sense, it's an interesting win-win game, where you can have a self-funded campaign and by this give more value to the customers and actually get more discounts from that.

But because you have this constraint, you have a quite interesting problem that may be not very common in the industry, where most of the industry would just try to optimize the profit, the incremental profit, of your campaign. In our case, we're many times looking at incremental volumes, how many new customers you bring, how many new reservations you make, while at the same time you control for the incremental cost.

So I don't want my spend to exceed the predefined budget, or I don't want the return on investment to be below a specific threshold. And in this case, if you have spare budget, extra budget, you actually want to reinvest it and get much more incrementality. So it creates this kind of two-dimensional incremental game, where on one side it's very clear, right?

You have the binary outcome of, um, let's give a discount and see if the customer books or not. And on the other hand, you have the outcome of the promotion in terms of cost. It could lose you money, because you could give a discount where, if you didn't give it, you would make more money. Okay. It could also bring you more money and be incremental in revenue, because the fact that you gave a discount made people spend much more on your platform, and it's incremental.

So, by selecting and understanding how to allocate this, uh, in a clever way, you have this optimization problem where for each session, each interaction, you have an incremental value and an incremental cost. And it actually brings you back to, like, a very fundamental optimization problem, the knapsack problem. Which items should I pick?

Which items can't I pick? And how can I fit within the capacity, the budget constraint? So it's actually also a very nice mixture and combination between causal inference, which is one field, and optimization, which is a completely different field, and they come together in a very nice way.
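
The budget-constrained allocation described here can be sketched as a greedy, fractional-knapsack-style rule: rank units by estimated incremental value per unit of incremental cost and fill the budget. The numbers and the triggered-cost assumption below are hypothetical, not Booking.com's system.

```python
# Minimal, hypothetical sketch: greedy budget allocation of a triggered-cost discount.
# Assumes per-customer estimates of incremental bookings (uplift) and incremental
# cost already exist; with a triggered cost the discount is only paid out when the
# customer books, so expected_cost ~= discount_value * P(book | treated).
customers = [
    # (customer_id, est_incremental_bookings, est_incremental_cost)
    ("a", 0.04, 1.2),
    ("b", 0.01, 2.5),
    ("c", 0.06, 1.0),
    ("d", 0.00, 0.8),
]
budget = 2.5

# Rank by value per unit of cost: the classic greedy (fractional-knapsack)
# approximation to the NP-hard selection problem.
ranked = sorted(customers, key=lambda c: c[1] / c[2], reverse=True)

selected, spent = [], 0.0
for cid, value, cost in ranked:
    if value <= 0:        # never pay for customers we don't expect to move
        continue
    if spent + cost <= budget:
        selected.append(cid)
        spent += cost

print(selected, spent)    # ['c', 'a'], 2.2
```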

Alex: That's great. And I think, I think that's very interesting.

What were the main challenges in your journey to combine this, this optimization perspective with the causal inference perspective and the recommendation perspective?


Dima Goldenberg: So it's actually quite funny because like I used to work in each of these fields separately. I learned a lot of causal inference when I first joined the company and did a lot of A B tests and worked a lot on incremental measurements, etc.

I used to do a lot of optimization work as well; even in my studies, I had a lot of, like, combinatorial, uh, uh, optimization, uh, background, and, uh, obviously machine learning and recommendations. But then I was facing the fact that I needed to solve some problems and started to use each of these components, uh, separately or suddenly all together.

That's what was, uh, bringing it into a really nice, uh, symbiosis, where you can interact and use all of them together. And the applied part is quite different from the theoretical optimal solution. Because if you know the knapsack problem, you pretty much know that, hey, it's NP-hard. It needs a solver to solve it.

And we don't have a good solution for that. At the same time, you have approximations, so you're just using this fractional approximation of, like, how much value divided by how much weight, and you consult, uh, your outcomes. Then on the causal inference side, you learn from a lot of papers and research on how to do uplift modeling, how to basically find the right segments to target, et cetera.

In reality, it's harder, because the data is super noisy, you have a lot of overfitting, you have seasonality, and you're trying to bring it all together to make it work for the business outcomes, to make an impact where you actually need it and to fit the business needs, and not just solve the hypothetical theoretical problem.

I think at this point, it's also very interesting that on one hand you're working in a dynamic environment where basically you can deploy the changes fast and see the reaction of the customers and of the business itself. And at the same time, you learn that the problem evolves, because usually you kick off with, like, hey, I want to sell as much as possible within this budget.

As a starting point, you figure out that once you, uh, deploy this solution to production, even if you get exactly the right answer to exactly the right problem you got from the product people, you suddenly see new constraints, like, hey, I want this segment to be more dominant in the, uh, results, or we have some, uh, fluctuations in the, um, seasonality or something like that.

Maybe you want to deploy this solution on more platforms or products. So the problem is evolving all the time. And you don't necessarily solve it by introducing new variables to the mathematical formulation. Sometimes you just solve it again and again and again in a different setup. Sometimes you're not looking for the optimal solution, but maybe for the most robust solution that will, uh, be helpful and reusable across different scenarios.

And, uh, many times you just need to monitor the outcomes and make sure that you're ready and prepared for these changes, because they will happen. You don't know how and when, but you need to make sure that you are ready for that. I would say there is maybe one key difference in experimentation on discounts, promotions and anything related to pricing compared to, let's say, UX changes or even deployments of other systems.

It is the fact that the environment is very dynamic. So making a change and then just, like, ship and forget is not really an option. Many times you need to monitor it, you need to compare it to a holdout. That's quite a common practice: to see over time what would happen if I didn't give this discount compared to what would happen if I gave it in a certain way, one way or another.

And to see how this evolves over time, how it changes, because the business evolves, there are lots of moving parts around. I just mentioned that we have tons of experiments happening in parallel; even the dynamics of the promotion itself might change. And I think this is something that's very important in any applied machine learning, but in this case specifically, to connect it to understanding the customer.

Because, first of all, the independence assumptions that we have from the original data are not really the case when we're dealing with real users, real problems. And we need to control for that. I would say that, uh, we don't always know how to control for that. So, uh, I think the best thing, the first thing that you need to do, is to acknowledge them, to understand that you have these gaps.

Many times we have, uh, really nice models and offline evaluation. We build a cool uplift model, have really nice Qini curves measuring the best, uh, performing models. And then when we deploy them in production, we see some problems.

And, um, the more understanding you have of what makes the difference, the parity or disparity between online and offline, the better you can improve your models offline, because it's quite costly to test the models online. Eventually, we're giving discounts, so if you're doing it in the wrong way, you might get, uh, sub-optimal results, so it's better to test and train these things as much as possible offline.

Alex: What are some of the most useful strategies, or the strategies that you find the most useful, when it comes to evaluating models online and learning from this online evaluation in order to translate it into something new in the offline evaluation?

Dima Goldenberg: Whoa. Uh, so that's a lot. I think, uh, the first step, and that's actually quite a boring type of work, but very important and very necessary, is to rely on two fundamental things.

One of them is to make sure you're having the same metric online and offline. And while it sounds trivial, because it's coming from different sources and timing, et cetera, it could have quite different definitions. That's one. And the second one is the same with the data: make sure that your data collection process for the online and offline processes is the same and you don't have some fundamental biases or gaps between the two.

And just by that, you're making sure that there's alignment between online and offline. And I would say the best way to test it is, kind of like, make sure that you have a very simple baseline running offline, have this number, and make sure that you're able to get exactly the same measurement when you're running it online.

And if you're not, that's also fine, but you need to understand how much is this gap and that you can predict and expect it in the future iterations. Now, besides that, what we try to do is kind of like to do continuous experimentation. So it's not enough to test some specific treatment in a specific period of time, but actually we're trying to run it continuously to see that the gap between the no treatment and treatment or one treatment to another, uh, stays the same.

And if not, we try to react to that. So this is another solution. And many times what we're also trying to do is to build a bit more robust solution, in the sense of a portfolio. So we don't necessarily have one treatment that is the optimal one and meets the exact thresholds of what we set with the business, but actually several different strategies, sometimes very diverse, sometimes coming from the same, uh, from the same methodology.

And by that, you're kind of ensuring that you always have at least one or two working methods. So if something breaks, you can still rely on, uh, one or another. I will say that eventually, also, something that we've seen working a lot is, uh, the fact that you need to stick to simple models. It's really nice to go, like, to tons of features, very complex models, and, uh, have really nice, uh, benchmarking offline.

But then when you deploy it, first of all, you have a lot of problems with the deployment; then, the more features, the more possible moving parts you have, the harder it is to maintain. And it also makes this, like, problem of overfitting much more complex as you move on.

Because, uh, basically, any new variable that you add might have drift, might have seasonality, might eventually affect the outcome differently.

Alex: There are many moving parts here. And the more variables you add, the more moving parts you risk adding. You said that your team experiments with everything, runs A/B tests, even for bug fixes.

How was your experience when you joined this team and met this culture for the first time?

Dima Goldenberg: So I must say it was quite shocking for me. And, uh, I joined this like experimentation culture where it's, uh, everything is data driven and, uh, just show me the data and you'll get the chance to deploy your solution.

And, uh, everybody's right if you have the right data to confirm it. But it also creates a nice, uh, evolutionary game, I would say, where, uh, the best solution, the one that proves to be better over time, is the one that survives and evolves. It's also quite intimidating, because you have, as I mentioned before, lots of moving parts, lots of teams that are working in similar or different directions, and you need them to converge together.

Sometimes it even creates some parts that could look the same and could even compete with each other on the platform. And then again, you can many times see with the data which one is better. And I think, um, over time, this also comes with the involvement of my team, which is part of a central department that tries to build these solutions for, uh, optimization across the business.

So for different, uh, use cases, for different problems, the idea is to see which parts are useful and repetitive between different solutions and to deploy them either as a common methodology or maybe as a common platform, and give them as a service to the rest of the teams. And eventually it also encourages a culture of collaboration, because you see somebody solve the problem better than you do.

You learn why, how, which features work, how to do the experimentation, and you evolve with this over time. I think in general, what we have at Booking is a lot of causal inference, causal learning problems, where we're trying to understand which changes, which, uh, strategies would work better than others.

And, uh, we have this, uh, opportunity to share the knowledge with the rest of the community. We have quite a big, uh, group of scientists, data scientists, machine learning scientists working on this problem. So coming up with the things that work, together with other teams, different teams, and collaborating on these solutions helps eventually to improve all the different products across the business.

Alex: You mentioned talking to business a lot. What are the main lessons or main insights that you could share with the community regarding communication with business stakeholders, as a person who is representing a technical team?

Dima Goldenberg: So I think that we started that from the beginning, from pretty much the first question of what's, uh, what is recommender system?

And I think you need to understand what is the expected impact of your work. I think many times when you work in machine learning, especially, by the way, not in the causal world, but let's say in many other domains, like even, uh, content, NLP, vision, maybe even in recommenders, many times as machine learning scientists, what you're trying to do is to optimize the accuracy of your algorithm, right?

If you're doing, for example, image recognition, and you're trying to recognize objects in the picture, you're trying to improve the accuracy of your, uh, of your classification model. But what's really important is how it's going to be used. Where is it going to be deployed? How? What is the effect? So, for example, if you have a really good model that can recognize toilets in the picture, that's nice.

You can get to 99 percent accuracy. But if you don't know what the use case is and how it's going to be used, it's going to be really hard for you to move the needle. And I think this is key in everything we do in terms of deploying machine learning: connect it with the actual product need. That also means that we have quite mixed and interactive teams.

So you always have a product manager assigned to your team, trying to basically not just take the models and deploy them to be the most accurate, but actually change and move the needle on the important business needs and connect this model to them. So if the toilet model, recognizing toilets, could be useful for you to maybe onboard new small properties on the platform that didn't tag all the amenities they have in a good way.

Then it's a good win for the business. And you also need to understand this on a scale of, like, how important this change is going to be. Many times you might have good progress in machine learning which has little to do with the incremental value to the business, to the company, to the customer. I think that's also a very important part, because you're talking about, you know, the business, but eventually, uh, as a marketplace, the people who benefit from, uh, from the outcomes are either the people who come to book, uh, on our website, the customers who enjoy this experience, or the people who put services on our website, like hotels or other stuff.

So you need to think about it from the perspective of, like, how they benefit from this, which is nice, because it correlates with the business needs. It correlates a lot with the fact that we just want more deals to happen. And we assume that everybody's happy once, uh, somebody finds the right place to stay.

Alex: What would be your advice to people who would like to improve, technical people who would like to improve their communication with stakeholders, specifically people working in causal inference? So, so people trying to understand the mechanism from the technical point of view. 

Dima Goldenberg: I think not to fall in love with the, with the machine learning work, with the, with the cool technical stuff. Because, uh, I would say I've seen a lot of cases where actually the simplest solution, the simplest baseline solution, you know, deploying some average result, deploying some one number, a magic number, something like that, could have a huge impact on the, on the results, on the customer behavior, on what should happen, etc.

And many times we're trying to, uh, train models, get into fancy stuff, get into deep learning, et cetera. Maybe we should do this, but not as the first step. At the first step, we really need to understand, uh, and even get into this, like, position of, like, what would I do if I didn't know any machine learning at all?

And I actually can give you a cool example that's related to causal inference, but, uh, takes it from a completely different perspective. So I mentioned that we did a lot of recommendations of destinations, of where you should go. And, uh, one of the sub-problems of this problem that we tried to solve is, let's say we recommended you to go to Paris: for how many nights should we recommend you to stay?

Because eventually we need to generate a link that when you click it, it's going to get you to, to the place that, hey, let's book a property in Paris. And we had a really good model that sees the distribution of how many, uh, nights people book at Paris. By the way, it's at least two, three, like people don't stay usually one night there.

And, uh, we've seen that there's, like, a popular amount of time, uh, that people spend, uh, in Paris. And we tried to deploy the solution for each city, maybe even contextually: we can say for how many nights we recommend to stay there. And then we've seen that it didn't help the customers. It didn't make people book more.

It didn't help people to navigate through the page. And basically, it's not the right answer, although we see that in, in our use case and in what we've seen in the data, that's the right answer, that we need to recommend many nights. And, uh, one of the things we realized is that it has a lot of side effects, a lot of other potential things that could affect, uh, people's decisions.

I'll give you an example. If I recommend you to stay in Paris for three nights, you'll get the price of three nights, which is quite expensive. If I recommend you to stay for one night, you'll see a price that's much cheaper and maybe will encourage you to continue looking for properties. The same, by the way, goes with availability.

There are many more, uh, properties available for one night than for three nights, just because of availability concerns. So, in reality, what we see many times is that just showing one night might be much more beneficial, because it keeps people digging more into the experience, and not necessarily the most accurate answer is the right one.

And, uh, that was quite a funny realization, because we worked on this project, I think, for quite a long time, and we figured out that, like, the simple answer was always there under, uh, under our nose.

Alex: That's, in a sense, the beauty of causality, right? We can come up with a hypothesis and then we try to see if our hypothesis works well with the real world, and we get an answer from the real world.

And in this case, the answer was no. 

Dima Goldenberg: Yes. And I think in this case, causality is also not only about, you know, discovering dynamics of like a complex world of like action and reaction. It also has a lot to do with human psychology. Not necessarily rational, right? Cause probably the right amount of nights to stay in Paris is 2 or 3.

But actually, what makes people change their mind, what makes people to get to the desired action. And by the way, it's not that easy to define what is the desired action. Obviously, when you're trying to sell hotels or something like this, probably you want people to book more. But in some cases, it's not that trivial, especially if you're working on other areas of the platform, like customer service or anything else.

So to understand what drives people to move to the desired action, uh, many times you need to involve, first of all, interviews with the customers, uh, UX, user research around that. But sometimes they don't even know it themselves. They make these decisions irrationally and wouldn't confess that that's the reason why they're making this choice.

And you just see it in the data. So it's also the beauty of running experiments and just, uh, uncovering what happens there. Sometimes you feel like you hack the mind of the people and you're like, Whoa, so that's what happens. That's actually the, the patterns, the behavior that happened. And many times it's counterintuitive.

And the fact that you can connect between people, behavior, uh, from one side and the data and the results from another side, I think it's also one of the most exciting parts of my work. 

Alex: What was the most surprising finding about human psychology that you encountered in your work?

Dima Goldenberg: Ooh, it's going to be hard to point at one particular thing. 

I tried many times to test different, you know, cognitive, uh, I would say urban legends of like, of what happens like with, I don't know, choice overload or, uh, price comparisons and other stuff with, uh, users, I would say like without even pointing a specific example, I would say that this is consistent.

And, uh, you see it a lot of, like, even running a very successful and significant experiment on one setup, one platform. Kind of like, you know, you have this, like, excitement of, like, hey, I finally proved it that maybe I should show it in this way, present the user experience in, like, in red color, in blue color, whatever.

And then you're trying to repeat exactly the same experiment on a different platform. Let's say it worked for me really well on like on laptop. Now I want to make sure that it works the same way on the apps. So you're super confident, you deploy exactly the same experiment, changing, let's say, the color of the button or like, I don't know, adding a pop up.

And you just see completely inverse results. And you figure out that, hey, wait a second. So it's not that consistent. Or maybe you see that people from different countries react differently. Or maybe you sometimes see that just the same person half a year later could, uh, have a different effect. So I think, actually, I would say the most surprising fact about, uh, human psychology, that it's not super consistent.

It's really hard to predict. And while you do learn patterns, you still need to make sure that you validate it a lot. And I think, um, at the beginning, I was really confident with lots of claims that like A causes B and like, you know, uh, trying also to sell these ideas to other product teams of like, yeah, you should do this thing.

I think the more I work on these, like, problems, the more I kind of become humble and understand, like, wait a second, the fact that it worked for us is not necessarily going to mean that it's going to work once again if you repeat it. It could be about using a specific machine learning methodology.

It could be about measurement, it could be about, uh, UX, uh, differences or anything. But that's the beauty of it, that we have the opportunity to measure the results. So you could always say, maybe it's going to behave like this. Let's test. And I think this is kind of my go-to approach.

Like, we have some hypothesis, we want to try it. We have some, uh, hunch, like, we want to have this thing working this way, but the final solution, the final result that you're going to measure is going to be on the actual, uh, causal A/B test. That's going to show you that, yeah, you were right, that works.

This one didn't work. 

Alex: It also shows us the limitations of A/B tests, right? That external validity is not something that we can take for granted, in the sense that if we have shown the effect in one environment, it doesn't automatically translate to another environment or another population.

Dima Goldenberg: Yes, I think when you read papers, especially like again, let's go to psychology papers that come to conclusions that I don't know that people from a specific race would prefer a specific color or specific gender would do that and not that.

You always, like, you know, open it up, like, hey, who, who participated in this particular experiment, when was it done, by whom? What is the background of the people? Because yes, maybe this is the result that you've seen as significant; maybe for the specific people that you ran this experiment on, it was actually significant and different.

But if you want to replicate this result, many times you will face different problems, even if I'm trying to test exactly the same thing but I'm doing a small tweak, uh, not in the way that the recipe said, and, uh, you need to validate it again and again. Which, again, I would say that in, uh, a business environment, when trying to improve, uh, your platform, you need to keep evolving and you need to keep monitoring the changes, because even if something worked really well for you a couple of years ago, it doesn't say anything about how it's going to work now.

Alex: For people working with causal inference, whether it be experiments or observational data or mixed observational and experimental data, it can sometimes be challenging, especially if they're new to the field, to get used to this idea of thinking in counterfactuals, comparing those counterfactual outcomes.

It seems that it might violate some of our natural inclinations or natural intuitions about how to think about causality. What's your experience with this?

Dima Goldenberg: So first I would say it's, it's also hard for people who are experienced in the field. Like, uh, me and my team, we've been working on, uh, this, like, uplift modeling, causal inference for, for the last three, four, five years.

And still many times we face this dilemma of, like, wait, we never thought about what would happen if we didn't do that. And, uh, I think this kind of thinking requires an extra level of, like, of cognitive load, of understanding that there's a completely different scenario, that I could do completely different actions and the outcomes could be different.

And, uh, you can fall into this when you're trying to build a treatment and to understand that, like, Hey, I'm not looking at the causal effects. I'm looking at just like specific thing. But from another side, you also never have these two scenarios at the same time, right? You never know what would happen if I do A and if I do B at the same time, you don't know what would happen if I give the coupon and don't give the coupon in the same time.

So you never know what would be the actual outcome. So a randomized A/B test kind of helps you to do that, but still you don't have this particular specific person, this specific, uh, change happening with the same, uh, people twice. And it's also hard, you know, when you're modeling it. So even if you get into the side of, like, let's evaluate the models, let's understand if the model is good or bad, you don't have true labels.

It makes the explainability of the model harder. It makes the monitoring of the model harder. It makes the benchmarking of the model harder, because you don't really know what would be the optimal action. You do get it sometimes when you get into simulations. This is a game we like to do: hey, let's just simulate a world where you have the outcomes of both scenarios and understand which one would be better.

But this is very much, like, you know, extensive thinking and kind of our way to come to a solution for something that's hard for us to grasp. Now, if you take it to, you know, normal people who don't get to use causal inference all day, uh, counterfactuals are even harder. Like, if I'm looking at different decisions that I had to make back in my life, like some, I don't know, some dilemmas that I faced, like whether I should study this or that.

You don't really build this scenario of, like, hey, there are, like, these turning doors where I'm going to go to scenario A and end up like that, or go to scenario B and end up like that. So it's really hard to understand what would happen if you didn't do this. But once you grasp it, you can see it everywhere, right?

You can understand it in, uh, when you're getting into supermarkets and you see like some deals and you understand, wait, so they give this discount and now they can sell much more. They actually make me, uh, buy some, I don't know, some cheese that I never planned to buy if they didn't have this discount.

Or you can see it in public policies, uh, basically some incentives, some changes in the policies that make people change their behavior and, uh, I don't know, pay more taxes even in some cases, or buy apartments a little earlier because they know the tax is coming. So you see a lot of things of what would happen if you do something differently, and you see these policies.

But it's very easy to be, like, you know, smart in retrospect, to see people, uh, changing their behavior and moving in a different direction. You're not necessarily able to predict this behavior in advance, especially if you're the one that makes the policies.
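
One way to picture the simulation game mentioned a moment ago, where both potential outcomes are known, is a small synthetic example. Everything below (the outcome model, the policies) is invented purely to illustrate how a targeting policy can be scored when, unlike in reality, both scenarios are observable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated world where, unlike in reality, we know BOTH potential outcomes
# for every customer: booking probability without and with the discount.
p_control = rng.uniform(0.01, 0.10, n)
true_uplift = rng.normal(0.02, 0.02, n)        # some customers react, some don't
p_treated = np.clip(p_control + true_uplift, 0, 1)

y_control = rng.random(n) < p_control
y_treated = rng.random(n) < p_treated

def policy_value(treat_mask):
    """Bookings generated if we treat exactly the customers in treat_mask."""
    return np.where(treat_mask, y_treated, y_control).sum()

treat_everyone = np.ones(n, dtype=bool)
treat_positive = true_uplift > 0               # the (unobservable) oracle policy
print(policy_value(treat_everyone), policy_value(treat_positive))
```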

Alex: Yeah, I think that there's so, um, there's so much to talk about when it comes to human psychology.

I would like to take a step back now and get back to the point where we started our conversation. So go back to this idea of causal recommender systems.

If you could give a short introduction for those people in our audience who are not familiar with your work: what kind of ideas or what kind of technological building blocks have you used in your work on those recommender systems?

Dima Goldenberg: So I would split it, because, like, there's the whole field of causal recommender systems, which I would say could be another podcast, and we could talk about it a lot.

I think that what we're working on is a bit more of a simplified version of it, at least how we call it: uplift modeling. It's also sometimes referred to as heterogeneous treatment effect estimation, basically trying to find the conditional average treatment effect of some treatment. So you have a set of treatments.

It might be just two treatments, like give a discount or don't give it. It could be multiple treatments, like which one should I give? And then you can get closer to recommender systems. And basically, when you're trying to solve this problem, you need to understand which outcome is better, and better also depends on which, um, which metrics you define.

Sometimes, if you have just one metric, it's quite easy. You just measure which one is better. If you have multiple metrics, then maybe you need to do a multi-objective optimization. Maybe you need to do some kind of Pareto tradeoff between the different goals. So we start, and the nice part is that we almost always start, with, uh, randomized data.

Technically, you could also do a lot of causal inference and causal recommendations from observational data. And it happens a lot, especially when you're doing, like, things like ranking, recommendation. You always have biased data unless you're doing full randomization, but even then you might have some bias. So if you have biased data, you need to de-bias it with some technique, either inverse propensity weighting or some other techniques and approaches, but you need to come to an understanding of how do I de-bias the different problems that I have in my data. If you're starting from scratch and you have, let's say, just two treatments, uh, it might be easier, because you're starting with a randomized dataset.
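
As a rough illustration of the inverse propensity weighting mentioned here for de-biasing observational (non-randomized) logs, here is a minimal sketch; the data is simulated and the estimator is a standard weighted-means form, not Booking.com's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical logged data: features X, non-randomized treatment t, outcome y.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
t = (rng.random(5000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)   # biased exposure
y = (rng.random(5000) < 0.05 + 0.03 * t + 0.02 * (X[:, 1] > 0)).astype(int)

# Step 1: model the propensity P(t=1 | X) from the logged data.
propensity = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# Step 2: re-weight each observation by the inverse probability of the
# treatment it actually received, then compare weighted outcome means.
w = np.where(t == 1, 1 / propensity, 1 / (1 - propensity))
ate_ipw = (np.sum(w * y * (t == 1)) / np.sum(w * (t == 1))
           - np.sum(w * y * (t == 0)) / np.sum(w * (t == 0)))
print(round(ate_ipw, 4))
```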

So you have, uh, treatment A and outcome of treatment A, treatment B and outcome of treatment B. And then you can measure, first of all, the average treatment effect. So was treatment A better than B or the opposite? Let's say we're offering discounts. Many times it happens that people prefer discounts and like them.

So they book more, they have a better outcome with the discount. And then we get into the problem that we might have too many discounts that we gave away; we need to balance it and control it in some way to fit into the budget. So, by the way, without this constraint, it would be an easy, trivial solution, which some, uh, people don't think about.

Like, sometimes it might be okay just to give the treatment to everyone, and everybody's gonna be happy with that. We don't necessarily need to start with the modeling again; maybe the trivial solution, uh, could be good enough. So then you get into another problem, that you can't or shouldn't give the treatment to everyone.

You need to balance it. So first of all, it could happen that not everybody has a positive outcome from the treatment, right? With a discount, it's quite, uh, straightforward that people prefer a better price. They prefer a discount. So many times the discount will have positive effects. Sometimes it doesn't, especially if the discount offers you a product that you never planned to get, and then it just takes you completely off the plans that you tried to, tried to follow, but in most cases it does.

And then if you could give the discount to everyone, that's good. But if you can't, then you need to understand who is the most suitable segment for this, how you can optimize it. And by the way, the segment doesn't necessarily have to be usual segment. It could be anything else. It could be based on other properties.

And then you need to find a way that says what would happen if I give the discount and what would happen if I didn't give the discount, which is the causal modeling, eventually the uplift modeling. One of the ways we're doing that is by using a technique that was developed at Booking, Retrospective Estimation, basically training an uplift model just from the converted data.

And with that, we could understand who should get the discount and who should not. And we find that it eventually gives you some kind of score. Okay, so basically, what is the potential impact, the potential outcome, of each individual? And by that, again, if I could give it to as many as possible, I would. So I need to find what is the right threshold, when to stop, and what are the segments that are going to be affected and what are the segments that are not.

That's done usually with causal measurements like Qini curves, where I'm trying to understand what is the optimal threshold point. And with that, we usually go to production and see if our policy is right. And I think the fact of tuning the offline outcome and comparing it to what's happening online, that's, let's say, a separate type of expertise, I would say, that you need to, like, learn and iterate on, and many times learn from mistakes and from applications that could be different from case to case, from company to company.

But I would say that the key for that is just to see what happens and react to the data. 
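
For readers unfamiliar with Qini curves, here is a minimal sketch of how such a curve can be used to pick a targeting threshold from randomized test data, assuming each customer already has an uplift score from some model; the scoring model and the data are simulated purely for illustration.

```python
import numpy as np

def qini_curve(score, treated, outcome):
    """Cumulative incremental bookings as we target a growing share of
    customers, ranked by uplift score (randomized treated/control data)."""
    order = np.argsort(-score)
    t, y = treated[order], outcome[order]
    cum_t = np.cumsum(t)
    cum_c = np.cumsum(1 - t)
    resp_t = np.cumsum(y * t)
    resp_c = np.cumsum(y * (1 - t))
    # Scale control responders to the treated group size at each cutoff.
    with np.errstate(divide="ignore", invalid="ignore"):
        scale = np.where(cum_c > 0, cum_t / cum_c, 0.0)
    return resp_t - resp_c * scale

rng = np.random.default_rng(2)
n = 2000
score = rng.normal(size=n)                      # uplift scores from some model
treated = rng.integers(0, 2, n)
outcome = (rng.random(n) < 0.05 + 0.03 * treated * (score > 0)).astype(int)

q = qini_curve(score, treated, outcome)
best_k = int(np.argmax(q))                      # cutoff with the largest incremental gain
print(best_k, q[best_k])
```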

Alex: As a kid, you liked to build stuff. How do those early experiences of building things, of designing a game with your grandfather, impact the way you see your work today?

Dima Goldenberg: So that's interesting. I think that, uh, you know, we take this anecdote of one thing that I did is like, uh, I've seen other kids have the Monopoly game, uh, and, uh, didn't find where to buy it.

It wasn't even a problem of, like, can't, can't afford it. I just didn't find it. So, like, we decided, with my grandpa, to, to build this game ourselves. And, uh, we had the first version of this. I went to the backyard to play with kids. We were, uh, I don't know why we didn't play with the ball. We just sat and played with that.

But, uh, okay. And, uh, I found it too easy. Like, I played with the original Monopoly. I remember how, like, how fascinating it was. And this one was too easy. So I guess I came back. I think it was like, I was also going to my grandpa. So it was like a month after month after month. And we decided that we want to make it a bit more complex, a bit more rules and bit more constraints.

And, uh, the game became more interesting. We also saw that some mechanisms don't work, some of the design could, uh, could work differently. It wasn't necessarily about the economy or, like, you know, other stuff of Monopoly, just how fun it is. And, um, I think if you're trying to find the analogy with the work we're doing today, I would say:

Again, you try to solve a very complex problem, okay, with lots of moving parts, uh, sometimes big budgets, lots of customers. You should start with a simple solution. And it's definitely not going to be the optimal solution. I think that's the biggest gap between, like, you know, academia and industry. You need something that is better than the current one.

You try it, you learn from this, and you start to introduce more and more complexities. For example, when you're doing promotion allocations or discounts, you can start with, like, you know, the simple understanding of, like, can I even give this promotion? So, like, just run a test and see what happens. See if it's beneficial, if it's losing money, if customers react well to this.

Again, many times the problem or the key is even in the UX and how you communicate it. And let's assume it works and you want to allocate it, optimize it. You would start with, like, a simple lever, like give or not give. So when should I give it, when shouldn't I give it? Then you can make it a bit more complex and get to the point where you can have another lever: maybe which promotion to give, maybe I can give different discount levels.

Maybe I can offer promotions on different products. You can make it even more complex of when to give it or in which part of the user experience. Uh, maybe I should give it at the beginning of the journey. Maybe I should give it at the end of the journey. Maybe we should do this as a incentive that comes later.

So the timing could play a role. Uh, and you can make it more and more complex by introducing more and more levers. Like even on which items. Maybe I shouldn't give the discount to all of the items. Maybe I should give it only to some. And the optimization problem becomes very complex. Sometimes you solve them independently and hope that it's just gonna work.

Sometimes you're trying to build a complex system that's gonna take into account some of these moving parts together. Most of the time it doesn't work, but you still have some approximate solution of how it works. So I would say that seeing how this mechanism evolves, and also seeing the reaction and the feedback, is something very interesting in, like, designing such systems.

Alex: It sounds like you were engaged with this game with your grandfather as a really long-term project. It was not something that happened over a day or two.

Dima Goldenberg: Yeah, that's because like, you know, the friends that I had over there, it's like it was in a different city. So I was like coming over for a weekend or for like for some vacations and like playing with like, uh, other set of friends and like, you know, and then I had like a break of like of a week, a month or two that like they were in between.

So every time I came back, it was, like, you know, a completely new game. Yeah. Yeah, I wouldn't say that this, like, you know, this part was, uh, very much, you know, life-shaping, etc. But if I try to remember, like, these games of Monopoly, I remember that we had several iterations of that.

Alex: What keeps you going today?

Dima Goldenberg: Oh, that's interesting. First of all, I think it's, uh It's just enjoying your, like, your environment, life, work, whatever, like here at Booking, for example, I'm almost seven years, seven years here, and I really enjoy the people, the environment, the individuals that I interact with, the team that I have. Like people in my team, people in teams that I interact with, and, uh, you know, just beside the fact that they're super smart, uh, I also just enjoy, like, you know, spending time with them.

So I think that's really, that's something that drives me a lot. And, uh, I would say that, uh, on the professional level, it's that, uh, you have a lot of opportunities to try stuff, to keep learning, to see how it works, to see how things react, and to kind of keep exploring, keep, uh, learning new stuff. And, um, in particular, I'm quite lucky that I have the opportunity to test different experiences at a very big scale and see the world of data, how it reacts, and really see big volumes of things that are, uh, moving and changing.

And in particular, like, making this impact on, uh, lots of travelers, seeing it, like, changing a lot of stuff. And even, like, you know, we're talking about, uh, something that I'm using in my day-to-day as well. Like, you know, my parents planning, uh, a trip, they're also using the same tools. They see my work. So I think it's really cool that you see this, like, almost feedback loop of, like, the work that you're doing and, uh, the feedback that you get, either from the immediate, you know, metrics and the work, or even from your, uh, friends.

Alex: What would be your advice to people who are just starting with causality, causal inference, or machine learning in general?

Dima Goldenberg: I think I've repeated it a couple of times, but first of all, keep it simple. Try to understand the fundamental things that shape the problem. First understand the problem you're trying to solve, and only then pick the tools you need to solve it, because, I mean, I've been working on this quite a lot and I'm still making this mistake again and again and again.

Luckily, I have enough good, smart people around me so that things don't get overcomplicated, and we really try to keep it simple and come up with simple solutions. Don't be ashamed of that. I think this is also something very typical of people who are starting out in machine learning.

They're almost embarrassed by simple solutions, because they feel like they didn't really accomplish what they were hired for. And it also depends, I think, from company to company and from one environment to another; maybe different things are appreciated. But especially if you're working in a business environment that's looking for impact, try to understand what actually moves the needle, what actually changes and matters, and then direct your effort towards that, not necessarily towards fancy, cool stuff, but towards the things that work.

Alex: Yeah. Today's deep learning culture often promotes jumping into architectures without understanding the problem.

Dima Goldenberg: I think that even then, deep learning is not a bad thing. It's super useful and it has a lot of cool solutions, including, I would say, the recent advancements in generative AI, et cetera.

But then again, you need to understand how it's useful for you. So, for example, how do you connect causal inference and generative AI? It's highly non-trivial. You could get to the point where you say, hey, I want ChatGPT to just tell me whether I should give a discount or not, but obviously it can't just train on my data and understand how it works, at least for now.

So maybe what you can do is get some context features, something you couldn't otherwise get from unstructured data, and introduce them into your model so you can find some interactions with them, but you also need to be super smart about it. And when you say architecture, I would say that most of the complex problems you need to solve are not actually about how to design your deep learning network or neural network.

Many times you just have pre-trained networks that do a really good job. What you actually need to understand is how to plug together a lot of different parts, each responsible for a different task, to solve the ultimate task in your problem. And connecting back to causal inference, I think one of the most fundamental things in machine learning that you need to understand before doing any modeling is the evaluation.

You need to make sure that you evaluate the right metric, that it's consistent and correlated with what you're trying to solve, that it's useful across different applications, and that it's easy for all your stakeholders to understand. When you crack the evaluation in the right way, it's quite straightforward to understand what you need to improve, and then you only have the problem of how.
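Since Dima singles out evaluation as the fundamental piece, here is a small, hypothetical sketch of one common way the uplift-modeling literature evaluates such models: a Qini-style cumulative-gain curve computed on a randomized holdout. The data below is synthetic and the function names are my own invention; the point is only that the metric is computed from (score, treatment, outcome) triples of an A/B split, not from model internals.

```python
# Sketch of uplift-model evaluation on a randomized holdout using a
# Qini-style cumulative-gain curve. All data here is synthetic.

import numpy as np

def qini_curve(scores, treatment, outcome, n_points=10):
    """Estimated incremental outcomes when treating the top-k% ranked by score.
    treatment: 1 if the unit was in the treated arm of the randomized holdout.
    outcome:   observed conversion (0/1) or revenue."""
    order = np.argsort(-scores)
    t, y = treatment[order], outcome[order]
    curve = []
    for frac in np.linspace(0.1, 1.0, n_points):
        k = int(frac * len(scores))
        nt, nc = t[:k].sum(), (1 - t[:k]).sum()
        yt, yc = y[:k][t[:k] == 1].sum(), y[:k][t[:k] == 0].sum()
        # Scale control outcomes to the treated-group size before differencing.
        incremental = yt - yc * (nt / nc if nc else 0.0)
        curve.append((frac, incremental))
    return curve

# Synthetic example: 1000 units, random 50/50 assignment, a noisy model score.
rng = np.random.default_rng(0)
n = 1000
treatment = rng.integers(0, 2, n)
true_uplift = rng.uniform(0, 0.2, n)
base = rng.uniform(0.05, 0.15, n)
outcome = rng.binomial(1, base + treatment * true_uplift)
scores = true_uplift + rng.normal(0, 0.05, n)   # an imperfect model

for frac, gain in qini_curve(scores, treatment, outcome):
    print(f"top {frac:.0%}: ~{gain:.1f} incremental conversions")
```

A steeper curve at small fractions means the model is ranking the most persuadable customers first, which is exactly the "right metric" question Dima raises: it measures incrementality rather than raw prediction accuracy.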

Alex: What are two books that changed your life? 

Dima Goldenberg: That's a tricky question. I'm not too much into books, to be honest. I like a lot of popular science. I told you that back when I was at university, I organized a lot of popular science talks and was actually exposed to a lot of super smart people talking about their books.

But I barely read those books. I think one, maybe two books actually helped shape a lot of the things that interest me, related to, let's say, cognitive science and rationality. One of them is Predictably Irrational by Dan Ariely, and after reading that, I went to the hardcore foundation of it, Thinking, Fast and Slow by Kahneman.

And he passed away just about a month ago, so that was quite a big loss for science in general, and maybe also for my view of the connection between people's behavior and what you can learn from data. I took a couple of courses to connect my engineering and data science background with a bit more of how people make decisions, how behavioral scientists think, and even my research was almost always around these topics of marketing and seeing what makes human behavior change.

And again, understanding these patterns, and understanding the fact that you can really find repetitive patterns in the human mind, whether rational or irrational, because I think rational and irrational are concepts that come much more from economics and optimization than from psychology.

Figuring out that that's how it works, and that it repeats from person to person, is quite cool. And when you work in the field of machine learning and AI, you see this come up a lot when you're training models, right? Models are eventually the outcome of what they're exposed to.

What they see in the data is what's going to happen. I'm a fresh dad, I have a six-week-old child and an older one, so I also have some experience with that. And, I don't know, maybe it's professional deformation, but you also see this in kids, right? They're just exposed to examples, to data, and they react, and they make mistakes, and they learn from them, and you see how they shape their understanding of the world.

They see objects moving and at first they don't understand what that is; then they understand it's their own hand and they can respond to it; now they understand they can control it; and then they understand that maybe they can't move themselves yet, but they can point at some object and tell you, as a parent, what to do.

And the fact that they're crying is actually the optimal treatment to get your attention. That's how they basically optimize their goal, right, of getting your attention, of getting some love from you, by crying. You actually see this: they try different things, and this is the one that converts the most.

So it's quite fascinating to see that this is how the human mind works, and it's pretty much what we're trying to teach machines to do.

Alex: Who would you like to thank?

Dima Goldenberg: To thank? Who? That's a long list, I don't know. Look, I think I'll start with my family. I learned a lot about this, and came to understand it, on the go. I moved here to Israel when I was 10, and my parents kind of left everything behind. They had quite nice careers and lots of opportunities, and they immigrated, moved to a different country, with kind of a new start. And I feel like they invested a lot in me.

And this is something I understand in retrospect, not necessarily something I felt back then. Lots of my education, lots of different opportunities, so lots of thanks to them. Probably my grandparents as well; I just told you this story about building a game with my grandpa, and I never thought it would be such a life-shaping experience.

So I would say the older you get, the more thankful you are for your family. And professionally, I'm really inspired by my colleagues, whether it's my teammates or people who report to me, who give me new challenges, new opportunities, and new learnings every day.

People I have fun with and just enjoy spending time with, and also other people there's a lot you can learn from. So I would split it across many different people who shape my experience: obviously my new family, the kids who help me understand the world better, and obviously my wife, who supports me in all of that.

So, lots of people, from lots of different perspectives. I really like the fact that many different experiences from the past end up connecting the dots, sometimes only in retrospect. You're exposed to some idea by someone and you don't understand why it's useful for you, and then a few years later you get to use that particular thing you learned and heard from that person.

You might not realize that a particular thing you learned from one person at a certain point in time could be really useful later, and that it converges with learnings and knowledge you pick up from other people and other experiences. It shapes you a lot. I like to look at everything I learned, every learning opportunity I had, every course I took during my degree, as something I found useful later in my professional career. I know that many students don't see it that way; maybe they don't believe it. "Why am I studying this, I don't know, operations research course? It's never going to be useful for me in my data science work."

And then, oh, seven years later you're building a knapsack-problem solution and things like that, or any other experience that could sound very silly, very simple. But then it might be helpful, even if not directly then indirectly: knowing the idea, having seen it somewhere, connecting the dots. It makes you better.
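For readers who skipped that operations research class, the callback is to the textbook 0/1 knapsack problem, which maps naturally onto promotion budgeting: which campaigns to fund, given a fixed budget, to maximize expected incremental value. A minimal dynamic-programming sketch, with made-up numbers, might look like this:

```python
# Textbook 0/1 knapsack, the kind of operations-research tool alluded to above:
# e.g. choosing which campaigns to fund so total expected uplift is maximized
# without exceeding a fixed budget. All values and costs are invented.

def knapsack(values, costs, budget):
    """Dynamic programming over integer budgets; returns the best total value
    and the indices of the chosen items."""
    n = len(values)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                      # skip item i-1
            if costs[i - 1] <= b:                            # or take it
                take = best[i - 1][b - costs[i - 1]] + values[i - 1]
                best[i][b] = max(best[i][b], take)
    # Backtrack to recover which items were taken.
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)

values = [12.0, 9.0, 7.0, 4.0]   # expected incremental bookings per campaign
costs  = [5, 4, 3, 2]            # campaign budgets (integer units)
print(knapsack(values, costs, budget=7))   # -> (16.0, [1, 2])
```

The greedy per-customer allocation sketched earlier in this transcript is essentially a relaxed, larger-scale version of the same idea.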

Alex: What question would you like to ask me? 

Dima Goldenberg: I noticed that you're investing a lot into connecting with people, doing this podcast and all this work, with, I would say, a mission behind it of connecting a lot of ideas and opportunities together.

So what would be the nicest, the greatest outcome you would hope for from meeting all these people?

Alex: Oh, that's a beautiful question. I think I already have outcomes that are beyond what I imagined when I was starting with all of this, with the book and the podcast.

I've had wonderful opportunities, and I learned so much recording this podcast, more than I ever thought would be possible. But if you ask me about the biggest, biggest goal, it would be creating a platform that leads to a resource, maybe a book, maybe a series of books, that brings together different perspectives on causality.

Perspectives from continuous optimization, reinforcement learning, operations research, experimentation, and formal causal frameworks like Pearl's formalism, and so on. Because I must tell you, when I talk to people in different sub-areas of causality, I see that sometimes those people are trying to solve problems that are already solved in another sub-niche, in another little ghetto.

And they don't even know that other people were working on a similar problem, because there is a citation gap in the literature and nobody ever cited the person from that other stream.

Dima Goldenberg: I've noticed this a lot, especially when you're jumping from one discipline to another.

Say you have economics and you have healthcare; they might have different terminology for the same things. At some point we had one employee who had a lot of experience with heterogeneous treatment effect work but had never heard about uplift modeling.

And I think it took us a week to understand that we were talking about the same things. The fields rely on completely different literature because they cite different sources, and they're basically working on the same things with different formulations and different terminology, but trying to solve pretty much the same problems.

Actually, I think he even wrote a paper bridging the gap between heterogeneous treatment effects and uplift modeling, to show how similar they are. Personally, I've had the experience of talking about what I do with people from healthcare, and while many of the things we do sound very similar, you still see that the problems we're trying to solve are very different.

While I'm trying to find the optimal policy, they're sometimes looking for the most robust policy. We're using exactly the same tools and exactly the same approach to the problem, but we call it by different names and evaluate it in different ways. And I see a lot of opportunity in that converging.

And I also connect to the point you mentioned, that people don't know the problem is solved somewhere else. Sometimes we don't even notice this within the company: we solve a problem, we're super proud of our solution, we showcase it, and then we hear that another team tried the same thing two years ago and it already works.

And you have this "aha" moment: maybe we should communicate more. I think a lot of it is about building community, talking about this, and even aligning the terminology.
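As a small illustration of the terminology gap Dima describes: a two-model ("T-learner") estimator is described as estimating the conditional average treatment effect (CATE) in the heterogeneous-treatment-effect literature and as producing an uplift score in the marketing literature, yet the code is identical. A hedged sketch on synthetic data (scikit-learn assumed available; nothing here is Booking.com's implementation):

```python
# Two-model (T-learner) sketch: the per-unit prediction mu1(x) - mu0(x) is
# read as CATE in econometrics and as the uplift score in marketing.
# Data is synthetic with a known effect, only for illustration.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 3))
t = rng.integers(0, 2, n)                          # randomized treatment
tau = 0.5 * (X[:, 0] > 0)                          # true effect only where x0 > 0
y = 0.3 * X[:, 1] + t * tau + rng.normal(0, 0.1, n)

# Fit one outcome model per arm...
mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])

# ...and the difference in predictions is the CATE / uplift score.
uplift = mu1.predict(X) - mu0.predict(X)
print("estimated effect where x0 > 0: ", uplift[X[:, 0] > 0].mean().round(2))
print("estimated effect where x0 <= 0:", uplift[X[:, 0] <= 0].mean().round(2))
```

The printed per-segment estimates recover the synthetic ground truth (roughly 0.5 where x0 > 0 and roughly 0 elsewhere) regardless of which name you give the quantity, which is exactly the point of the bridging work Dima mentions.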

Alex: Yeah, and I'm a firm believer in building more bridges between those ghettos.

I think we have wonderful opportunities awaiting us there regarding many of the problems we're facing today. Dima, where can people learn more about you, your team, and your work?

Dima Goldenberg: So, luckily, we've had quite a lot of publications in this field as Booking.com, specifically on uplift modeling and causal inference. Causal inference in A/B testing is a very strong tool that we also talk about, and uplift modeling specifically. We had a couple of recent tutorials at CIKM and at The Web Conference (WWW), we have a couple of papers published in recent years, and multiple talks presenting these problems. In general, we really like to talk to others about how we solve these problems, because we also learn a lot from the industry and how things work elsewhere.

I've seen that the causal machine learning community is also evolving a lot. There are lots of workshops at different conferences and lots of Python packages for doing this. Sometimes they do exactly the same thing and it's not really clear what the difference between them is, but you have a huge group of people to collaborate with on this.

So I think if you just look up the content we've released at Booking on causal inference and uplift modeling, you'll see a lot. And personally, I got so excited about this that, after working on all of it, I actually started a PhD and began researching it. So I keep working on creating new knowledge on this topic.

Alex: That's really amazing. What's your message to the Causal Python community?

Dima Goldenberg: Whew! Share your learnings, your results, your approach, and don't just assume that you're stuck somewhere. Talk to others and see if they have a better solution, or any solution, to your problems.

Alex: Dima, that was a wonderful conversation.

Thank you. 

Dima Goldenberg: Thank you.
