
Kaggle Days Paris — Winning Solution


Kaggle Days is (almost) officially the most interesting event for meeting, learning from, and competing against the most talented data scientists worldwide. And this is how we won it.

In January 2019, the city of Paris hosted the second-ever Kaggle Days event. More than 200 data scientists from all around the world gathered to learn, share knowledge and eventually compete against each other in an 11-hour in-class Kaggle competition that took place during the conference. This blog post describes our solution to the competition, which won us 3rd place.


Predicting Sales of LVMH’s Luxury Products

Challengers were provided with sales data from the first seven days after Louis Vuitton products launched on www.louisvuitton.com. The goal was to forecast sales in each of the three months following the launch. Additional data included product descriptions, sales statistics, social media, website navigation, and image data.

Screen capture from louisvuitton.com

Strategy

As in many of our past Kaggle competitions, we followed three simple tactics:

  1. Dumb features, complex models — build simple features and let the ML models “understand” the complex interactions between them.
  2. Embrace diversity — train many models and ensemble the best leaderboard-performing models with the least overlap in their errors (see the sketch after this list).
  3. Teamwork — each team member has a well-defined responsibility: the Feature Extractor, the Modeler and the Integrator.
Team members: Gad Benram, Seffi Cohen and Nurit C Inger
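To make tactic 2 concrete, here is a minimal sketch (not our competition code) of one way to measure error overlap: compute the pairwise correlation of model residuals on a validation set, then blend the models whose errors correlate least. All model names and data below are hypothetical.

```python
import numpy as np
import pandas as pd

def error_correlation(preds: pd.DataFrame, y_true: np.ndarray) -> pd.DataFrame:
    """Pairwise correlation of model residuals on a validation set.

    Models whose residuals correlate least are the best ensembling candidates.
    """
    residuals = preds.sub(y_true, axis=0)
    return residuals.corr()

# Toy example with synthetic predictions; in practice these would be
# out-of-fold validation predictions from the trained models.
rng = np.random.default_rng(0)
y_val = rng.normal(size=500)
preds = pd.DataFrame({
    "lgbm_gbdt": y_val + rng.normal(scale=0.30, size=500),
    "lgbm_rf": y_val + rng.normal(scale=0.35, size=500),
    "ridge": y_val + rng.normal(scale=0.50, size=500),
})
print(error_correlation(preds, y_val))

# Simple blend of the two least-correlated models.
blend = preds[["lgbm_gbdt", "ridge"]].mean(axis=1)
```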

Baseline

Before even looking at the data we stood on the shoulders of giants and used the Kaggle Days team’s baseline script, which included numerous features. We ran a simple Random Forest model of no more than 100 trees on the entire training data, just to see the importance of each feature. The results came as an absolute surprise: the log of the first 7 days’ sales received a 20× larger importance score than all other features. This piece of information allowed us to control the variance of the generated models to some extent.
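For illustration, here is a minimal sketch of such an importance check. The column names and synthetic data are made up; the actual baseline script’s features differ.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the competition data; column names are hypothetical.
rng = np.random.default_rng(0)
n = 1000
train = pd.DataFrame({
    "sales_first_7_days": rng.lognormal(mean=3.0, sigma=1.0, size=n),
    "page_views": rng.integers(100, 10_000, size=n),
    "n_images": rng.integers(1, 8, size=n),
})
# The target (monthly sales) is driven mostly by first-week sales here,
# mimicking what the importance check revealed.
y = 10 * train["sales_first_7_days"] + rng.normal(scale=5, size=n)

# Log-transform the dominant feature, as in the baseline.
train["log_sales_first_7_days"] = np.log1p(train["sales_first_7_days"])

# A small forest (<= 100 trees) is enough for a quick importance check.
rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(train, y)

importances = pd.Series(rf.feature_importances_, index=train.columns)
print(importances.sort_values(ascending=False))
```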


Formulating the business question

Diving into the data, we discovered that someone had done a good job splitting it. The training data consisted of ~1,700 unique products, while the test set included almost the same number of different products. This led us to frame the problem as follows: assuming a specific product, say a women’s bracelet, was sold x times in the first week after its launch and m1, m2, m3 times in the consecutive months, how much will a similar item sell?


Ignoring trends — since the data was relative to the first week after launch and didn’t include the Gregorian month, we neglected seasonal trends and paid less attention to social media trends. Basically, it came down to a question of product similarity. It’s worth mentioning that Mikel Bober-Irizar managed to estimate the relative reduction in sales during the test period, improving his score via a technique we call MLE — Maximum Leaderboard Estimation.

How to model product similarity

  1. Categories — the products were categorized by human experts into different types, such as leather goods and daily bags. These features were extremely valuable in understanding the similarity between different products’ sales. Not only do these features reflect how a human would passively perceive these items, they also actively affect sales, because they determine the item’s position on the website and the way it is presented to customers.
  2. Text — characterizing each product using its textual description by applying TF-IDF and then PCA over the resulting features (see the sketch after this list).
  3. Image vectors — the image features were extracted using Google’s Vision API. This API utilizes neural networks trained on very generic datasets like ImageNet — not at all useful for distinguishing between two luxury casual handbags. Hence, we ignored these features.
  4. Click-through rate — data from the website’s traffic included page views and item clicks. From these we extracted the average click-through rate and used it as a feature (also shown in the sketch below).
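A minimal sketch of items 2 and 4 above, with hypothetical product data (the real descriptions and traffic numbers were much richer). TruncatedSVD stands in for PCA here, since it works directly on sparse TF-IDF matrices.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Hypothetical product table for illustration.
products = pd.DataFrame({
    "description": [
        "monogram leather shoulder bag with gold chain",
        "silver bracelet with engraved charm",
        "canvas daily bag with leather trim",
        "gold pendant necklace with monogram flower",
    ],
    "page_views": [1200, 340, 980, 510],
    "clicks": [96, 41, 88, 47],
})

# Item 2: TF-IDF over the descriptions, then dimensionality reduction.
tfidf = TfidfVectorizer()
text_matrix = tfidf.fit_transform(products["description"])
svd = TruncatedSVD(n_components=3, random_state=0)  # tens of components in practice
text_features = svd.fit_transform(text_matrix)

# Item 4: average click-through rate from the traffic data.
products["ctr"] = products["clicks"] / products["page_views"]
print(products["ctr"])
```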

Models

Since the data included valuable categorical features, it was important to use a library that can leverage them efficiently. Unlike LightGBM and CatBoost, XGBoost doesn’t have a built-in mechanism for categorical features. CatBoost took too long to process, so we ended up running LightGBM with both GBDT and Random Forest as boosting methods. Neural networks failed to converge due to the relatively small amount of data.
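A minimal sketch of that setup, with hypothetical columns and synthetic data (the actual parameters were tuned during the competition). LightGBM consumes pandas “category” dtype columns natively, which is the mechanism referred to above.

```python
import lightgbm as lgb
import numpy as np
import pandas as pd

# Hypothetical frame; LightGBM auto-detects the 'category' dtype.
rng = np.random.default_rng(0)
n = 200
train = pd.DataFrame({
    "category": pd.Categorical(rng.choice(["bags", "jewelry", "shoes"], size=n)),
    "log_sales_7d": rng.random(n),
    "ctr": rng.random(n),
})
y = rng.random(n)

common = {"objective": "regression", "metric": "rmse", "verbosity": -1}

# Gradient-boosted decision trees.
gbdt = lgb.train({**common, "boosting_type": "gbdt"},
                 lgb.Dataset(train, label=y), num_boost_round=200)

# Random Forest mode: LightGBM requires bagging to be set explicitly.
rf = lgb.train({**common, "boosting_type": "rf",
                "bagging_freq": 1, "bagging_fraction": 0.8,
                "feature_fraction": 0.8},
               lgb.Dataset(train, label=y), num_boost_round=200)
```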

So, why do we say it’s overfitting?

Assuming we were consulting LVMH on how to increase sales, we would have applied a somewhat different approach. For example, LVMH could perform A/B testing on the most important features, like the categorization or textual descriptions of the products, to examine whether augmenting them would affect sales. Nevertheless, when striving to win a Kaggle competition, our only goal is to best fit our predictions to the test set.

Machines and environments

All code ran on a preemptible Google Cloud Deep Learning VM with an n1-highmem-8 (8 vCPUs, 52 GB RAM, $0.10 USD/hr) instance. The machine was located in the us-central region, so it had high availability during competition hours, since we were in Europe.

Experience

While Kaggle is an amazing platform for sharing knowledge and challenging yourself in very diverse and difficult competitions, Kaggle Days adds an important layer to this community: in-person interaction. We had a fantastic time meeting fellow data scientists from all around the world and learning that we all share a common dream: overfitting the test data.

The team working during the competition

Notebook code here on GitHub | Kaggle profile: Gad Benram
