Abstract: Gradient boosting is a powerful machine-learning technique that achieves state-of-the-art results
in a variety of practical tasks. For a number of years, it has remained the primary method for
learning problems with heterogeneous features, noisy data, and complex dependencies: web search,
recommendation systems, weather forecasting, and many others.
CatBoost (http://catboost.yandex) is a popular open-source gradient boosting library with a whole set of advantages:
1. CatBoost is able to incorporate categorical features in your data (like music genre or city) with no additional preprocessing.
2. CatBoost has the fastest GPU and multi-GPU training implementations among openly available gradient boosting libraries.
3. CatBoost predictions are 20-60 times faster than those of other open-source gradient boosting libraries, which makes it possible to use CatBoost for latency-critical tasks.
4. CatBoost has a variety of tools to analyze your model.
This workshop will feature a comprehensive tutorial on using the CatBoost library.
We will walk you through all the steps of building a good predictive model.
We will cover such topics as:
- Working with different types of features, numerical and categorical
- Working with imbalanced datasets
- Using cross-validation
- Understanding feature importances and explaining model predictions
- Tuning parameters of the model
- Speeding up the training
Bio: Stanislav Kirillov is a lead developer in the ML platforms group at Yandex, where he builds machine learning tools and develops and supports the infrastructure behind them.