Ensemble Models Demystified

Abstract: 

Deep Learning is all the rage, but ensemble models are still in the game. With libraries such as the recent and performant LightGBM, the Kaggle superstar XGBoost, or the classic Random Forest from scikit-learn, ensemble models are a must-have in a data scientist’s toolbox. They have been proven to provide good performance on a wide range of problems, and are usually simpler to tune and interpret than deep neural networks. This talk focuses on two of the most popular tree-based ensemble models: Random Forest and Gradient Boosting, which rely on bagging and boosting respectively. The talk will demonstrate how to apply these techniques to a real-world business problem in a live-coding session using the latest implementations available in the Python ecosystem.
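The two techniques the talk covers can be sketched in a few lines with scikit-learn. This is a minimal illustration, not material from the talk itself: the toy dataset, hyperparameters, and metric are illustrative assumptions.

```python
# Bagging (Random Forest) vs boosting (Gradient Boosting) on a toy
# classification problem, using scikit-learn's implementations.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic dataset purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: many independent trees, each fit on a bootstrap sample of
# the training data; predictions are averaged across trees.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

# Boosting: shallow trees added sequentially, each one correcting the
# errors of the ensemble built so far.
gb = GradientBoostingClassifier(n_estimators=200, random_state=0)
gb.fit(X_train, y_train)

print("Random Forest accuracy:    ", accuracy_score(y_test, rf.predict(X_test)))
print("Gradient Boosting accuracy:", accuracy_score(y_test, gb.predict(X_test)))
```

The same pattern carries over to XGBoost and LightGBM, whose scikit-learn-compatible estimators (`XGBClassifier`, `LGBMClassifier`) expose the same `fit`/`predict` interface.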

Bio: 

Kevin is a data scientist at Cambridge Spark, a company providing data science training and consulting. Prior to that, he led the development of data products for the energy sector and worked in the telecommunications industry at Qualcomm. Kevin has delivered data science and machine learning training courses to clients from industries including finance, engineering, and research, helping individuals leverage the latest techniques.
