Abstract: Neural networks dominate the modern machine learning landscape, but their training and performance remain sensitive to the empirical choice of training hyperparameters. Automated machine learning (AutoML) techniques aim to automate and optimize the population of training tasks during this empirical parameter search, in order to maximise performance under limited computational resources. Current AutoML efforts focus on configurations common to deep learning models, such as architecture, loss function, learning rate and optimization algorithm. Nevertheless, the training data and its quality are treated as constant, in contrast to their importance in determining the quality of the trained model. In this work, we propose an integrated approach to AutoML which includes, in addition to an off-the-shelf hyperparameter optimizer, parameterization over the metadata population selected for training.
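The core idea above can be illustrated with a toy sketch (this is not the speaker's implementation): a plain random search in which the fraction of the training metadata selected for training is sampled jointly with a conventional hyperparameter. The `evaluate` function is a hypothetical stand-in for "train on the selected data subset and return validation loss".

```python
import random

def evaluate(learning_rate, data_fraction):
    # Hypothetical stand-in objective: pretends the optimum lies at
    # lr = 0.01 with 80% of the training metadata selected.
    return (learning_rate - 0.01) ** 2 + (data_fraction - 0.8) ** 2

def search(n_trials=200, seed=0):
    """Random search over a joint space of model and data parameters."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        config = {
            "learning_rate": 10 ** rng.uniform(-4, -1),  # classic hyperparameter
            "data_fraction": rng.uniform(0.1, 1.0),      # data-population parameter
        }
        loss = evaluate(**config)
        if best is None or loss < best[0]:
            best = (loss, config)
    return best

loss, config = search()
```

In a real system the random sampler would be replaced by any off-the-shelf hyperparameter optimizer; the point is only that the data-selection knob enters the search space on equal footing with the usual training hyperparameters.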
Bio: Coming Soon!
Deep Learning Research Scientist | allegro.ai