Abstract: Deep Learning is an incredibly powerful technique that has found uses in a wide range of applications such as image object detection, speech translation, natural language processing, and time series modeling. However, training deep neural network models requires a tremendous amount of time, training data, and compute resources. A technique called transfer learning allows data scientists to increase their productivity dramatically by sharing neural network architectures and model weights. Reusing a pre-trained model on a different but related task enables training deep neural networks with comparatively little data. In this talk, you will learn the details of how transfer learning works and see demonstrations in both the financial and healthcare domains. We will discuss specific use cases and lessons learned that are applicable to many other industry sectors.
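The core idea the abstract describes can be sketched in a few lines: freeze the weights of a feature extractor (standing in for a network pre-trained on a large source task) and train only a new task-specific head on a small dataset. This is a minimal NumPy illustration, not the talk's actual demo; the frozen random projection here is an assumption that stands in for genuinely pre-trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor: a fixed random projection
# stands in for weights learned on a large source task (an assumption
# for illustration -- real transfer learning would load trained weights).
W_frozen = rng.normal(size=(4, 8))

def extract_features(x):
    return np.tanh(x @ W_frozen)  # frozen: never updated below

# Small target-task dataset (toy binary classification).
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# New head: only these parameters are trained -- the transfer-learning step.
w_head = np.zeros(8)
b_head = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(300):
    feats = extract_features(X)
    p = sigmoid(feats @ w_head + b_head)
    grad = p - y                       # gradient of the log loss
    w_head -= lr * feats.T @ grad / len(y)
    b_head -= lr * grad.mean()

acc = ((sigmoid(extract_features(X) @ w_head + b_head) > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

Because the expensive part of the network is reused rather than retrained, only a handful of head parameters need fitting, which is why transfer learning works with comparatively little data.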
Bio: Steve is a Data Science Solutions Architect on the IBM Analytics team covering Healthcare and State/Local Government. Steve works with clients to understand their big data and analytics goals and helps design data-driven solutions to fit their needs. When not engaged with clients, Steve can be found keeping up with the latest breakthroughs in data science. Steve earned the "Kaggle Competitions Expert" designation through his high performance in Kaggle machine learning competitions and received his Bachelor of Science in Computer Science and Applied Mathematics from the University at Albany.