Abstract: The recent success of deep structured learning in areas such as image and speech recognition, and in problem solving in general, has made many people wonder whether neural-network-based artificial intelligence has surpassed human intelligence in these specific applications. Others, however, argue that the more advanced network topologies that have emerged in recent years are merely more sophisticated statistical techniques for fitting functions. In this session we will review where deep learning currently stands, what its key limitations and challenges are, and how neuroscience and psychology can bring us closer to human-level intelligence.
Deep learning has made rapid progress in recent years, successfully solving a myriad of challenges in the area of artificial intelligence. Neural networks, although inspired by the human brain, do not attempt to model it, and we tend to think of them as mathematical abstractions. The complexity of recently proposed neural architectures keeps growing, and reasoning about a trained neural network is becoming extremely difficult.
Although deep networks have demonstrated a substantial advantage in solving problems like image, object, and speech recognition, the way they handle such problems is very different from how humans approach the same tasks. It has been shown that people build causal models to reason about outcomes, in contrast to neural networks, which treat most problems as pattern recognition. Humans also rely on intuitive physics to augment their decisions, and generally possess much better mechanisms for knowledge transfer.
This session will start with a definition of deep learning and show several examples where deep neural networks have performed exceptionally well. It will also highlight some areas where they continue to struggle, and will cover the most recent problems and challenges facing this family of machine learning methods.
The key question we will try to address in the session is whether we can enhance the capabilities of deep learning models by making them more biologically plausible. Neuroscience and psychology have informed research on artificial neural networks on more than one occasion, and have made fundamental contributions to the area of artificial intelligence. We will discuss how cognitive and neural inspiration can enhance current models, and we will present more biologically plausible learning rules. We will talk about incorporating causal models (e.g. evolving causal neural networks) and propositional logic (neural tensor networks) into neural networks to simulate prior knowledge. We will also cover current research and controversies in the field of intuitive physics, and describe networks that rely on intuitive physics to provide zero-shot (task-to-task) knowledge transfer.
Please note that this is not a product-oriented talk. Rather, it is a review of the current state of deep learning with an emphasis on specific problems, and the main goal of the talk is to offer a different perspective on neural networks – one that does not treat them as mere function approximators. Our hope is that by incorporating some of the techniques outlined above, deep learning will be able to solve a larger range of problems and develop human-like generalization abilities.
Keywords: deep learning, neuroscience, psychology
Bio: Nikolay Manchev is a Data Scientist in the EMEA CDS team at IBM and a research student at King's College London. He specializes in Machine Learning and Data Science. He is a speaker, blogger, author of numerous articles, and a member of the advisory board of the Spark Technology Center. For the last four years Nikolay has been working exclusively in the big data space, with a focus on custom machine learning algorithms and large-scale data processing. He holds an M.Sc. in Software Technologies and an M.Sc. in Data Science (City, University of London), and runs the London Machine Learning Study Group meetup.