Small to Big Data and Deep Learning


Deep learning has pushed the accuracy of image classification (among other applications) to near-human performance. The two most important factors behind this revolution are the availability of large amounts of data and the use of GPUs. Over the last few years, an approach following the principle of ‘more data, more GPUs’ has been widely applied. This approach, however, has several limitations. The first is that more GPUs are not always available: a classic scenario is the lack of an internet connection, which prohibits the use of cloud-based resources. The second is that, in some cases, there is not enough data for a given problem. In this talk I will review some of these limitations, exploring potential alternatives to the ‘more data, more GPUs’ viewpoint.


Eduard Vazquez received his PhD in Computer Vision from Universitat Autonoma de Barcelona (2010), where he lectured on Artificial Intelligence and Expert Systems. His main research topics cover the study of colour and perception, segmentation, medical imaging and object recognition. Eduard is currently Head of Research at Cortexica Vision Systems.
