Abstract: Deep learning is a watershed in software development, not only for what the software can do but for the public availability of powerful pre-trained models. While a gain for the community, this presents a challenge for managers and executives: how exactly to gauge progress and manage deep learning deployments. Where does one separate open-source development from in-house tuning and refinement? The seniority model also fails if both junior and senior developers use the same existing models. Auditing variable choices and tuning parameters is difficult and inexact; reading through code or judging by runtime performance is not always practical. We present a case study of effective team management and organization for deep learning in image processing. We examine the lessons learned and the resulting organic structure, in which research, engineering, and deployment are slightly siloed but allowed to communicate after a “threshold of frustration” has been reached. Such a model allows for agile development without losing control or quantifiability of the work done.
Bio: Vadim Pinskiy is the VP of Research and Development at Nanotronics, where he oversees product development, short-term R&D, and long-term development of AI platforms. Vadim completed his doctoral work in neuroscience, focused on mouse neuroanatomy using high-throughput whole-slide imaging and advanced tracing techniques. Prior to that, he completed a Master's in Biomedical Engineering from Cornell and a Bachelor's and Master's in Electrical and Biomedical Engineering from Stevens Institute of Technology. Vadim is interested in applying advanced AI methods and systems to solving practical problems in biological and product manufacturing.