Few Shot Learning for Leprosy


Motivation: Leprosy screening in resource-poor settings is a major challenge because of its patho-physiological presentation and challenges related to public health infrastructure. A mobile phone-based Leprosy screening tool would greatly help public health care workers during public screening drives.

Data: A non-interventional study was designed to collect images of Leprosy and Leprosy-like non-Leprosy lesions. In this study, 396 unique lesion images were captured across eight skin conditions, including Leprosy. Each lesion was captured in four imaging modes, which are a function of the imaging hardware and imaging optics such as optical resolution. The data is skewed and imbalanced, with very few images in the non-Leprosy classes and a large number from Leprosy. To mitigate the imbalance, the Leprosy lesions were sub-grouped into nine sub-classes based on lesion morphology, bringing the total number of classes to 16 and making the data resemble a long-tailed distribution.

Methods: Metric-based meta-learning with a Siamese network was explored, using the encoder architecture proposed in the Prototypical Networks paper together with a contrastive loss.

Experiments and Results: The fine-grained classes were split into train and test sets, with 10 classes in the train set and six in the test set. Accuracy on the train and test sets over 160 two-way one-shot tasks was 90.63% and 75%, respectively. A baseline nearest-neighbor estimator achieved around 24-28% on the same tasks.
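As a minimal sketch of the two ingredients above (all function names and toy embeddings here are hypothetical illustrations, not from the study): the contrastive loss pulls same-class pairs together and pushes different-class pairs apart up to a margin, and a two-way one-shot task is scored by assigning the query to the nearest support embedding — the same rule as the nearest-neighbor baseline, only applied in the learned embedding space.

```python
import numpy as np

def contrastive_loss(d, same, margin=1.0):
    """Contrastive loss for one Siamese pair.
    d: Euclidean distance between the two embeddings.
    same: 1 if the pair shares a class, else 0."""
    return same * d ** 2 + (1 - same) * max(margin - d, 0.0) ** 2

def one_shot_predict(query, supports):
    """Two-way one-shot task: pick the class of the nearest support embedding."""
    dists = [np.linalg.norm(query - s) for s in supports]
    return int(np.argmin(dists))

# Toy 2-D embeddings: one support per class, one query.
support = [np.array([0.0, 0.0]), np.array([3.0, 4.0])]
query = np.array([0.5, 0.5])
print(one_shot_predict(query, support))  # → 0 (closer to the first support)
```

In the actual setup the embeddings would come from the trained encoder rather than being fixed vectors, but the scoring rule for a two-way one-shot episode is the same.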

Contributions: In extreme low-data scenarios with few classes, a few-shot learning approach can be challenging to train and evaluate. In such scenarios of low data, few classes, and extreme skewness, fine-grained sub-grouping into finer classes can stretch the data across more classes, simulating a natural long-tailed distribution. For skin disease classification tasks this sub-grouping is also realistic, since each disease can have different morphological presentations, as Leprosy does in this case.


Ramakanteswara is a medtech innovator and Biodesigner with primary research expertise in medical robotics, computational medical imaging, machine learning/AI, and HFID (human factors and industrial design). He has developed medical technologies for therapy in both robotic surgery and interventional medicine, as well as for diagnosis using imaging, sensing, and molecular methods.
Ramakanteswara started his medtech journey with a low-cost vein visualizer at Stanford-India Biodesign at AIIMS. As a research fellow in robotic surgery at IRCAD, he went on to develop an augmented reality wearable device for sub-surface visualization. He also worked on a navigation system for hip replacement and a camera-projector technology for spinal surgery. Ramakanteswara has a total of eight years of experience in the medtech industry, starting with Bosch, where he built medical technologies in the diagnostics space as a strategy lead for R&D and innovation for their new venture in healthcare. He developed multiple devices in ophthalmic imaging and diagnostics and in molecular imaging and diagnostics, along with multiple AI algorithms for diabetic retinopathy and dry eye. He later moved to Boston Scientific, working on interventional devices in cardiology, GI endoscopic procedures, and urology, where he led as a specialist for clinical insights, technology, and innovation and helped set up computational biomechanics and human factors labs. He is presently working with Novartis as an Innovation Lead, building medical technologies that can serve as companion devices with drugs and digital technologies for drug development.
He is a trained physician with a medical degree (MBBS) from Andhra Medical College and an engineering degree (MD equivalent) from IIT Kharagpur. He has around 10 patents applied for or granted in the medtech space.

Open Data Science



