Abstract: Forget specialized hardware. Get GPU-class performance on your commodity CPUs with compound sparsity and sparsity-aware inference execution.
This talk will demonstrate the power of compound sparsity for model compression and inference speedup in the NLP and CV domains, with a special focus on the recently popular Large Language Models. The combination of structured and unstructured pruning (to 90%+ sparsity), quantization, and knowledge distillation can produce models that run an order of magnitude faster than their dense counterparts, without a noticeable drop in accuracy. Together, these techniques enable fast inference of modern neural networks on CPUs. Session participants will learn the theory behind compound sparsity, state-of-the-art techniques, and how to apply them in practice using the Neural Magic platform.
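To make the pruning component concrete, here is a minimal sketch of unstructured magnitude pruning, one ingredient of the compound-sparsity recipe described above. This is an illustrative example, not Neural Magic's implementation; the function name and parameters are placeholders.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weight tensor becomes zero."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
sparse_w = magnitude_prune(w, sparsity=0.9)
print(np.mean(sparse_w == 0))  # achieved sparsity, close to 0.9
```

In practice this is applied gradually during fine-tuning (with the pruned network distilled from the dense teacher), rather than in one shot as shown here.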
Bio: Konstantin Gulin is a Machine Learning Engineer at Neural Magic working on bringing sparse computation to the forefront of industry. With prior experience in applying machine learning to remote sensing (NASA) and space mission simulation (The Aerospace Corporation), he’s turned his focus to enabling effective model deployment in even the most constrained environments. He’s passionate about technology and ethical engineering and strives for the thoughtful advancement of AI.