Faculty of Mathematics, Physics and Informatics
Comenius University Bratislava

Doctoral colloquium - Filip Kerák (10.11.2025)

Monday 10.11.2025 at 13:10, Lecture room I 9


05.11.2025, 14:57
By: Damas Gruska

Filip Kerák:
Shrinking the giants - Sparse Neural Networks, a necessity of the future


Abstract:
How to live long enough to understand the ultimate question of life, the universe, and everything... Modern neural networks often comprise billions of parameters, resulting in substantial time and computational demands during both training and inference. While these large-scale models have achieved remarkable success across a wide range of tasks, their size and resource requirements pose significant challenges for broader adoption and sustainability.

A promising solution lies in reducing the size of these models. Strategies such as lowering precision or quantization can only be applied up to a point, so minimizing the number of parameters without sacrificing accuracy is the natural next step. One effective approach is to introduce sparsity during training. By selectively retaining only the most essential connections, sparse training techniques can produce models that are significantly smaller and faster, yet still highly capable.
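The abstract describes sparsity only at a high level. As an illustrative sketch (not the speaker's actual method), one common way to introduce it is magnitude pruning: zeroing out the smallest-magnitude weights so that only the strongest connections remain. The snippet below assumes NumPy; the function name and the 90% sparsity level are illustrative choices, not anything from the talk.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` (e.g. 0.9 = 90%) of the weights are removed."""
    k = int(weights.size * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    # keep only strictly larger magnitudes (ties may prune slightly more)
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
W_sparse = magnitude_prune(W, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(W_sparse) / W.size:.3f}")
```

In sparse training, a mask like this is typically reapplied or updated throughout training (e.g. iterative pruning), so the network can adapt to the removed connections rather than being pruned only once after the fact.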
