Abstract
Over the last decades, Deep Learning (DL) architectures have steadily improved state-of-the-art results across a large number of machine learning tasks. Although unknown to many practitioners, one important reason for this progress is the property of equivariance, which has been implicitly (sometimes even unknowingly) hard-coded into new architectures (e.g., Convolutional Neural Networks). In this talk we will cover the basics of equivariance and propose a method to detect and overcome the undesirable situations that arise when equivariant networks are applied to datasets that are not fully equivariant, with the aim of (hopefully) using computational resources more efficiently. Furthermore, we will open a brief discussion about the possible inclusion of equivariance in other DL architectures and in other machine learning tasks.