Modern machine learning models, in particular multi-layer neural networks, have achieved state-of-the-art performance in a wide range of applications. A major advantage of this new generation of models is their ability to automatically extract powerful representations of the input data; many of these representations are considered far better than manually constructed ones. From a theoretical point of view, however, what representations can be effectively learned by such models? Despite the great power of neural networks to represent a large variety of functions, it is unclear whether those representations can actually be learned by minimizing simple training objectives, especially since heuristic local search algorithms such as gradient descent are typically used for training. Moreover, representation learning depends heavily on the structure of the input data, yet we have only simplistic tools to measure, quantify, and understand data. In light of rapid progress and a rapidly shifting understanding, we believe the time is ripe for a workshop focusing on the theory of representation learning.
We have invited world-leading experts in machine learning and theory to give talks at the workshop. Each talk is 45 minutes long.
We will also hold two panel discussions, one at the end of each day of the workshop, in which the experts will discuss exciting future directions and challenges.
The workshop will be held in a hybrid mode: participants can attend on-site or join remotely via Zoom (Link).