Intelligent agents worthy of the name must continue to learn indefinitely and adapt to
change efficiently: for instance, they need to adapt to changes in the environment or to
changes in the computation-versus-accuracy trade-off. Even modern large-scale models,
such as large vision and language models, need to adapt constantly, learning new tasks
and consolidating the knowledge present in each new batch of data back into the model
so that future tasks can be learned more efficiently.
Unfortunately, our most advanced deep-learning networks do not work well in the
continual learning setting. To make matters worse, there is no good benchmark for
investigating how to efficiently adapt and consolidate knowledge in such a
setting. In this talk, I will discuss the continual learning problem in deep neural
networks. I will provide an overview of NEVIS, a new benchmark consisting of a
stream of highly challenging and diverse visual classification tasks, and I will then
discuss the preliminary results we obtained with a variety of baseline approaches.
NEVIS will be released in about a month, and it is meant to motivate researchers working
in continual learning, meta-learning, and AutoML to join forces and make strides
together towards the development of robust systems that become more capable and
efficient over time.