Dept of Statistics & Actuarial Science, HKU

Seminar by Dr. Jiajun SHEN, Research Engineering Team Lead, Google DeepMind London

Date: Wednesday, 5 October 2022
Time: 2:00 p.m. – 3:00 p.m.
Venue: Room 301, Run Run Shaw Building, HKU
Title: Towards large-scale continual learning

Intelligent agents worthy of the name must continue to learn indefinitely and adapt to change efficiently: for instance, they need to adapt to changes in the environment or to shifts in the computation-versus-accuracy trade-off. Even modern large-scale models, such as large vision and language models, need to adapt constantly: to learn new tasks, and to consolidate the knowledge present in each new batch of data back into the model so that future tasks can be learned more efficiently.

Unfortunately, our most advanced deep-learning networks do not work well in the continual learning setting. To make things worse, there is no good benchmark for investigating how to efficiently adapt and consolidate knowledge in such a setting. In this talk, I will discuss the continual learning problem in deep neural networks. I will provide an overview of NEVIS, a new benchmark consisting of a stream of very challenging and diverse visual classification tasks, and then present the preliminary results we obtained with a variety of baseline approaches.
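The evaluation protocol implied by a stream benchmark of this kind can be illustrated with a toy sketch: tasks arrive one at a time, a single shared model is updated on each, and after every update the model is re-tested on all tasks seen so far, so that forgetting of earlier tasks becomes measurable. The sketch below is purely illustrative and assumes nothing about the actual NEVIS API; the `Task` container and the nearest-prototype learner are hypothetical stand-ins for a real dataset stream and a real model.

```python
# Illustrative sketch of a continual-learning stream evaluation (NOT the
# NEVIS interface): train on tasks sequentially, then re-evaluate on every
# task seen so far. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    train: list  # list of (feature_vector, label) pairs
    test: list   # held-out (feature_vector, label) pairs

class NearestPrototype:
    """Toy learner: one running mean vector per label, shared across tasks."""
    def __init__(self):
        self.protos = {}  # label -> (sum_vector, count)

    def update(self, examples):
        for x, y in examples:
            s, n = self.protos.get(y, ([0.0] * len(x), 0))
            self.protos[y] = ([a + b for a, b in zip(s, x)], n + 1)

    def predict(self, x):
        def sq_dist(label):
            s, n = self.protos[label]
            return sum((xi - si / n) ** 2 for xi, si in zip(x, s))
        return min(self.protos, key=sq_dist)

def run_stream(tasks):
    """Sequentially train one model; after each task, score all seen tasks."""
    model = NearestPrototype()
    history = []
    for i, task in enumerate(tasks):
        model.update(task.train)
        accs = {
            seen.name: sum(model.predict(x) == y for x, y in seen.test)
            / len(seen.test)
            for seen in tasks[: i + 1]
        }
        history.append(accs)  # accuracy on every task seen so far
    return history
```

Comparing `history[k][name]` across steps `k` gives a simple forgetting measure for task `name`: if accuracy on an early task drops after training on later tasks, the model has forgotten it.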

NEVIS will be released in about a month. It is meant to motivate researchers working in continual learning, meta-learning and AutoML to join forces and make strides together towards robust systems that become more capable and efficient over time.

About the speaker

Jiajun Shen is a research engineering team lead at Google DeepMind London. He obtained his PhD in Computer Science from the University of Chicago, advised by Yali Amit. After his PhD, he joined Facebook as a research scientist at Facebook AI Research. In 2020, he joined TCL as Chief AI Scientist at TCL Research, during which time he also served as a committee member of the HKU-TCL Joint Research Centre for AI. He joined DeepMind in May and has since been working on continual learning.