Improving Adversarial Task Transfer in Lifelong Learning Models via Domain Adaptation

2021
Team Members:
  • Yuta Kobayashi
  • William Xu
Advisors:
  • Joshua Vogelstein
  • Jayanta Dey
  • Vivek Gopalakrishnan

Abstract:

Lifelong learning is a paradigm proposed to address catastrophic forgetting as new tasks are introduced and learned by a model. However, the effects of learning from and transferring to adversarial tasks are not well understood. We investigate several supervised and unsupervised domain adaptation approaches as potential defense mechanisms to alleviate the detrimental effects of adversarial tasks on prior and future task performance. We demonstrate lifelong learning in a simple environment using two simulated tasks: Gaussian XOR and Gaussian R-XOR (rotated XOR). In these simulated data scenarios, we find that coherent registration methods that preserve the local topological structure of the data improve prior task performance even in the presence of adversarial tasks.
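As a concrete illustration (a minimal sketch, not taken from the project code), the two simulated tasks could be generated as follows: Gaussian XOR places Gaussian clusters at the corners of a square and labels each point by the parity of its coordinate signs, while R-XOR applies a rotation to the same distribution, so a later R-XOR task conflicts with the decision boundary learned on XOR. The helper name, cluster standard deviation, and rotation angle below are illustrative assumptions.

    import numpy as np

    def generate_gaussian_xor(n_samples, angle=0.0, cluster_std=0.25, seed=None):
        """Hypothetical helper: sample a Gaussian XOR task, optionally rotated.

        angle=0 gives Gaussian XOR; a nonzero angle (e.g. np.pi / 4) gives an
        R-XOR variant whose labels conflict with the original XOR boundary.
        """
        rng = np.random.default_rng(seed)
        centers = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
        labels = np.array([0, 0, 1, 1])  # label = XOR of the coordinate signs

        # Draw each sample from one of the four Gaussian clusters.
        idx = rng.integers(len(centers), size=n_samples)
        X = centers[idx] + rng.normal(scale=cluster_std, size=(n_samples, 2))
        y = labels[idx]

        # Rotating the point cloud by `angle` yields the R-XOR variant.
        c, s = np.cos(angle), np.sin(angle)
        rotation = np.array([[c, -s], [s, c]])
        return X @ rotation.T, y

    # Prior task: Gaussian XOR; adversarial second task: Gaussian R-XOR (45 degrees here).
    X_xor, y_xor = generate_gaussian_xor(500, angle=0.0, seed=0)
    X_rxor, y_rxor = generate_gaussian_xor(500, angle=np.pi / 4, seed=1)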
