DIR-GAN: Generative Deep Learning for Image Registration with Applications to COPD

Team Members:
  • Deepan Islam
  • Andrew Gearhart, PhD
  • Jessica Dunleavey, PhD
  • Sarah Lee, MS


We present DIR-GAN, a fast deep learning framework for deformable, pairwise image registration. Whereas conventional registration methods rely on iterative optimization and therefore incur high runtimes, we treat registration as a function-estimation problem and train a Generative Adversarial Network (GAN) to predict the deformation vector field needed to align a pair of input scans. We parametrize this function with two adversarially trained convolutional neural networks and fit the model weights on a dataset of medical images. Given a new pair of scans, DIR-GAN estimates the appropriate deformation vector field both quickly (lower computational runtime) and accurately (higher Dice scores). Experiments on the FIRE (Fundus Image Registration) dataset confirm that DIR-GAN solves the two-dimensional registration task and provide proof of concept for the use of generative models in registration. We will extend DIR-GAN to the three-dimensional registration problem on 3D computed tomography (CT) scans from OASIS-3 and, if time permits, lung CT scans from COPD patients. The resulting three-dimensional registrations would then be integrated into a pipeline for quantifying bronchial wall thickening and other clinically relevant downstream tasks.
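The two core operations the abstract describes, warping a moving scan with a predicted deformation vector field and scoring alignment with Dice overlap, can be sketched in plain NumPy. This is a minimal illustration, not the DIR-GAN implementation: the deformation field below is hand-set rather than predicted by a GAN, nearest-neighbor sampling stands in for a differentiable spatial transformer, and all function names are illustrative.

```python
import numpy as np

def warp_image(image, dvf):
    """Warp a 2D image with a dense deformation vector field.

    dvf has shape (H, W, 2): per-pixel (dy, dx) displacements, as the
    generator in a DIR-GAN-style model would predict. Nearest-neighbor
    sampling keeps this sketch dependency-free; a trainable model would
    use bilinear (differentiable) sampling instead.
    """
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + dvf[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + dvf[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def dice_score(a, b):
    """Dice overlap between two binary masks, the accuracy metric cited above."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy example: a square shifted by 3 pixels; a deformation field that
# undoes the shift realigns it exactly.
fixed = np.zeros((32, 32), dtype=np.uint8)
fixed[8:16, 8:16] = 1
moving = np.roll(fixed, 3, axis=1)   # moving image, offset from the fixed image
dvf = np.zeros((32, 32, 2))
dvf[..., 1] = 3                      # displacement field that realigns it
warped = warp_image(moving, dvf)
print(dice_score(warped > 0, fixed > 0))  # perfect realignment gives Dice 1.0
```

In the actual framework this field would come from the generator network, with the adversarial discriminator judging how well the warped moving scan matches the fixed scan.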