Conditional Recurrent Flow: Conditional Generation of Longitudinal Samples with Applications to Neuroimaging.

Hwang, S., Z. Tao, W. Kim, and V. Singh. "Conditional Recurrent Flow: Conditional Generation of Longitudinal Samples with Applications to Neuroimaging." Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019, pp. 10691-10700.

We develop a conditional generative model for longitudinal image datasets based on sequential invertible neural networks. Longitudinal image acquisitions are common in various scientific and biomedical studies, where each image sequence may also be accompanied by various secondary (fixed or temporally dependent) measurements. The key goal is not only to estimate the parameters of a deep generative model for the given longitudinal data, but also to enable evaluation of how the temporal course of the generated longitudinal samples is influenced by induced changes in the (secondary) temporal measurements (or events). Our proposed formulation incorporates recurrent subnetworks and temporal context gating, which provide smooth transitions across the generated temporal sequence and can be easily informed or modulated by secondary temporal conditioning variables. We show that the formulation works well despite the smaller sample sizes common in these applications. Our model is validated on two video datasets and a longitudinal Alzheimer's disease (AD) dataset through both quantitative and qualitative evaluations of the generated samples. Further, using our generated longitudinal image samples, we show that we can capture pathological progressions in the brain that are consistent with the existing literature and can facilitate various types of downstream statistical analysis.
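To make the "invertible network conditioned on secondary variables" idea concrete, the following is a minimal NumPy sketch of a conditional affine coupling layer, the standard invertible building block such flows are composed of. All names, dimensions, and the toy conditioner network below are illustrative assumptions, not the paper's actual architecture (which additionally uses recurrent subnetworks and temporal context gating):

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, H = 4, 2, 8  # toy data dim, conditioning dim, conditioner hidden width

# Random weights for a small conditioner MLP (illustrative only).
W1 = rng.normal(size=(H, D // 2 + C)) * 0.1
b1 = np.zeros(H)
W2 = rng.normal(size=(D, H)) * 0.1  # emits scale s and shift t (D//2 each)
b2 = np.zeros(D)

def _scale_shift(x1, cond):
    # Map (unchanged half, conditioning variables) -> (log-scale, shift).
    h = np.tanh(W1 @ np.concatenate([x1, cond]) + b1)
    out = W2 @ h + b2
    return out[: D // 2], out[D // 2 :]

def coupling_forward(x, cond):
    # Pass x1 through untouched; transform x2 elementwise, conditioned on x1 and cond.
    x1, x2 = x[: D // 2], x[D // 2 :]
    s, t = _scale_shift(x1, cond)
    return np.concatenate([x1, x2 * np.exp(s) + t])

def coupling_inverse(y, cond):
    # Exact inverse: recover s, t from the untouched half, then undo the affine map.
    y1, y2 = y[: D // 2], y[D // 2 :]
    s, t = _scale_shift(y1, cond)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.normal(size=D)
cond = np.array([1.0, 0.0])  # e.g. a secondary measurement at one time step
y = coupling_forward(x, cond)
assert np.allclose(coupling_inverse(y, cond), x)  # invertible by construction
```

Because the layer is invertible for any value of `cond`, one can alter the conditioning variables (e.g. a hypothesized change in a temporal measurement) and regenerate the sample sequence, which is the kind of counterfactual evaluation the abstract describes.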

DOI: 10.1109/iccv.2019.01079

PubMed: 32405276