On the DCI Framework for Evaluating Disentangled Representations: Extensions and Connections to Identifiability

Abstract

In representation learning, a common approach is to seek representations which disentangle the underlying factors of variation. Eastwood and Williams (2018) proposed three metrics for quantifying the quality of such disentangled representations: disentanglement (D), completeness (C) and informativeness (I). We provide several extensions of this DCI framework by considering the functional capacity required to use a representation. In particular, we establish links to identifiability, point out how D and C can be computed for black-box predictors, and introduce two new measures of representation quality: explicitness (E), derived from a representation’s loss-capacity curve, and size (S) relative to the ground truth. We illustrate the relevance of our extensions on the MPI3D-Real dataset.
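The D and C scores from the abstract are defined in Eastwood and Williams (2018) over a feature-importance matrix relating learned codes to ground-truth factors. As a minimal sketch (not the authors' reference implementation; the unweighted averaging of per-factor completeness is a simplification), they can be computed as follows:

```python
import numpy as np

def entropy(p, base):
    """Shannon entropy of a probability vector, in the given base."""
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(base)

def dci_scores(R):
    """Disentanglement (D) and completeness (C) from an importance
    matrix R of shape (num_codes, num_factors), where R[i, j] is the
    importance of code i for predicting factor j."""
    num_codes, num_factors = R.shape
    # Disentanglement: each code should be important for one factor only.
    P = R / R.sum(axis=1, keepdims=True)          # normalise rows
    D_i = 1.0 - np.array([entropy(P[i], num_factors)
                          for i in range(num_codes)])
    rho = R.sum(axis=1) / R.sum()                 # weight codes by total importance
    D = float((rho * D_i).sum())
    # Completeness: each factor should be captured by one code only.
    Pt = R / R.sum(axis=0, keepdims=True)         # normalise columns
    C_j = 1.0 - np.array([entropy(Pt[:, j], num_codes)
                          for j in range(num_factors)])
    C = float(C_j.mean())                         # simplification: unweighted mean
    return D, C
```

A one-to-one code-factor correspondence (e.g. `R = np.eye(4)`) gives D = C = 1, while a uniform importance matrix gives D = C = 0.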

Publication
UAI 2022 Workshop on Causal Representation Learning
Armin Kekić