Disentangling to Cluster

Gaussian Mixture Variational Ladder Autoencoders

Abstract

In clustering we normally output one cluster variable for each datapoint. However, it is not necessarily the case that there is only one way to partition a given dataset into cluster components. For example, one could cluster objects by their colour, or by their type. Different attributes form a hierarchy, and we may wish to cluster in any of them. By disentangling the learnt latent representations of a dataset into different layers for different attributes, we can then cluster in those latent spaces. We call this disentangled clustering. Extending Variational Ladder Autoencoders (Zhao et al., 2017), we propose a clustering algorithm, VLAC, that outperforms a Gaussian Mixture DGM in cluster accuracy over digit identity on the test set of SVHN. We also demonstrate learning clusters jointly over numerous layers of the hierarchy of latent variables for the data, and show component-wise generation from this hierarchical model.
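To illustrate the core idea of clustering separately in each disentangled latent layer, here is a minimal sketch (not the paper's VLAC model): it assumes we already have latent codes from two hypothetical layers, one capturing "colour" and one capturing "type", generated here as synthetic Gaussian blobs, and fits an independent Gaussian mixture to each layer, yielding a different partition of the same datapoints per attribute.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical disentangled latents: one 2-D layer per attribute.
# "Colour" layer has 3 well-separated groups; "type" layer has 2.
z_colour = np.concatenate([rng.normal(loc=c, scale=0.3, size=(50, 2))
                           for c in ([0, 0], [3, 0], [0, 3])])
z_type = np.concatenate([rng.normal(loc=c, scale=0.3, size=(75, 2))
                         for c in ([0, 0], [4, 4])])

# Fitting a separate Gaussian mixture per layer gives one clustering
# per attribute over the same datapoints.
labels_colour = GaussianMixture(n_components=3, random_state=0).fit_predict(z_colour)
labels_type = GaussianMixture(n_components=2, random_state=0).fit_predict(z_type)

print(len(set(labels_colour)), len(set(labels_type)))  # distinct partitions: 3 vs 2 clusters
```

In the paper's setting the latent codes would come from the layers of a trained Variational Ladder Autoencoder rather than synthetic blobs, and the mixture would be part of the generative model rather than fitted post hoc.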

Citation information

Willetts, M., Roberts, S., Holmes, C. (2019) Disentangling to Cluster: Gaussian Mixture Variational Ladder Autoencoders. NeurIPS Bayesian Deep Learning Workshop 2019
