While latent class models of various types arise in many statistical applications, it is often difficult to establish the identifiability of their parameters. Focusing on models in which some of the observed variables are conditionally independent given the hidden ones, we demonstrate a general approach to establishing identifiability that utilizes algebraic arguments. A theorem of J. Kruskal for a simple latent class model with a finite state space lies at the core of our results, though we apply it to a diverse set of models. These include mixtures of both finite and non-parametric product distributions, hidden Markov models, and random graph models, and they lead to a number of new results and improvements to old ones.
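Kruskal's theorem hinges on the notion of the Kruskal rank of a matrix: the largest k such that every set of k columns is linearly independent. As a minimal self-contained sketch (the function names and example matrices below are our own illustration, not taken from the results above), the Kruskal rank of a small matrix can be computed by brute force over column subsets:

```python
from fractions import Fraction
from itertools import combinations


def matrix_rank(cols):
    """Rank of the matrix whose columns are `cols`, via exact
    Gaussian elimination over the rationals."""
    rows = [list(map(Fraction, r)) for r in zip(*cols)]
    rank = 0
    for col in range(len(cols)):
        # Find a pivot row for this column below the current rank.
        pivot = next((r for r in range(rank, len(rows))
                      if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        pr = rows[rank]
        # Eliminate this column from every other row.
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                f = rows[r][col] / pr[col]
                rows[r] = [a - f * b for a, b in zip(rows[r], pr)]
        rank += 1
    return rank


def kruskal_rank(cols):
    """Largest k such that EVERY set of k columns is independent."""
    k = 0
    for size in range(1, len(cols) + 1):
        if all(matrix_rank(list(sub)) == size
               for sub in combinations(cols, size)):
            k = size
        else:
            break
    return k


# Every 3 of these 4 columns are independent, so the Kruskal rank is 3.
full = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]

# A repeated column caps the Kruskal rank at 1, though the rank is 2.
degenerate = [(1, 0), (0, 1), (1, 0)]
```

This brute-force check is exponential in the number of columns and is meant only to make the definition concrete on small examples.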
In the parametric setting we argue that the classical definition of identifiability is too strong, and should be replaced by the concept of generic identifiability. Generic identifiability means that the set of non-identifiable parameters has measure zero, so that the model remains useful for inference. In particular, this sheds light on the properties of finite mixtures of Bernoulli products, which have been used for decades despite being known to be non-identifiable. In the non-parametric setting, we again obtain identifiability only when certain restrictions are placed on the distributions being mixed, and we explicitly describe these restrictions.
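The strict non-identifiability alluded to above can be seen already with a single binary observed variable. The toy sketch below (the parameter values are our own illustration) exhibits two distinct parameter vectors for a two-component Bernoulli mixture that induce exactly the same observed distribution, which is why both many observed variables and a generic notion of identifiability are needed:

```python
def mixture_pmf(w, p, q):
    """Observed law of one binary variable under a 2-component mixture:
    component 1 (weight w) is Bernoulli(p), component 2 is Bernoulli(q)."""
    p1 = w * p + (1 - w) * q   # P(X = 1)
    return (1 - p1, p1)


# Two clearly different parameter vectors (w, p, q)...
theta_a = (0.50, 0.2, 0.6)   # P(X=1) = 0.50*0.2 + 0.50*0.6 = 0.4
theta_b = (0.25, 0.1, 0.5)   # P(X=1) = 0.25*0.1 + 0.75*0.5 = 0.4

# ...induce the same observed distribution, so no amount of data
# from this model can distinguish theta_a from theta_b.
```

With only one observable there are two free observed probabilities but three mixture parameters, so such collisions are unavoidable; the results summarized above recover identifiability (generically) once enough conditionally independent observed variables are available.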