Monday, 27 February 2012 at 15:00 - UM2 - Bât 09 - Salle 331 (3rd floor) - Gilles Celeux
Mixture models can be used to address the simultaneous clustering of a set of objects and a set of variables. The latent block model defines a distribution for each combination of an object label and a variable label, and the data are assumed to be independent conditionally on the object and variable labels. Since the joint distribution of the labels given the observed data cannot be factorized numerically, the E-step of the EM algorithm is intractable and must be approximated. In the binary case, two algorithms have been proposed: the first relies on a variational approximation, the second on a Gibbs sampler for the estimation step. Both algorithms tend to empty some classes. Two Bayesian versions are presented here to attenuate this problem, and the influence of the prior distribution on the classification is studied.
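To make the variational approximation concrete, the following is a minimal sketch of a binary latent block model fitted by alternating variational updates of the factorized posteriors over row and column labels. The function name, initialization scheme, and numerical details are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

def lbm_variational(X, g, m, n_iter=50, seed=0):
    """Variational EM sketch for a binary latent block model.

    X : (n, d) binary matrix; g row clusters, m column clusters.
    Returns soft row memberships s (n, g), soft column memberships
    t (d, m), and Bernoulli block parameters alpha (g, m).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    eps = 1e-10
    # Random soft initialization of the factorized posteriors q(z) and q(w).
    s = rng.dirichlet(np.ones(g), size=n)   # s[i, k] ~ q(z_i = k)
    t = rng.dirichlet(np.ones(m), size=d)   # t[j, l] ~ q(w_j = l)
    for _ in range(n_iter):
        # M-step: mixing proportions and Bernoulli block parameters.
        pi = s.mean(axis=0)                 # row-cluster proportions
        rho = t.mean(axis=0)                # column-cluster proportions
        denom = np.outer(s.sum(axis=0), t.sum(axis=0))  # expected block sizes
        alpha = np.clip((s.T @ X @ t) / np.maximum(denom, eps), eps, 1 - eps)
        logA, logB = np.log(alpha), np.log1p(-alpha)
        # Variational E-step: update q(z) with q(w) fixed, then the reverse.
        ls = np.log(pi) + X @ t @ logA.T + (1 - X) @ t @ logB.T
        s = np.exp(ls - ls.max(axis=1, keepdims=True))
        s /= s.sum(axis=1, keepdims=True)
        lt = np.log(rho) + X.T @ s @ logA + (1 - X.T) @ s @ logB
        t = np.exp(lt - lt.max(axis=1, keepdims=True))
        t /= t.sum(axis=1, keepdims=True)
    return s, t, alpha
```

The class-emptying behaviour mentioned in the abstract shows up here when a column of `s` (or `t`) collapses to zero mass; the Bayesian versions discussed in the talk counter this by placing prior distributions on the proportions and block parameters.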