
I just finished the Bayesian Analysis in Python book by Osvaldo Martin (a great book for understanding Bayesian concepts and some fancy NumPy indexing).

I really want to extend my understanding to Bayesian mixture models for unsupervised clustering of samples. All of my Google searches have led me to Austin Rochford's tutorial, which is really informative. I understand what is happening, but I am unclear on how it can be adapted to clustering (especially using multiple attributes for the cluster assignments, but that is a different topic).

I understand how to assign the priors for the Dirichlet distribution, but I can't figure out how to get the clusters in PyMC3. The majority of the mus converge to the centroids (i.e., the means of the distributions I sampled from), but they remain separate components. I thought about setting a cutoff for the weights (w in the model), but that doesn't seem to work the way I imagined, since multiple components with slightly different mu values converge on the same centroid.

How can I extract the clusters (centroids) from this PyMC3 model? I gave it a maximum of 15 components and want it to converge to 3. The mus seem to be in the right locations, but the weights are messed up because they get distributed among the other components, so I can't use a weight threshold (unless I merge them, but I don't think that's how it is normally done).

import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import multiprocessing
import seaborn as sns
import pandas as pd
import theano.tensor as tt
%matplotlib inline

# Clip at 15 components
K = 15

# Create mixture population
centroids = [0, 10, 50]
weights = [(2/5),(2/5),(1/5)]

mix_3 = np.concatenate([np.random.normal(loc=centroids[0], size=int(150*weights[0])), # 60 samples
                        np.random.normal(loc=centroids[1], size=int(150*weights[1])), # 60 samples
                        np.random.normal(loc=centroids[2], size=int(150*weights[2]))])# 30 samples
n = mix_3.size


# Create and fit model
with pm.Model() as Mod_dir:
    alpha = pm.Gamma('alpha', 1., 1.)

    beta = pm.Beta('beta', 1., alpha, shape=K)

    w = pm.Deterministic('w', beta * tt.concatenate([[1], tt.extra_ops.cumprod(1 - beta)[:-1]]))

    component = pm.Categorical('component', w, shape=n)

    tau = pm.Gamma("tau", 1.0, 1.0, shape=K)

    mu = pm.Normal('mu', 0, tau=tau, shape=K)

    obs = pm.Normal('obs',
                    mu[component], 
                    tau=tau[component],
                    observed=mix_3)

    step1 = pm.Metropolis(vars=[alpha, beta, w, tau, mu, obs])
#     step2 = pm.CategoricalGibbsMetropolis(vars=[component])
    step2 = pm.ElemwiseCategorical([component], np.arange(K)) # Much, much faster than the above

    tr = pm.sample(10000, step=[step1, step2], njobs=multiprocessing.cpu_count())

# burn-in of 1000, thin by taking every 5th sample
pm.traceplot(tr[1000::5])


Similar questions:

- https://stats.stackexchange.com/questions/120209/pymc3-dirichlet-distribution (for regression, not clustering)
- https://stats.stackexchange.com/questions/108251/image-clustering-and-dirichlet-process (theory on the Dirichlet process)
- https://stats.stackexchange.com/questions/116311/draw-a-multinomial-distribution-from-a-dirichlet-distribution (explains the DP)
- "Dirichlet process in PyMC 3" directs me to Austin Rochford's tutorial above

  • Edward might have examples using variational inference for Dirichlet process mixtures. http://edwardlib.org/ – rafaelvalle Jan 11 '17 at 00:56
  • I'll check it out and see if I can figure out how to port it! Thanks. I've never heard of Edward but it seems cool so far. – O.rka Jan 12 '17 at 17:50
  • Is this what you're looking for? https://pymc-devs.github.io/pymc3/notebooks/dp_mix.html – rafaelvalle Jan 18 '17 at 19:29
  • @rafaelvalle That is the Austin Rochford tutorial I linked above. It explains how to use the Dirichlet process, but not how to use it for clustering. I tried working through the tutorial step by step and tweaking the last step to get the # of clusters, but I couldn't get it to work. – O.rka Jan 18 '17 at 20:58

1 Answer


Using a couple of new-ish additions to pymc3 will help make this clear. I think I updated the Dirichlet Process example after they were added, but it seems to have been reverted to the old version during a documentation cleanup; I will fix that soon.

One of the difficulties is that the data you have generated is much more dispersed than the priors on the component means can accommodate; if you standardize your data, the samples should mix much more quickly.

The second is that pymc3 now supports mixture distributions where the indicator variable component has been marginalized out. These marginal mixture distributions will help accelerate mixing and allow you to use NUTS (initialized with ADVI).

Finally, when these truncated versions of infinite models run into computational problems, it is often useful to increase the number of potential components. I have found that K = 30 works better for this model than K = 15.

The following code implements these changes and shows how the "active" component means can be extracted.

from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
from theano import tensor as T

blue = sns.color_palette()[0]

np.random.seed(462233) # from random.org

N = 150

CENTROIDS = np.array([0, 10, 50])
WEIGHTS = np.array([0.4, 0.4, 0.2])

x = np.random.normal(CENTROIDS[np.random.choice(3, size=N, p=WEIGHTS)], size=N)
x_std = (x - x.mean()) / x.std()

fig, ax = plt.subplots(figsize=(8, 6))

ax.hist(x_std, bins=30);

Standardized data

K = 30

with pm.Model() as model:
    alpha = pm.Gamma('alpha', 1., 1.)
    beta = pm.Beta('beta', 1., alpha, shape=K)
    w = pm.Deterministic('w', beta * T.concatenate([[1], T.extra_ops.cumprod(1 - beta)[:-1]]))

    tau = pm.Gamma('tau', 1., 1., shape=K)
    lambda_ = pm.Uniform('lambda', 0, 5, shape=K)
    mu = pm.Normal('mu', 0, tau=lambda_ * tau, shape=K)
    obs = pm.NormalMixture('obs', w, mu, tau=lambda_ * tau,
                           observed=x_std)

with model:
    trace = pm.sample(2000, n_init=100000)

fig, ax = plt.subplots(figsize=(8, 6))

ax.bar(np.arange(K) - 0.4, trace['w'].mean(axis=0));

We see that three components appear to be used, and that their weights are reasonably close to the true values.

Mixture weights
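As a rough way to turn the weight plot into a component count (a sketch of the weight-cutoff idea discussed in the comments below, not part of the original answer), one could threshold the posterior mean weights; the 0.01 cutoff here is an arbitrary choice:

# Sketch: treat components whose posterior mean weight exceeds a cutoff as "active"
w_post = trace['w'].mean(axis=0)
active = np.flatnonzero(w_post > 0.01)
print(active)            # indices of the active components
print(w_post[active])    # their posterior mean weights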

Finally, we see that the posterior expected means of these three components match the true (standardized) means fairly well.

trace['mu'].mean(axis=0)[:3]

array([-0.73763891, -0.17284594, 2.10423978])

(CENTROIDS - x.mean()) / x.std()

array([-0.73017789, -0.16765707, 2.0824262 ])
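If you want the centroids back on the original data scale (a small sketch, not part of the original answer), simply invert the standardization:

mu_orig = trace['mu'].mean(axis=0)[:3] * x.std() + x.mean()
mu_orig  # should be close to the true CENTROIDS of [0, 10, 50]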

  • Wow, this is incredible. I haven't seen `pm.NormalMixture` yet but I like it! It's interesting how much better this performs with `tau*lambda_` than just `tau`. I'll need to brush up on my statistics a bit. One last question: if you didn't know there were 3 clusters, would you just set a cutoff for the weights (e.g. anything over 1e-3 is a cluster)? If so, do you recommend a good rule of thumb for determining a cutoff? Thank you again, this is very useful. – O.rka Jan 31 '17 at 22:36
  • That is probably what I would do; unfortunately, I don't really have a good rule of thumb. – Austin Rochford Feb 01 '17 at 00:42
  • Also, the [`pymc3` documentation](http://pymc-devs.github.io/pymc3/notebooks/dp_mix.html) has been updated with these changes. – Austin Rochford Feb 01 '17 at 00:48
  • I found [Marginalized Gaussian Mixture Model](https://pymc-devs.github.io/pymc3/notebooks/marginalized_gaussian_mixture_model.html). Is the biggest difference between this and the infinite Dirichlet process that you use the stick-breaking process to aid in finding the total number of clusters? – O.rka Feb 01 '17 at 00:57
  • Yes, that is the only real difference. – Austin Rochford Feb 01 '17 at 00:59
  • When this is used for clustering, would you do something like `stats.norm(params).pdf(x)` and then find the component with the largest likelihood? – O.rka Feb 01 '17 at 22:35
  • I'm not entirely sure what you're trying to do, but finding the component with the largest _weight_ should be possible using `trace['w']`. – Austin Rochford Feb 02 '17 at 00:10
  • Apologies, I think I explained it poorly. I've often seen this method used as an unsupervised clustering technique. Since the Dirichlet process with Gaussian components finds the best number of components to model a mixture distribution, I was wondering about the best way to extend this approach to categorizing/clustering input data into the components that the Dirichlet process discovers. My first thought was to look at the likelihood that each data point lands in a particular component distribution. – O.rka Feb 02 '17 at 03:21
  • I would use Bayes' theorem here: P(w | x) = P(x | w) P(w) / P(x). All three of the terms from the right hand side should be (approximately) calculable using the posterior samples, I think. – Austin Rochford Feb 02 '17 at 12:32
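To make that last comment concrete, here is a rough sketch (not code from the thread) of one way to approximate P(component | x) from the fitted model above: plug the posterior means of the weights, means, and precisions into the mixture density and normalize over components. Averaging the responsibilities over individual posterior draws would be more faithful than using posterior means, but the idea is the same:

from scipy import stats

# Posterior means of the mixture parameters from the trace above
w_post = trace['w'].mean(axis=0)                                      # shape (K,)
mu_post = trace['mu'].mean(axis=0)                                    # shape (K,)
sd_post = 1 / np.sqrt((trace['lambda'] * trace['tau']).mean(axis=0))  # shape (K,)

# Likelihood of each standardized data point under each component: shape (n, K)
like = stats.norm.pdf(x_std[:, None], loc=mu_post, scale=sd_post)

# P(component | x) is proportional to w_k * N(x | mu_k, sd_k); normalize over components
resp = w_post * like
resp /= resp.sum(axis=1, keepdims=True)

clusters = resp.argmax(axis=1)  # hard assignment: most probable component per point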