As described in our user guide for Latent Dirichlet Allocation (LDA), Hivemall enables you to apply clustering to your data based on a topic modeling technique. While LDA is one of the most popular techniques, there is another approach named Probabilistic Latent Semantic Analysis (pLSA). In fact, pLSA is the predecessor of LDA, but it has an advantage in terms of running time:
- T. Hofmann. Probabilistic Latent Semantic Indexing. SIGIR 1999, pp. 50-57.
- T. Hofmann. Probabilistic Latent Semantic Analysis. UAI 1999, pp. 289-296.
In order to efficiently handle large-scale data, our pLSA implementation is based on the following incremental variant of the original pLSA algorithm:
- H. Wu, et al. Incremental Probabilistic Latent Semantic Analysis for Automatic Question Recommendation. RecSys 2008, pp. 99-106.
This feature is supported in Hivemall v0.5-rc.1 or later.
Basically, you can use our pLSA functions in a similar way to LDA. In particular, we have two pLSA functions, `train_plsa()` and `plsa_predict()`, which can be used almost interchangeably with `train_lda()` and `lda_predict()`. Thus, reading our user guide for LDA should be helpful before trying pLSA.
In short, consider the sample `docs` table we introduced in the LDA tutorial:
|1||"Fruits and vegetables are healthy."|
|2||"I like apples, oranges, and avocados. I do not like the flu or colds."|
A pLSA model can then be built and stored as a `plsa_model` table as follows:
```sql
create table plsa_model as
with word_counts as (
  select
    docid,
    feature(word, count(word)) as f
  from
    docs t1
    lateral view explode(tokenize(doc, true)) t2 as word
  where
    not is_stopword(word)
  group by
    docid, word
),
input as (
  select
    docid,
    collect_list(f) as features
  from
    word_counts
  group by
    docid
)
select
  train_plsa(features, '-topics 2 -eps 0.00001 -iter 2048 -alpha 0.01') as (label, word, prob)
from
  input
;
```
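Once the model is built, you can quickly check which words characterize each topic by aggregating the model rows per topic (the same aggregation appears as a subquery in the prediction step below):

```sql
-- List each topic's words together with their probabilities.
select
  label,
  collect_list(feature(word, prob)) as words
from
  plsa_model
group by
  label
;
```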
And prediction can be done as:
```sql
with test as (
  select
    docid,
    word,
    count(word) as value
  from
    docs t1
    LATERAL VIEW explode(tokenize(doc, true)) t2 as word
  where
    not is_stopword(word)
  group by
    docid, word
),
topic as (
  select
    t.docid,
    plsa_predict(t.word, t.value, m.label, m.prob, '-topics 2') as probabilities
  from
    test t
    JOIN plsa_model m ON (t.word = m.word)
  group by
    t.docid
)
select
  t.docid,
  t.probabilities,
  t.probabilities[0].label,
  m.words -- topic each document should be assigned
from
  topic t
  JOIN (
    select
      label,
      collect_list(feature(word, prob)) as words
    from
      plsa_model
    group by
      label
  ) m ON (t.probabilities[0].label = m.label)
;
```
## Difference with LDA
The main advantage of pLSA is its efficiency. Since its mathematical formulation and optimization logic are much simpler than those of LDA, pLSA generally requires much shorter running time.
In terms of accuracy, however, LDA could be better than pLSA. For example, the word `like`, which appears twice in the sample document #2 above, gets relatively large probabilities in both topic #1 and topic #2, even though document #1 does not contain the word at all. By contrast, the LDA results (i.e., lambda values) are more clearly separated, as shown in the LDA page. Thus, a pLSA model is more likely to be biased.
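You can check this on your own model by looking up the word directly in the `plsa_model` table built above; a minimal sketch:

```sql
-- Inspect the per-topic probabilities of the word 'like' in the trained model.
select
  label,
  word,
  prob
from
  plsa_model
where
  word = 'like'
order by
  label
;
```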
For the reasons mentioned above, we recommend that you use LDA first. If you then encounter problems such as slow running time or undesirable clustering results, try the alternative pLSA approach.
For training pLSA, we set a hyper-parameter `alpha` in the above example:
```sql
SELECT train_plsa(features, '-topics 2 -eps 0.00001 -iter 2048 -alpha 0.01')
```
This value controls how much each iterative model update is affected by the previous results. From an algorithmic point of view, training pLSA (and LDA) iteratively repeats certain operations and updates the target values (i.e., the probabilities eventually returned by `train_plsa()`). This iterative procedure gradually makes the probabilities more accurate. What `alpha` does is control how much the probabilities are allowed to change in each step.
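As a rough intuition only (this is a schematic illustration, not necessarily the exact update rule of Hivemall's implementation), you can think of `alpha` as a weight that blends the previous estimate of each probability with the estimate computed from the current mini-batch:

$$
P_{\mathrm{new}}(w \mid z) \approx \frac{\alpha \, P_{\mathrm{old}}(w \mid z) + \hat{P}(w \mid z)}{\alpha + 1}
$$

where $\hat{P}(w \mid z)$ denotes the estimate obtained from the current mini-batch alone. Under this reading, a larger `alpha` keeps the model closer to its previous state, which is also why the overfitting workaround below suggests increasing it.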
Importantly, pLSA is likely to overfit a single mini-batch. As a result, some probabilities can take particularly bad values (e.g., the perplexity becomes `Infinity`), and `train_plsa()` sometimes fails with an exception like:

```
Perplexity would be Infinity. Try different mini-batch size `-s`, larger `-delta` and/or larger `-alpha`.
```
In that case, you need to try different hyper-parameters to avoid overfitting as the exception suggests.
For instance, the 20 newsgroups dataset, which consists of 10906 realistic documents, empirically requires the following options:
```sql
SELECT train_plsa(features, '-topics 20 -iter 10 -s 128 -delta 0.01 -alpha 512 -eps 0.1')
```
Notice that `alpha` here is much larger than the `0.01` used for the dummy data above. Keep in mind that an appropriate value of `alpha` highly depends on the number of documents and the mini-batch size.