Nonlinear Structural Equation Model Guided Gaussian Mixture Hierarchical Topic Modeling

HeGang Chen, Pengbo Mao, Yuyin Lu, Yanghui Rao

Track: Interpretability and Analysis of Models for NLP (Main Conference, Poster Paper)

Session 1: Interpretability and Analysis of Models for NLP (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 10, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 10, Session 1 (15:00-16:30 UTC)
Keywords: topic modeling
Abstract: Hierarchical topic models, which can extract semantically meaningful topics from a text corpus in an unsupervised manner and automatically organise them into a topic hierarchy, have been widely used to discover the underlying semantic structure of documents. However, existing models often assume in the prior that the topic hierarchy is a tree structure, ignoring symmetric dependencies between topics at the same level. Moreover, the sparsity of text data often complicates the analysis. To address these issues, we propose NSEM-GMHTM, a deep topic model that adopts a Gaussian mixture prior distribution to improve its ability to adapt to sparse data, and that explicitly models hierarchical and symmetric relations between topics through dependency matrices and nonlinear structural equations. Experiments on widely used datasets show that our NSEM-GMHTM generates more coherent topics and a more rational topic structure when compared to state-of-the-art baselines. Our code is available at https://github.com/nbnbhwyy/NSEM-GMHTM.
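To make the two ideas in the abstract concrete, the sketch below illustrates (1) a Gaussian mixture prior over a document's latent topic vector and (2) a nonlinear structural equation layer that propagates topic activations through a learnable dependency matrix. This is a minimal, hypothetical PyTorch-style sketch for illustration only; the class names, the tanh nonlinearity, and the residual connection are assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianMixturePrior(nn.Module):
    """Mixture-of-Gaussians prior over a latent topic vector (illustrative)."""

    def __init__(self, n_components: int, latent_dim: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_components))            # mixture weights
        self.means = nn.Parameter(torch.randn(n_components, latent_dim) * 0.1)
        self.log_vars = nn.Parameter(torch.zeros(n_components, latent_dim))

    def sample(self, batch_size: int) -> torch.Tensor:
        # Pick a mixture component per document, then sample via reparameterization.
        comp = torch.distributions.Categorical(logits=self.logits).sample((batch_size,))
        mean = self.means[comp]
        std = torch.exp(0.5 * self.log_vars[comp])
        return mean + std * torch.randn_like(std)


class NonlinearSEMLayer(nn.Module):
    """One level of a topic hierarchy: topics at the same level influence each
    other through a learnable dependency matrix, followed by a nonlinearity."""

    def __init__(self, n_topics: int):
        super().__init__()
        self.dependency = nn.Parameter(torch.zeros(n_topics, n_topics))  # dependency matrix A

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Nonlinear structural equation (assumed form): z' = tanh(z A) + z.
        return torch.tanh(z @ self.dependency) + z


if __name__ == "__main__":
    prior = GaussianMixturePrior(n_components=5, latent_dim=32)
    sem = NonlinearSEMLayer(n_topics=32)
    z = prior.sample(batch_size=4)        # latent codes for 4 documents
    theta = F.softmax(sem(z), dim=-1)     # topic proportions at one level
    print(theta.shape)                    # torch.Size([4, 32])
```

In this sketch, the Gaussian mixture prior gives the latent space multiple modes, which is the stated motivation for handling sparse text, while the dependency matrix lets topics at the same level interact rather than being constrained to a strict tree.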