
Improving Centroid-Based Text Summarization through LDA-based Document Centroids

Automatic text summarization is the task of producing a text summary "from one or more texts, that conveys important information in the original text(s), and that is no longer than half of the original text(s) and usually, significantly less than that" (Radev and McKeown, 2002). We adapt a recent centroid-based text summarization model that takes advantage of the compositionality of word embeddings to obtain a single vector representation of the most meaningful words in a given text. We propose using Latent Dirichlet Allocation (LDA), a probabilistic generative model for collections of discrete data, to better select the topic words of a document used to construct the centroid vector. We find that the LDA-based implementation produces overall more coherent summaries, suggesting the potential of topic models for improving the general centroid-based method.
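
In outline, the approach fits a topic model to a document, takes the highest-probability topic words, and averages their word2vec embeddings to form the document centroid. The snippet below is a minimal sketch of that idea, assuming gensim, numpy, and a loaded set of pretrained word vectors; the function name lda_centroid and the num_topics/topn parameters are illustrative rather than this repository's actual API.

```python
# Minimal sketch: build a document centroid from LDA topic words.
# Assumes gensim, numpy, and a loaded word2vec KeyedVectors model.
import numpy as np
from gensim import corpora
from gensim.models import LdaModel


def lda_centroid(doc_tokens, word_vectors, num_topics=1, topn=30):
    """Average the embeddings of the top LDA topic words of a single document."""
    dictionary = corpora.Dictionary([doc_tokens])
    bow_corpus = [dictionary.doc2bow(doc_tokens)]
    lda = LdaModel(bow_corpus, id2word=dictionary, num_topics=num_topics, passes=10)

    # Gather the highest-probability words from each topic.
    topic_words = set()
    for topic_id in range(num_topics):
        topic_words.update(w for w, _ in lda.show_topic(topic_id, topn=topn))

    # Keep only words covered by the embedding vocabulary, then average.
    vecs = [word_vectors[w] for w in topic_words if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(word_vectors.vector_size)
```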

Our paper:

This work is based on:

Running the Code

  1. Download the pretrained Google News word2vec vectors from https://github.com/mmihaltz/word2vec-GoogleNews-vectors and place them in the data_clean folder.
  2. Copy all directories from duc2004\testdata\tasks1and2\t1.2\docs (the DUC data is not distributed in this repo due to licensing restrictions) to data_raw/articles
  3. Move the files from duc2004\results\ROUGE\eval\peers\2 to data_raw/summaries
  4. Run data_raw/import_corpus.py
  5. Copy data_raw/corpus.pkl to cloned_summarizer/text_summarizer
  6. The models are available in src. Example experiments are available in Evaluate_DUC.ipynb. A short sanity check for loading the data is sketched after this list.
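
After these steps, a quick check along the following lines confirms that the vectors and the pickled corpus load correctly. The paths are assumed from steps 1 and 5 and may not match your local layout exactly.

```python
# Sanity-check sketch: load the pretrained vectors and the pickled DUC corpus.
import pickle
from gensim.models import KeyedVectors

# Filename as distributed in the word2vec-GoogleNews-vectors repository.
word_vectors = KeyedVectors.load_word2vec_format(
    "data_clean/GoogleNews-vectors-negative300.bin.gz", binary=True)

with open("cloned_summarizer/text_summarizer/corpus.pkl", "rb") as f:
    corpus = pickle.load(f)

print(word_vectors.vector_size)  # 300-dimensional embeddings
print(type(corpus))              # structure produced by import_corpus.py
```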

Centroid Embeddings:

Our Proposed Change:

Sentence embeddings:

Centroid-sentence similarity:

Selection algorithm:
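
The sections above correspond to the standard centroid-based pipeline: embed each sentence by composing its word vectors, score sentences by cosine similarity to the centroid, and greedily select high-scoring, non-redundant sentences up to a length budget. The sketch below illustrates those pieces; the thresholds, word-count limit, and function names are placeholders rather than this repository's exact implementation, and centroid and word_vectors are assumed to come from the earlier sketches.

```python
# Sketch of sentence embedding, centroid-sentence similarity, and greedy selection.
import numpy as np


def embed_sentence(tokens, word_vectors):
    """Compose a sentence embedding as the sum of its word embeddings."""
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    return np.sum(vecs, axis=0) if vecs else np.zeros(word_vectors.vector_size)


def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0


def select_sentences(sentences, centroid, word_vectors, limit=100, sim_threshold=0.95):
    """Rank sentences by similarity to the centroid, skip near-duplicates,
    and stop once a word-count budget is reached."""
    embedded = [(s, embed_sentence(s.split(), word_vectors)) for s in sentences]
    ranked = sorted(embedded, key=lambda se: cosine(se[1], centroid), reverse=True)

    summary, chosen_vecs, length = [], [], 0
    for sent, vec in ranked:
        # Redundancy check against already-selected sentences.
        if any(cosine(vec, v) > sim_threshold for v in chosen_vecs):
            continue
        summary.append(sent)
        chosen_vecs.append(vec)
        length += len(sent.split())
        if length >= limit:
            break
    return summary
```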

ROUGE:
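
Summaries are scored against the DUC reference summaries with ROUGE. One common way to compute it in Python is the rouge-score package, shown below as a suggestion; it is not necessarily the exact tooling used in Evaluate_DUC.ipynb, and the example strings are placeholders.

```python
# Example: scoring a system summary against a reference summary with rouge-score.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    "the reference summary text",  # gold DUC summary
    "the system summary text",     # output of the selection step
)
print(scores["rouge1"].fmeasure, scores["rouge2"].fmeasure)
```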