Given a sample corpus of biomedical text (such as pathology reports), this resource builds a long short-term memory (LSTM) model, a type of recurrent neural network (RNN), to automatically generate synthetic biomedical text with a desired clinical context. This resource addresses the challenge of collecting the labeled data, particularly clinical data, needed to create robust machine learning and deep learning models.
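The generator follows the standard character-level recurrent text-generation pattern: an LSTM reads a fixed-length window of characters and predicts the next one. The following Keras sketch illustrates that pattern; the layer size, window length, and vocabulary size are illustrative assumptions, not the settings used in the provided script.

```python
from tensorflow import keras
from tensorflow.keras import layers

maxlen = 40      # assumed window length (characters per training example)
vocab_size = 96  # assumed number of distinct characters in the corpus

model = keras.Sequential([
    # Read a one-hot encoded window of characters...
    layers.LSTM(128, input_shape=(maxlen, vocab_size)),
    # ...and output a probability distribution over the next character.
    layers.Dense(vocab_size, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
```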
This resource is intended for data scientists who want to generate more examples of unstructured text with a specific label from a given corpus, for use in training machine learning or deep learning models on clinical text.
Data scientists can train the provided untrained model on their own data or on the preprocessed clinical pathology reports included with this resource, which come from the Surveillance, Epidemiology, and End Results (SEER) Program.
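To train on your own corpus, the raw text must first be turned into supervised examples. A common approach, sketched below under the assumption of a plain-text corpus file (the file name here is hypothetical), slides a fixed-length window over the text and one-hot encodes each character:

```python
import numpy as np

text = open("pathology_reports.txt").read()  # hypothetical corpus file
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

maxlen, step = 40, 3  # window length and stride (illustrative values)
windows, next_chars = [], []
for i in range(0, len(text) - maxlen, step):
    windows.append(text[i:i + maxlen])   # input: a 40-character window
    next_chars.append(text[i + maxlen])  # target: the character that follows it

# One-hot encode inputs and targets.
x = np.zeros((len(windows), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(windows), len(chars)), dtype=bool)
for i, window in enumerate(windows):
    for t, c in enumerate(window):
        x[i, t, char_to_idx[c]] = 1
    y[i, char_to_idx[next_chars[i]]] = 1

# model.fit(x, y, batch_size=128, epochs=10)  # train the model sketched above
```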
To use this resource, users must be familiar with natural language processing (NLP) and training neural networks, specifically RNNs.
Text generation is a well-studied problem in the natural language processing community. The authors of the model architecture provided in this resource tested the model on unstructured clinical data from SEER. Data scientists can further optimize this architecture using the CANcer Distributed Learning Environment (CANDLE).
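Once trained, such a model is typically sampled one character at a time, with a temperature parameter trading off fidelity against diversity. This is a generic sketch of that sampling loop, not necessarily how p3b2_baseline_keras2.py generates text:

```python
import numpy as np

def sample(preds, temperature=1.0):
    """Draw one character index from the model's softmax output."""
    preds = np.asarray(preds, dtype="float64")
    preds = np.log(preds + 1e-8) / temperature  # rescale log-probabilities
    probs = np.exp(preds) / np.sum(np.exp(preds))
    return np.random.choice(len(probs), p=probs)

def generate(model, seed, char_to_idx, idx_to_char, length=200, temperature=0.5):
    """Extend a seed string (assumed to be at least maxlen characters long)."""
    maxlen, vocab = model.input_shape[1], model.input_shape[2]
    generated = seed
    for _ in range(length):
        # One-hot encode the most recent window and predict the next character.
        x = np.zeros((1, maxlen, vocab))
        for t, c in enumerate(generated[-maxlen:]):
            x[0, t, char_to_idx[c]] = 1
        idx = sample(model.predict(x, verbose=0)[0], temperature)
        generated += idx_to_char[idx]
    return generated
```

Lower temperatures yield more conservative, repetitive text; higher temperatures yield more varied but noisier output.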
- Script to train an LSTM-based model: p3b2_baseline_keras2.py
- Data: The preprocessed training and test sets of SEER clinical pathology reports are in the LSTM-based Clinical Text Generator asset in the Model and Data Clearinghouse (MoDaC).
Refer to this README.