Update licensed docs (#13405)
* Added links to Python API

* Updated licensed utility and helpers docs

* Added ModelTracer to utility and helper page

* Added AnnotationMerger page

* Added BertSentenceChunkEmbeddings page

* Added ChunkMapper and ChunkConverter pages

* Added DateNormalizer page

* Added ChunkMapperFilterer page

* Added ChunkMapperFilterer page

* Added Doc2ChunkInternal page

* Added DocumentHashCoder page

* Added ZeroShotNerModel page

* Added ZeroShotRelationExtractionModel page

* Fix Python API link for CoNLL dataset page

* Added ChunkMapperFilterer page

* Added ChunkSentenceSplitter page

* Added ChunkSentenceSplitter page

* Added AssertionChunkConverter page

* Add Python API link to license annotator template

* Added Risk Adjustments Score Calculation page

* Added missing parameters on Python code

* Added .vscode to gitignore

* Updated licensed annotators docs

* Added ChunkEntityResolver page

---------

Co-authored-by: Christian Kasim Loan <christian.kasim.loan@gmail.com>
Co-authored-by: Maziyar Panahi <maziyar.panahi@iscpif.fr>
3 people authored Feb 15, 2023
1 parent c173bf3 commit 1019eab
Showing 7 changed files with 360 additions and 10 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -332,3 +332,6 @@ src/*/resources/*.classes
.bsp/sbt.json
python/docs/_build/**
python/docs/reference/_autosummary/**

# Visual Studio Code
**/.vscode/
6 changes: 6 additions & 0 deletions docs/en/licensed_annotator_entries/AssertionChunkConverter.md
@@ -10,6 +10,12 @@ model

This annotator creates a `CHUNK` column with metadata useful for training an Assertion Status Detection model (see [AssertionDL](https://nlp.johnsnowlabs.com/docs/en/licensed_annotators#assertiondl)).

In some cases, there may be issues while creating the chunk column from token indices, which can lead to loss of data when training assertion status models.

The `AssertionChunkConverter` annotator takes both the begin and end indices of the tokens as input and adds more robust metadata to the chunk column, improving the reliability of the indices and avoiding data loss. A minimal pipeline sketch is shown below.

> *NOTE*: Chunk begin and end indices in the assertion status model training dataframe can be populated using the new version of the ALAB module.
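
A minimal sketch of how the converter can be wired into a pipeline (the chunk-text and index column names, e.g. `target`, `char_begin`, `char_end`, are illustrative assumptions and should match your training dataframe):

```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer
from sparknlp_jsl.annotator import AssertionChunkConverter
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

sentence_detector = SentenceDetector()\
    .setInputCols(["document"])\
    .setOutputCol("sentence")

tokenizer = Tokenizer()\
    .setInputCols(["sentence"])\
    .setOutputCol("tokens")

# Reads the chunk text and its character-level begin/end columns, and
# emits a CHUNK column with token-level begin/end indices in the metadata.
converter = AssertionChunkConverter()\
    .setInputCols("tokens")\
    .setChunkTextCol("target")\
    .setChunkBeginCol("char_begin")\
    .setChunkEndCol("char_end")\
    .setOutputTokenBeginCol("token_begin")\
    .setOutputTokenEndCol("token_end")\
    .setOutputCol("chunk")

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, converter])
```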
{%- endcapture -%}

{%- capture model_input_anno -%}
217 changes: 217 additions & 0 deletions docs/en/licensed_annotator_entries/EntityChunkEmbeddings.md
@@ -0,0 +1,217 @@
{%- capture title -%}
EntityChunkEmbeddings
{%- endcapture -%}

{%- capture model -%}
model
{%- endcapture -%}

{%- capture model_description -%}
Weighted average embeddings of multiple named entity chunk annotations.

`EntityChunkEmbeddings` uses BERT sentence embeddings to compute a weighted average vector representation of related entity chunks. The input to the model consists of chunks of recognized named entities. One or more entities are selected as target entities, and for each of them a list of related entities is specified (if the list is empty, all other entities are assumed to be related).

The model looks for chunks of the target entities and then tries to pair each target entity (e.g. DRUG) with other related entities (e.g. DOSAGE, STRENGTH, FORM, etc.). The criterion for pairing a target entity with a related entity is that they appear in the same sentence and that the maximal syntactic distance between them is below a predefined threshold.

The relationship between target and related entities is one-to-many, meaning that if there are multiple instances of the same target entity (e.g. DRUG) within a sentence, the model will map a related entity (e.g. DOSAGE) to at most one of the instances of the target entity. For example, given the sentence "The patient was given 125 mg of paracetamol and metformin", the model will pair "125 mg" with "paracetamol", but not with "metformin".

The output of the model is the average embedding of the chunks of each target entity and its related entities. It is possible to specify a particular weight for each entity type.

An entity can be defined both as a target entity and as a related entity of some other target entity. For example, we may want to compute the embeddings of SYMPTOMs and their related entities, as well as the embeddings of DRUGs and their related entities, one of which is also SYMPTOM. In such cases, it is possible to use the TARGET_ENTITY:RELATED_ENTITY notation to specify the weight of a related entity (e.g. "DRUG:SYMPTOM" to set the weight of SYMPTOM when it appears as a related entity of the target entity DRUG). The relative weights of entities for particular entity chunk embeddings are available in the annotations metadata.

This model is a subclass of `BertSentenceEmbeddings` and shares all parameters
with it. It can load any pretrained `BertSentenceEmbeddings` model.

The default model is `"sbiobert_base_cased_mli"` from `clinical/models`.
Other available models can be found at [Models Hub](https://nlp.johnsnowlabs.com/models?task=Embeddings).
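
As a sketch of this weight notation, the configuration below gives SYMPTOM a weight of 0.5 only when it appears as a related entity of a DRUG target (the concrete labels and weight values are illustrative):

```python
from sparknlp_jsl.annotator import EntityChunkEmbeddings

# Both DRUG and SYMPTOM are targets; the "DRUG:SYMPTOM" key weights
# SYMPTOM chunks only when they are related to a DRUG target chunk.
entity_chunk_embeddings = EntityChunkEmbeddings.pretrained("sbiobert_base_cased_mli", "en", "clinical/models") \
    .setInputCols(["ner_chunks", "dependencies"]) \
    .setOutputCol("entity_chunk_embeddings") \
    .setTargetEntities({"DRUG": [], "SYMPTOM": []}) \
    .setEntityWeights({"DRUG": 0.8, "SYMPTOM": 0.8, "DRUG:SYMPTOM": 0.5})
```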

{%- endcapture -%}

{%- capture model_input_anno -%}
DEPENDENCY, CHUNK
{%- endcapture -%}

{%- capture model_output_anno -%}
SENTENCE_EMBEDDINGS
{%- endcapture -%}

{%- capture model_python_medical -%}
import sparknlp
from sparknlp.base import *
from sparknlp_jsl.common import *
from sparknlp.annotator import *
from sparknlp.training import *
import sparknlp_jsl
from sparknlp_jsl.base import *
from sparknlp_jsl.annotator import *
from pyspark.ml import Pipeline

documenter = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("documents")
sentence_detector = SentenceDetector() \
.setInputCols("documents") \
.setOutputCol("sentences")
tokenizer = Tokenizer() \
.setInputCols("sentences") \
.setOutputCol("tokens")
embeddings = WordEmbeddingsModel() \
.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
ner_model = MedicalNerModel()\
.pretrained("ner_posology_large", "en", "clinical/models")\
.setInputCols(["sentences", "tokens", "embeddings"])\
.setOutputCol("ner")
ner_converter = NerConverterInternal()\
.setInputCols("sentences", "tokens", "ner")\
.setOutputCol("ner_chunks")
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models")\
.setInputCols("sentences", "tokens")\
.setOutputCol("pos_tags")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
drug_chunk_embeddings = EntityChunkEmbeddings()\
.pretrained("sbiobert_base_cased_mli","en","clinical/models")\
.setInputCols(["ner_chunks", "dependencies"])\
.setOutputCol("drug_chunk_embeddings")\
.setMaxSyntacticDistance(3)\
.setTargetEntities({"DRUG": []})\
.setEntityWeights({"DRUG": 0.8, "STRENGTH": 0.2, "DOSAGE": 0.2, "FORM": 0.5})
sampleData = "The patient was given metformin 125 mg, 250 mg of coumadin and then one pill paracetamol."
data = spark.createDataFrame([[sampleData]]).toDF("text")
pipeline = Pipeline().setStages([
documenter,
sentence_detector,
tokenizer,
embeddings,
ner_model,
ner_converter,
pos_tagger,
dependency_parser,
drug_chunk_embeddings])
results = pipeline.fit(data).transform(data)
results = results \
.selectExpr("explode(drug_chunk_embeddings) AS drug_chunk") \
.selectExpr("drug_chunk.result", "slice(drug_chunk.embeddings, 1, 5) AS drug_embedding") \
.cache()
results.show(truncate=False)
+-----------------------------+-----------------------------------------------------------------+
| result| drug_embedding|
+-----------------------------+-----------------------------------------------------------------+
|metformin 125 mg |[-0.267413, 0.07614058, -0.5620966, 0.83838946, 0.8911504] |
|250 mg coumadin |[0.22319649, -0.07094894, -0.6885556, 0.79176235, 0.82672405] |
|one pill paracetamol |[-0.10939768, -0.29242, -0.3574444, 0.3981813, 0.79609615] |
+-----------------------------+-----------------------------------------------------------------+
{%- endcapture -%}

{%- capture model_scala_medical -%}
import spark.implicits._
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.SentenceDetector
import com.johnsnowlabs.nlp.annotators.parser.dep.DependencyParserModel
import com.johnsnowlabs.nlp.annotators.pos.perceptron.PerceptronModel
import com.johnsnowlabs.nlp.annotators.ner.{MedicalNerModel, NerConverterInternal}
import com.johnsnowlabs.nlp.annotators.embeddings.EntityChunkEmbeddings
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
.setInputCol("text")
.setOutputCol("document")

val sentenceDetector = new SentenceDetector()
.setInputCols("document")
.setOutputCol("sentence")

val tokenizer = new Tokenizer()
.setInputCols("sentence")
.setOutputCol("tokens")

val wordEmbeddings = WordEmbeddingsModel
.pretrained("embeddings_clinical", "en", "clinical/models")
.setInputCols(Array("sentences", "tokens"))
.setOutputCol("word_embeddings")

val nerModel = MedicalNerModel
.pretrained("ner_posology_large", "en", "clinical/models")
.setInputCols(Array("sentence", "tokens", "word_embeddings"))
.setOutputCol("ner")

val nerConverter = new NerConverterInternal()
.setInputCols("sentence", "tokens", "ner")
.setOutputCol("ner_chunk")

val posTagger = PerceptronModel
.pretrained("pos_clinical", "en", "clinical/models")
.setInputCols("sentence", "tokens")
.setOutputCol("pos_tags")

val dependencyParser = DependencyParserModel
.pretrained("dependency_conllu", "en")
.setInputCols(Array("sentence", "pos_tags", "tokens"))
.setOutputCol("dependencies")
.setOutputCol("dependencies")

val drugChunkEmbeddings = EntityChunkEmbeddings
.pretrained("sbiobert_base_cased_mli","en","clinical/models")
.setInputCols(Array("ner_chunks", "dependencies"))
.setOutputCol("drug_chunk_embeddings")
.setMaxSyntacticDistance(3)
.setTargetEntities(Map("DRUG" -> List()))
.setEntityWeights(Map[String, Float]("DRUG" -> 0.8f, "STRENGTH" -> 0.2f, "DOSAGE" -> 0.2f, "FORM" -> 0.5f))

val pipeline = new Pipeline()
.setStages(Array(
documentAssembler,
sentenceDetector,
tokenizer,
wordEmbeddings,
nerModel,
nerConverter,
posTagger,
dependencyParser,
drugChunkEmbeddings))

val sampleText = "The patient was given metformin 125 mg, 250 mg of coumadin and then one pill paracetamol."

val emptyDataset = Seq("").toDS.toDF("text")
val testDataset = Seq(sampleText).toDS.toDF("text")
val result = pipeline.fit(emptyDataset).transform(testDataset)

result
.selectExpr("explode(drug_chunk_embeddings) AS drug_chunk")
.selectExpr("drug_chunk.result", "slice(drug_chunk.embeddings, 1, 5) AS drugEmbedding")
.show(truncate=false)

+-----------------------------+-----------------------------------------------------------------+
| result| drugEmbedding|
+-----------------------------+-----------------------------------------------------------------+
|metformin 125 mg |[-0.267413, 0.07614058, -0.5620966, 0.83838946, 0.8911504] |
|250 mg coumadin |[0.22319649, -0.07094894, -0.6885556, 0.79176235, 0.82672405] |
|one pill paracetamol |[-0.10939768, -0.29242, -0.3574444, 0.3981813, 0.79609615] |
+-----------------------------+-----------------------------------------------------------------+

{%- endcapture -%}

{%- capture model_api_link -%}
[EntityChunkEmbeddings](https://nlp.johnsnowlabs.com/licensed/api/com/johnsnowlabs/nlp/annotators/embeddings/EntityChunkEmbeddings.html)
{%- endcapture -%}


{%- capture model_python_api_link -%}
[EntityChunkEmbeddings](https://nlp.johnsnowlabs.com/licensed/api/python/reference/autosummary/sparknlp_jsl/annotator/embeddings/entity_chunk_embeddings/index.html#sparknlp_jsl.annotator.embeddings.entity_chunk_embeddings.EntityChunkEmbeddings)
{%- endcapture -%}


{% include templates/licensed_approach_model_medical_fin_leg_template.md
title=title
model=model
model_description=model_description
model_input_anno=model_input_anno
model_output_anno=model_output_anno
model_python_medical=model_python_medical
model_scala_medical=model_scala_medical
model_api_link=model_api_link
model_python_api_link=model_python_api_link
%}
5 changes: 4 additions & 1 deletion docs/en/licensed_annotator_entries/NerConverterInternal.md
@@ -9,7 +9,10 @@ model
{%- capture model_description -%}
Converts an IOB or IOB2 representation of NER to a user-friendly one,
by associating the tokens of recognized entities and their label.
Chunks with no associated entity (tagged "O") are filtered out.

This licensed annotator adds extra functionality to the open-source version, providing the parameters `blackList`, `greedyMode`, `threshold`, and `ignoreStopWords`, which are not available in the [NerConverter](https://nlp.johnsnowlabs.com/docs/en/annotators#nerconverter) annotator (see the sketch below).

See also [Inside–outside–beginning (tagging)](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) for more information.
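
A minimal sketch of these extra parameters in use (the blacklisted label, threshold value, and stop-word list are illustrative assumptions):

```python
from sparknlp_jsl.annotator import NerConverterInternal

# Drop PATIENT chunks, merge contiguous same-entity tokens (greedy mode),
# keep only predictions above a 0.7 confidence threshold, and ignore the
# listed stop words when assembling chunks.
ner_converter = NerConverterInternal() \
    .setInputCols(["sentence", "token", "ner"]) \
    .setOutputCol("ner_chunk") \
    .setBlackList(["PATIENT"]) \
    .setGreedyMode(True) \
    .setThreshold(0.7) \
    .setIgnoreStopWords(["the", "of"])
```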
{%- endcapture -%}

10 changes: 8 additions & 2 deletions docs/en/licensed_annotator_entries/RENerChunksFilter.md
@@ -7,8 +7,14 @@ model
{%- endcapture -%}

{%- capture model_description -%}
Filters entities' dependency relations.

The annotator filters desired relation pairs (defined by the parameter `relationPairs`) and stores them in the output column.

Filtering the possible relations can be useful to perform additional analysis for a specific use case (e.g., checking adverse drug reactions and drug relations), and the filtered relations can serve as input for further analysis using a pretrained `RelationExtractionDLModel`.

For example, the [ner_clinical](https://nlp.johnsnowlabs.com/2021/03/31/ner_clinical_en.html) NER model can identify `PROBLEM`, `TEST`, and `TREATMENT` entities. By using the `RENerChunksFilter`, one can keep only the relations between `PROBLEM` and `TREATMENT` entities, removing relations between any other entities, to further analyze the associations between clinical problems and treatments.
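
A minimal sketch of this filtering step (the input column names and the maximal syntactic distance are illustrative; the `problem-treatment` pair follows the entity labels above):

```python
from sparknlp_jsl.annotator import RENerChunksFilter

# Keep only PROBLEM-TREATMENT pairs whose chunks are within 4 dependency
# arcs of each other; all other entity pair combinations are dropped.
re_ner_chunks_filter = RENerChunksFilter() \
    .setInputCols(["ner_chunks", "dependencies"]) \
    .setOutputCol("re_ner_chunks") \
    .setMaxSyntacticDistance(4) \
    .setRelationPairs(["problem-treatment"])
```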

{%- endcapture -%}

{%- capture model_input_anno -%}
14 changes: 9 additions & 5 deletions docs/en/licensed_annotator_entries/RelationExtraction.md
@@ -11,11 +11,11 @@ model
{%- endcapture -%}

{%- capture model_description -%}
Extracts and classifies instances of relations between named entities.

For available pretrained models, please see the
[Models Hub](https://nlp.johnsnowlabs.com/models?task=Relation+Extraction).
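
As a sketch, loading one of these pretrained models and restricting it to specific entity pairs could look like the following (the model name `re_clinical` and the `problem-treatment` pair are assumptions based on the Models Hub):

```python
from sparknlp_jsl.annotator import RelationExtractionModel

# Load a pretrained clinical relation extraction model and score only
# relations between PROBLEM and TREATMENT chunks.
re_model = RelationExtractionModel.pretrained("re_clinical", "en", "clinical/models") \
    .setInputCols(["embeddings", "pos_tags", "ner_chunks", "dependencies"]) \
    .setOutputCol("relations") \
    .setMaxSyntacticDistance(4) \
    .setRelationPairs(["problem-treatment"])
```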

{%- endcapture -%}

{%- capture model_input_anno -%}
@@ -220,9 +220,13 @@ val result = pipeline.fit(data).transform(data)
{%- endcapture -%}

{%- capture approach_description -%}
Trains a TensorFlow model for relation extraction.

To train a custom relation extraction model, you first need to create a TensorFlow graph using either the `TfGraphBuilder` annotator or the `tf_graph` module. Then, set the path to the TensorFlow graph using the method `.setModelFile("path/to/tensorflow_graph.pb")`.

If the parameter `relationDirectionCol` is set, the model is trained using the direction information (see the parameter description for details). Otherwise, the trained model does not take the direction of the relation between entities into account.

After training the model (using the `.fit()` method), the resulting object is of class `RelationExtractionModel`.
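
A minimal training sketch under these assumptions (the graph file path, the `rel` label column, and the entity begin/end/label column names follow a typical annotated training dataframe and are illustrative):

```python
from sparknlp_jsl.annotator import RelationExtractionApproach

# Train from word embeddings, POS tags, NER chunks, and dependency arcs;
# the "rel" column holds the relation labels of the training dataframe.
re_approach = RelationExtractionApproach() \
    .setInputCols(["embeddings", "pos_tags", "train_ner_chunks", "dependencies"]) \
    .setOutputCol("relations") \
    .setLabelColumn("rel") \
    .setModelFile("/path/to/RE_graph.pb") \
    .setEpochsNumber(50) \
    .setBatchSize(200) \
    .setLearningRate(0.001) \
    .setFixImbalance(True) \
    .setFromEntity("begin1i", "end1i", "label1") \
    .setToEntity("begin2i", "end2i", "label2")

# Fitting produces a RelationExtractionModel:
# re_model = re_approach.fit(training_data)
```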
{%- endcapture -%}

{%- capture approach_input_anno -%}
